Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread Pascal J. Bourguignon
Toby Schachman  writes:

> This half hour talk from Zed Shaw is making rounds,
> https://vimeo.com/43380467
>
> The first half is typical complaints about broken w3 standards and
> processes. The second half is his own observations on the difficulties
> of teaching OOP. He then suggests that OOP is an unnatural programming
> paradigm and that the problems of the web stem from the problems of
> OOP.
>
> My take:
>
> I agree with Zed that the dominant "OOP" view of reality no longer
> serves us for creating the systems of the future. I imagine that this
> will be a hard pill to swallow because we (programmers) have
> internalized this way of looking at the world and it is hard to *see*
> anything in the world that doesn't fit our patterns.
>
> The hint for me comes at 22 minutes in to the video. Zed mentions
> OOP's mismatch with relational databases and its emphasis on
> request-response modes of communication. Philosophically, OOP
> encourages hierarchy. Its unidirectional references encourage trees.
> Request-response encourages centralized control (the programmer has to
> choose which object is "in charge"). Ted Nelson also complains about
> hierarchical vs. relational topologies with respect to the web's
> historical development, particularly unidirectional links.

This is wrong.

The request-response mode comes from mapping the notion of message
sending onto the low-level notion of calling a subroutine.  Unidirectional
references come from mapping the notion of association onto the
low-level notion of a pointer.

But those mappings are not inherent to OOP, and aren't even necessarily
promoted by an OO programming language.

And even if they may seem natural at first in common OO programming
languages, it's easy to avoid using them.

For example, request-response is NOT the paradigm used in OpenStep
(Cocoa) programming.  When a GUI object is edited, it may send a message
to a controller object, but it does not send the new value.  Instead, it
sends itself, and lets the controller ask for the new value later.
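Pascal's Cocoa example can be sketched in a few lines of Python; the class and method names here are illustrative analogues, not actual OpenStep API. The edited widget does not push its new value to the controller; it sends *itself*, and the controller pulls the value when it wants it:

```python
class TextField:
    def __init__(self, controller):
        self.controller = controller
        self._value = ""

    def user_types(self, text):
        self._value = text
        # Notify: send ourselves, not the new value.
        self.controller.control_did_change(self)

    def string_value(self):
        return self._value

class Controller:
    def __init__(self):
        self.last_seen = None

    def control_did_change(self, sender):
        # Pull the value later, at our own convenience.
        self.last_seen = sender.string_value()

controller = Controller()
field = TextField(controller)
field.user_types("hello")
print(controller.last_seen)  # hello
```

The point of the design is that the sender makes no assumption about what, or when, the receiver wants to know.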

Some languages implement message sending as an asynchronous operation
natively.



So it seems to me it's a case of mistaking a tree for the forest.



> I've been reading (and rereading) Sutherland's 1963 Sketchpad thesis (
> http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf ) and it
> strikes me that philosophically it is founded on *relationships*
> rather than *hierarchy*. Internally, references are always stored
> bi-directionally. It presents the user with a conceptual model based
> on creating constraints (i.e. relationships) between shapes.

Indeed.
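As a concrete illustration of the bidirectional storage Toby describes, here is a toy Python sketch (not Sutherland's actual data structures): creating a constraint registers it on both participants, so either side can enumerate its relationships.

```python
class Shape:
    def __init__(self, name):
        self.name = name
        self.constraints = []

class Constraint:
    def __init__(self, kind, a, b):
        self.kind, self.a, self.b = kind, a, b
        # Store the reference on BOTH participants, not just one.
        a.constraints.append(self)
        b.constraints.append(self)

p = Shape("point")
l = Shape("line")
Constraint("point-on-line", p, l)

# Either side can find the relationship.
print([c.kind for c in p.constraints])  # ['point-on-line']
print([c.kind for c in l.constraints])  # ['point-on-line']
```

Contrast this with the usual unidirectional pointer, where only one of the two objects would know the relationship exists.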


> Chapter 7 has been particularly hard for me to grok because his
> "recursive merging" has no good analogue in OOP inheritance strategies
> as far as I know. Here he takes a structure A (a network of things and
> their relationships) and merges them onto another structure B by
> specifically associating certain things in A with things in B. This
> operation creates new relationships in structure B, corresponding to
> the analogous relationships in structure A. Inheritance by analogy.
>
> He claims to get quite a bit of leverage from this strategy.
>
> Toby
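Toby's description of recursive merging can be approximated in a few lines of Python. This is a loose sketch of the idea (associate some things in A with things in B, then copy the analogous relationships over), not a reconstruction of Sketchpad's actual Chapter 7 machinery:

```python
def merge(source_rels, mapping):
    """source_rels: list of (kind, a, b) relationships in structure A.
    mapping: dict associating source nodes with target nodes in B.
    Returns the analogous relationships among the target nodes."""
    return [(kind, mapping[a], mapping[b])
            for kind, a, b in source_rels
            if a in mapping and b in mapping]

# Structure A: a template relating two abstract points.
template = [("same-distance", "p1", "p2")]

# Merge onto structure B by associating template points with concrete ones.
new_rels = merge(template, {"p1": "corner1", "p2": "corner2"})
print(new_rels)  # [('same-distance', 'corner1', 'corner2')]
```

The "inheritance by analogy" leverage comes from reusing one network of relationships as a template for many concrete structures.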

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread Miles Fidelman

Toby Schachman wrote:

This half hour talk from Zed Shaw is making rounds,
https://vimeo.com/43380467

The first half is typical complaints about broken w3 standards and
processes. The second half is his own observations on the difficulties
of teaching OOP. He then suggests that OOP is an unnatural programming
paradigm and that the problems of the web stem from the problems of
OOP.

To echo Pascal Bourguignon's sentiment, "this is all just wrong."

First off, while I personally am not a big fan of OOP, it seems to me
that HyperCard was an existence proof that OOP is an incredibly natural
programming paradigm when it's packaged well.


Re. the problems of the web stemming from problems of OOP: other than
Java servlets, where is OOP much in evidence on the web?


If anything, the web is a big implementation of the Actor paradigm.  Web
servers, by and large, listen for messages and respond to incoming
messages by spawning a process that generates a response, then
terminates.  That doesn't seem very object-oriented to me, particularly
when the code takes the form of Perl or PHP.
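The actor-like shape Miles describes -- listen, spawn a short-lived worker per message, terminate -- can be sketched with a queue and throwaway threads. A toy model for illustration, not a real web server:

```python
import queue
import threading

inbox = queue.Queue()
replies = queue.Queue()

def serve_one(request):
    # The spawned short-lived "process": compute a response, send it, die.
    replies.put("response to " + request)

def listener():
    # The server: wait for messages, spawn a worker per message.
    while True:
        request = inbox.get()
        if request is None:          # shutdown sentinel
            break
        threading.Thread(target=serve_one, args=(request,)).start()

t = threading.Thread(target=listener)
t.start()
inbox.put("GET /")
inbox.put(None)
t.join()

resp = replies.get()  # blocks until the worker has replied
print(resp)  # response to GET /
```

Nothing here has classes with inheritance or long-lived object identity; the "objects" are just processes reacting to messages.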





--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread Toby Schachman
On Sat, Jun 9, 2012 at 8:48 PM, Pascal J. Bourguignon
 wrote:
> Request-responses mode comes from the mapping of the notion of message
> sending to the low-level notion of calling a subroutine.  Unidirectional
> references comes from the mapping of the notion of association to the
> low-level notion of pointer.
>
> But those mappings are not inherent of OOP, and aren't even necessarily
> promoted by a OO programming language.
>
> And even if they may seem at first natural with common OO programming
> languages, it's easy to avoid using them.

I agree that these are not necessary or intended implications of OOP.
My comments used "OOP" to refer to the paradigm that I see used and
implemented in common practice.

Message passing does not necessitate a conceptual dependence on
request-response communication. Yet most code I see in the wild uses
this pattern. A hierarchy of control is set up. An object will request
information from its subordinate objects, do some coordination, send
commands back down, or report back to its superior. Centralized
control seems to be the only way we can keep track of things. I rarely
see an OO program where there is a "community" of objects who are all
sending messages to each other and it's conceptually ambiguous which
object is "in control" of the overall system's behavior.

As for bidirectional references, I see them used for specialized
subsystems (implemented by specialists) such as relational databases
and publish-subscribe servers. But they tend to be tricky to set up in
OOP settings. Yet, as I bring up in the Sketchpad remarks, I believe
they have a lot of potential when they can be used as a fundamental
primitive of a programming paradigm.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread Igor Stasenko
While I agree with the guy's bashing of HTTP,
the second part of his talk is complete bullshit.

He mentions a kind of 'signal processing' paradigm,
but we already have it: message passing.
Before I learned Smalltalk, I also thought that OOP was about
structures and hierarchies, inheritance, and all this
private/public/etc. bullshit.
After I learned Smalltalk, I know that OOP is about message
passing.  Just that.  Period.
And no other implications: the hierarchies and structures are
implementation-specific, i.e. they are one way an object may handle a
message, but the handling can be completely arbitrary.
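Igor's point -- that OOP is message passing, and how an object handles a message is entirely up to the object -- can be sketched without any class hierarchy at all. An "object" here is just a closure with a handler that dispatches on the message however it likes (the fallback echoes Smalltalk's doesNotUnderstand:):

```python
def make_counter():
    state = {"n": 0}
    def handle(message, *args):
        # The object decides, arbitrarily, what each message means.
        if message == "increment":
            state["n"] += args[0] if args else 1
        elif message == "value":
            return state["n"]
        else:
            return "does not understand: " + message
    return handle

counter = make_counter()
counter("increment")
counter("increment", 4)
print(counter("value"))  # 5
print(counter("reset"))  # does not understand: reset
```

No classes, no inheritance, no visibility modifiers: just a receiver and messages.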

I think it is indeed the industry's fault for being unable to grasp the
simple and basic idea of message passing, replacing it with horrible
crutches and tons of additional concepts, which make it hard for people
to learn (and therefore be effective with) OOP.

-- 
Best regards,
Igor Stasenko.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread BGB

On 6/9/2012 9:28 PM, Igor Stasenko wrote:

While I agree with the guy's bashing of HTTP,
the second part of his talk is complete bullshit.


IMO, he did raise some valid objections regarding JS and similar,
though.

These are also yet more areas where BS differs from JS: it uses
different semantics for "==" and "===" (in BS, "==" compares compatible
types by value, and "===" compares values by identity).



granted, yes, bashing OO isn't really called for, at least absent more
specific criticisms.


for example, I am not necessarily a fan of class/instance OO or deeply
nested class hierarchies, but I really do like having "objects" to hold
things like fields and methods; I just don't necessarily like it being a
single form with a single point of definition, ...

would this mean I am "for" or "against" OO?...

I had before been accused of being anti-OO because I had asserted that,
rather than making deeply nested class hierarchies, a person could
instead use some interfaces.

the problem is partly that "OO" often means one thing to one person and
something different to someone else.




He mentions a kind of 'signal processing' paradigm,
but we already have it: message passing.
Before I learned Smalltalk, I also thought that OOP was about
structures and hierarchies, inheritance, and all this
private/public/etc. bullshit.
After I learned Smalltalk, I know that OOP is about message
passing.  Just that.  Period.
And no other implications: the hierarchies and structures are
implementation-specific, i.e. they are one way an object may handle a
message, but the handling can be completely arbitrary.

I think it is indeed the industry's fault for being unable to grasp the
simple and basic idea of message passing, replacing it with horrible
crutches and tons of additional concepts, which make it hard for people
to learn (and therefore be effective with) OOP.


yeah.


although a person may still implement a lot of this for the sake of
convention, partly because it is just sort of expected.

for example, does a language really need classes or instances (vs, say,
cloning or creating objects ex nihilo)?  not really.

then why have them?  because people expect them; they can be a little
faster; and they provide a convenient way to define and answer the
question "is X a Y?", ...


I personally like having both sets of options though, so this is 
basically what I have done.




meanwhile, I have spent several days on and off pondering whether there
is any good syntax (for a language with a vaguely C-like syntax) to
express the concept of "execute these statements in parallel and
continue when all are done".


practically, I could allow doing something like:
join( async{A}, async{B}, async{C} );
but this is ugly (and essentially abuses the usual meaning of "join").

meanwhile, something like:
do { A; B; C; } async;
would just be strange, and would likely defy common sensibilities
(namely, in that the statements would not be executed sequentially, in
contrast to pretty much every other code block).


I was left also considering another possibly ugly option:
async![ A; B; C; ];
which just looks weird...

for example:
async![
{ sleep(1000); printf("A, "); };
{ sleep(2000); printf("B, "); };
{ sleep(3000); printf("C, "); }; ];
printf("Done\n");

would print "A, B, C, Done" with 1s delays before each letter.
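Whatever the surface syntax, the hypothetical async![ ... ] block above maps onto plain threads with a join. A Python sketch of the same "run in parallel, continue when all are done" behavior (with shorter delays than the 1 s steps in the example):

```python
import threading
import time

results = []
lock = threading.Lock()

def task(delay, label):
    time.sleep(delay)
    with lock:
        results.append(label)

# One thread per "statement" of the hypothetical async![ ... ] block.
threads = [threading.Thread(target=task, args=(d, l))
           for d, l in [(0.05, "A"), (0.15, "B"), (0.25, "C")]]
for t in threads:
    t.start()
for t in threads:        # "continue when all are done"
    t.join()
results.append("Done")
print(", ".join(results))  # A, B, C, Done
```

The syntax question is really about how to make this start/join pair look like a single structured block.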




Re: [fonc] The Web Will Die When OOP Dies

2012-06-11 Thread Tony Garnock-Jones
On 9 June 2012 22:06, Toby Schachman  wrote:

> Message passing does not necessitate a conceptual dependence on
> request-response communication. Yet most code I see in the wild uses
> this pattern.


Sapir-Whorf strikes again? ;-)


> I rarely
> see an OO program where there is a "community" of objects who are all
> sending messages to each other and it's conceptually ambiguous which
> object is "in control" of the overall system's behavior.
>

Perhaps you're not taking into account programs that use the
observer/observable pattern? As a specific example, all the uses of the
"dependents" protocols (e.g. #changed:, #update:) in Smalltalk are just
this. In my Squeak image, there are some 50 implementors of #update: and
some 500 senders of #changed:.

In that same image, there is also protocol for "events" on class Object, as
well as an instance of Announcements loaded. So I think what you describe
really might be quite common in OO *systems*, rather than discrete programs.

All three of these aspects of my Squeak image - the "dependents" protocols,
triggering of "events", and Announcements - are encodings of simple
asynchronous messaging, built using the traditional request-reply-error
conversational pattern, and permitting conversational patterns other than
the traditional request-reply-error.

As an aside, working with such synchronous simulations of asynchronous
messaging causes all sorts of headaches, because asynchronous events
naturally involve concurrency, and the simulation usually only involves a
single process dispatching events by synchronous procedure call.
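The Smalltalk "dependents" protocol Tony cites (#changed: / #update:) is essentially the observer pattern; here is a minimal Python rendition, with method names transliterated from the Smalltalk selectors:

```python
class Observable:
    def __init__(self):
        self.dependents = []

    def add_dependent(self, dep):
        self.dependents.append(dep)

    def changed(self, aspect):
        # Broadcast: each dependent decides what to do with the update.
        for dep in self.dependents:
            dep.update(self, aspect)

class Logger:
    def __init__(self):
        self.seen = []

    def update(self, sender, aspect):
        self.seen.append(aspect)

model = Observable()
log = Logger()
model.add_dependent(log)
model.changed("value")
model.changed("selection")
print(log.seen)  # ['value', 'selection']
```

Note how control is decentralized: the model neither knows nor cares who its dependents are or what they do with the notification -- which is exactly the "community of objects" shape Toby said he rarely sees.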

Regards,
  Tony
-- 
Tony Garnock-Jones
tonygarnockjo...@gmail.com
http://homepages.kcbbs.gen.nz/tonyg/


Re: [fonc] The Web Will Die When OOP Dies

2012-06-14 Thread John Zabroski
Folks,

Arguing technical details here misses the point. For example, a different
conversation can be started by asking: why does my web hosting provider say
I need an FTP client? Already technology is way too much in my face, and I
hate seeing programmers blame their tools rather than their
misunderstanding of people.

Start by asking yourself how you would build these needs from scratch to
bootstrap something like the Internet.

What would a web browser look like if the user didn't need a separate
program to put data somewhere on their web server and could just use one
uniform mechanism? Note I am not getting into "nice to have" features like
resumption of paused uploads due to weak or episodic connectivity, because
that too is basically a technical problem -- and it is not regarded as
academically difficult either. I am simply taking one example of how users
are forced to work today and asking why not something less technical. All I
want to do is upload a file, and yet I have all these knobs to tune and
things to "install", and none of it takes my work context into consideration.

Why do I pay even $4 a month for such crappy service?
On Jun 11, 2012 8:17 AM, "Tony Garnock-Jones" 
wrote:

> On 9 June 2012 22:06, Toby Schachman  wrote:
>
>> Message passing does not necessitate a conceptual dependence on
>> request-response communication. Yet most code I see in the wild uses
>> this pattern.
>
>
> Sapir-Whorf strikes again? ;-)
>
>
>> I rarely
>> see an OO program where there is a "community" of objects who are all
>> sending messages to each other and it's conceptually ambiguous which
>> object is "in control" of the overall system's behavior.
>>
>
> Perhaps you're not taking into account programs that use the
> observer/observable pattern? As a specific example, all the uses of the
> "dependents" protocols (e.g. #changed:, #update:) in Smalltalk are just
> this. In my Squeak image, there are some 50 implementors of #update: and
> some 500 senders of #changed:.
>
> In that same image, there is also protocol for "events" on class Object,
> as well as an instance of Announcements loaded. So I think what you
> describe really might be quite common in OO *systems*, rather than
> discrete programs.
>
> All three of these aspects of my Squeak image - the "dependents"
> protocols, triggering of "events", and Announcements - are encodings of
> simple asynchronous messaging, built using the traditional
> request-reply-error conversational pattern, and permitting conversational
> patterns other than the traditional request-reply-error.
>
> As an aside, working with such synchronous simulations of asynchronous
> messaging causes all sorts of headaches, because asynchronous events
> naturally involve concurrency, and the simulation usually only involves a
> single process dispatching events by synchronous procedure call.
>
> Regards,
>   Tony
> --
> Tony Garnock-Jones
> tonygarnockjo...@gmail.com
> http://homepages.kcbbs.gen.nz/tonyg/
>


Re: [fonc] The Web Will Die When OOP Dies

2012-06-14 Thread Miles Fidelman

John Zabroski wrote:


Arguing technical details here misses the point. For example, a different
conversation can be started by asking: why does my web hosting provider say
I need an FTP client? Already technology is way too much in my face, and I
hate seeing programmers blame their tools rather than their
misunderstanding of people.




Well... maybe you need a different hosting provider?

Whether it's ftp under the hood, or http, or something else, there are 
lots of ways to hide the complexity from general users.  Me... I prefer 
ftp (as well as being my own provider), but if you want simple, there 
are lots of hosting providers with easy-to-use interfaces that all work 
through a browser.


Start by asking yourself how would you build these needs from scratch 
to bootstrap something like the Internet.




Probably not that much differently.  The Internet evolved much like 
biological systems.  Start with small "piece parts," combine them in 
various ways, then start combining the combinations.  More complex 
behaviors emerge.



What would a web browser look like if the user didn't need a separate
program to put data somewhere on their web server and could just use one
uniform mechanism? Note I am not getting into "nice to have" features like
resumption of paused uploads due to weak or episodic connectivity, because
that too is basically a technical problem -- and it is not regarded as
academically difficult either. I am simply taking one example of how users
are forced to work today and asking why not something less technical. All I
want to do is upload a file, and yet I have all these knobs to tune and
things to "install", and none of it takes my work context into consideration.




Again, there's a choice:

1. something monolithic - might do exactly what you want, very simply, 
but... as soon as you want to do something slightly different, you're 
screwed


2. something very modular and flexible - like unix shell commands and
pipes - lets you do practically anything, but it's pretty complex


3. same as 2, plus an interface layer that hides complexity - like say, 
a web page with an "upload file" button, coupled to a server-side script 
that does the heavy lifting, or a remote file system that you can mount 
on your desktop - the capabilities are all there, different service 
providers set things up differently


as to "All I want to do is  and yet I have all these knobs to tune 
and things to "install" and none of it takes my work context into 
consideration."


- well... different people might want to do things differently, and how 
is it going to know your work context?



Why do I pay even $4 a month for such crappy service?



What do you expect for $4/month?



--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-14 Thread BGB

On 6/14/2012 10:19 PM, John Zabroski wrote:


Folks,

Arguing technical details here misses the point. For example, a 
different conversation can be started by asking Why does my web 
hosting provider say I need an FTP client? Already technology is way 
too much in my face and I hate seeing programmers blame their tools 
rather than their misunderstanding of people.


Start by asking yourself how you would build these needs from scratch to
bootstrap something like the Internet.


What would a web browser look like if the user didn't need a separate
program to put data somewhere on their web server and could just use one
uniform mechanism? Note I am not getting into "nice to have" features like
resumption of paused uploads due to weak or episodic connectivity, because
that too is basically a technical problem -- and it is not regarded as
academically difficult either. I am simply taking one example of how users
are forced to work today and asking why not something less technical. All I
want to do is upload a file, and yet I have all these knobs to tune and
things to "install", and none of it takes my work context into consideration.




idle thoughts:
there is Windows Explorer, which can access FTP;
it would be better if it actually remembered login info, had automatic
login, and could automatically resume uploads, ...

but the interface is nice, as an FTP server looks much like a
directory, ...



also, at least in the past, pretty much everything *was* IE:
you could put HTML on the desktop, in directories (directory as
webpage), ...
but most of this went away AFAICT (then again, it's not like IE is
"good").


maybe, otherwise, the internet would look like local applications or
similar: they can sit on the desktop, and maybe they launch windows.
IMHO, I don't like tabs as much, since long ago Windows basically
introduced its own form of tabs:

the Windows taskbar.

soon enough, it added another nifty feature:
it lumped various instances of the same program into popup menus.


meanwhile, browser tabs are like Win95 all over again, with the browser
likely to experience severe lag whenever more than a few pages are open
(and it often has responsiveness and latency issues).


it would be better if more of the app ran on the client, and if people
used more asynchronous messages (rather than request/response).


...

so, then, webpages could have a look and feel more like normal apps.




Why do I pay even $4 a month for such crappy service?

On Jun 11, 2012 8:17 AM, "Tony Garnock-Jones" wrote:


On 9 June 2012 22:06, Toby Schachman  wrote:

Message passing does not necessitate a conceptual dependence on
request-response communication. Yet most code I see in the wild uses
this pattern.


Sapir-Whorf strikes again? ;-)

I rarely
see an OO program where there is a "community" of objects who are all
sending messages to each other and it's conceptually ambiguous which
object is "in control" of the overall system's behavior.


Perhaps you're not taking into account programs that use the
observer/observable pattern? As a specific example, all the uses
of the "dependents" protocols (e.g. #changed:, #update:) in
Smalltalk are just this. In my Squeak image, there are some 50
implementors of #update: and some 500 senders of #changed:.

In that same image, there is also protocol for "events" on class
Object, as well as an instance of Announcements loaded. So I think
what you describe really might be quite common in OO /systems/,
rather than discrete programs.

All three of these aspects of my Squeak image - the "dependents"
protocols, triggering of "events", and Announcements - are
encodings of simple asynchronous messaging, built using the
traditional request-reply-error conversational pattern, and
permitting conversational patterns other than the traditional
request-reply-error.

As an aside, working with such synchronous simulations of
asynchronous messaging causes all sorts of headaches, because
asynchronous events naturally involve concurrency, and the
simulation usually only involves a single process dispatching
events by synchronous procedure call.

Regards,
  Tony
-- 
Tony Garnock-Jones

tonygarnockjo...@gmail.com 
http://homepages.kcbbs.gen.nz/tonyg/



Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Pascal J. Bourguignon
John Zabroski  writes:

> Folks,
>
> Arguing technical details here misses the point. For example, a
> different conversation can be started by asking Why does my web
> hosting provider say I need an FTP client? Already technology is way
> too much in my face and I hate seeing programmers blame their tools
> rather than their misunderstanding of people.
>
> Start by asking yourself how you would build these needs from scratch
> to bootstrap something like the Internet.
>
> What would a web browser look like if the user didn't need a separate
> program to put data somewhere on their web server and could just use
> one uniform mechanism? Note I am not getting into "nice to have"
> features like resumption of paused uploads due to weak or episodic
> connectivity, because that too is basically a technical problem -- and
> it is not regarded as academically difficult either. I am simply
> taking one example of how users are forced to work today and asking
> why not something less technical. All I want to do is upload a file,
> and yet I have all these knobs to tune and things to "install", and
> none of it takes my work context into consideration.


There are different problems.

About the tools and mechanisms, and their multiplicity: it's normal to
have a full toolbox.  Even with evolving technologies, some tools are
used less often; each has its specific use, and they're all useful.

Also, the point of discrete tools is that they're modular and can be
combined to great effect by a competent professional.  You wouldn't
want to dig every hole with the same tool, be it a spoon or a
Caterpillar excavator.


Now for the other problem, the "users": one cause of that problem is the
accessibility and openness of computer and software technology, which
doesn't put clear boundaries between the "professionals" and the
"customers".  There are all shades of gray in between: amateurs,
students, and do-it-yourselfers.

But you're perfectly entitled to have expectations of good service and
ease of use.  You only need to realize that this will come with a cost,
and it won't be cheap.

Basically, your choice is between:

- here, we have a toolbox, we will gladly lend it to you so you can have
  fun hacking your own stuff.

- tell us what you want, we'll work hard to provide you the easy
  service, and we'll send you the bill.

(ok, there are intermediate choices, but you can basically classify each
offer between a do-it-yourself solution and an everything-is-done-for-you
one).


However, the difficulty with the latter option is that things evolve so
fast that we may not have the time to develop affordable, fine-tuned,
customer-oriented solutions before they become obsolete.  Developing and
refining such services takes time, and money.


And in general, programmers are not paid well enough. 


Just compare the hourly wages of a plumber and a computer programmer,
and you'll understand why you don't get the same easy service from
programmers that you get from plumbers.  But this is a problem easily
solved: just put the money on the table, and you'll find competent
programmers to implement your easy solution.


But it seems customers prefer crappy service as long as it's cheap (or
"free").

-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread John Zabroski
On Fri, Jun 15, 2012 at 6:36 AM, Pascal J. Bourguignon <
p...@informatimago.com> wrote:

> John Zabroski  writes:
>
> > Folks,
> >
> > Arguing technical details here misses the point. For example, a
> > different conversation can be started by asking Why does my web
> > hosting provider say I need an FTP client? Already technology is way
> > too much in my face and I hate seeing programmers blame their tools
> > rather than their misunderstanding of people.
> >
> > Start by asking yourself how would you build these needs from scratch
> > to bootstrap something like the Internet.
> >
> > What would a web browser look like if the user didnt need a seperate
> > program to put data somewhere on their web server and could just use
> > one uniform mexhanism? Note I am not getting into "nice to have"
> > features like resumption of paused uploads due to weak or episodic
> > connectivity, because that too is basically a technical problem -- and
> > it is not regarded as academically difficult either. I am simply
> > taking one example of how users are forced to work today and asking
> > why not something less technical. All I want to do is upload a file
> > and yet I have all these knobs to tune and things to "install" and
> > none of it takes my work context into consideration.
>
>
> There are different problems.
>
> About the tools and mechanisms, and their multiplicity, it's normal to
> have a full toolbox.  Even with evolving technologies some tools are
> used less often, each has its specific use and they're all useful.
>
> Also, the point of discrete tools is that they're modular and can be
> combined to great effect by a competent professional.  You wouldn't
> want to dig every hole with the same tool, be it a spoon or a
> Caterpillar excavator.
>
>
> Now for the other problem, the "users": one cause of that problem is the
> accessibility and openness of computer and software technology, which
> doesn't put clear boundaries between the "professionals" and the
> "customers".  There are all shades of gray in between: amateurs,
> students, and do-it-yourselfers.
>
> But you're perfectly entitled to have expectations of good service and
> ease of use.  You only need to realize that this will come with a cost,
> and it won't be cheap.
>
> Basically, your choice is between:
>
> - here, we have a toolbox, we will gladly lend it to you so you can have
>  fun hacking your own stuff.
>
> - tell us what you want, we'll work hard to provide you the easy
>  service, and we'll send you the bill.
>
> (ok, there are intermediary choices, but you can basically classify each
> offer between a do-it-yourself solution and a everything-s-done-for-you
> one).
>
>
> However, the difficulty with the latter option is that things evolve so
> fast that we may not have the time to develop affordable, fine-tuned,
> customer-oriented solutions before they become obsolete.  Developing and
> refining such services takes time, and money.
>
>
> And in general, programmers are not paid well enough.
>
>
> Just compare the hourly wages of a plumber and a computer programmer,
> and you'll understand why you don't get the same easy service from
> programmers than what you get from plumbers.   But this is a problem
> easily solved: just put the money on the table, and you'll find
> competent programmers to implement your easy solution.
>
>
> But it seems customers prefer crappy service as long as it's cheap (or
> "free").
>
>

Sorry, you did not answer my question, but instead presented excuses for
why programmers misunderstand people.  (Can I paraphrase your thoughts as,
"Because people are not programmers!")


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Miles Fidelman

John Zabroski wrote:


On Fri, Jun 15, 2012 at 6:36 AM, Pascal J. Bourguignon wrote:


John Zabroski  writes:

> All I want to do is upload a file
> and yet I have all these knobs to tune and things to "install" and
> none of it takes my work context into consideration.



Basically, your choice is between:

- here, we have a toolbox, we will gladly lend it to you so you can have
  fun hacking your own stuff.

- tell us what you want, we'll work hard to provide you the easy
 service, and we'll send you the bill.

(ok, there are intermediate choices, but you can basically classify each
offer between a do-it-yourself solution and an everything-is-done-for-you
one).



But it seems customers prefer crappy service as long as it's cheap (or
"free").



Sorry, you did not answer my question, but instead presented excuses 
for why programmers misunderstand people.  (Can I paraphrase your 
thoughts as, "Because people are not programmers!")


OK, let's try making it simple:
- you want it easy
- you want it cheap
- you don't want to put in the effort to find a tool or service provider 
that makes it easy (plenty of them exist; some of them are cheap, some free)


Guess what, the problem is not the technology, or the programmers - it's 
you.



--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Paul Homer
I see something deeper in what Zed is saying. 

My first really strong experiences with programming came from the 
data-structures world in the late 80s at the University of Waterloo. There was 
an implicit view that one could decompose all problems into data-structures 
(and a few algorithms and a little bit of glue). My sense at the time was that 
the newly emerging concepts of OO were a way of entrenching this philosophy 
directly into the programming languages.

When applied to tasks like building window systems, OO is an incredibly 
powerful approach. If one matches what one sees on the screen with the 
objects one is building in the back end, there is a strong one-to-one mapping 
that allows the programmer to rapidly diagnose problems at a speed that just 
wasn't possible before.

But for many of the things that I've built in the back end I find that OO 
causes me to jump through what I think are artificial hoops. Over the years 
I've spent a lot of time pondering why. My underlying sense is that there are 
some fundamental dualities in computational machines. Static vs. dynamic. Data 
vs. code. Nouns vs. verbs. Location vs. time. It is possible, of course, to 
'cast' one onto the other; there are plenty of examples of 'jumping', 
particularly in languages, with respect to nouns and verbs. But I think that 
decompositions become 'easier' for us to understand when we partition them 
along the 'natural' lines of what they are underneath.

My thinking some time ago, as it applies to OO, is that the fundamental 
primitive, an object, essentially mixes its metaphors (sort of). That is, it 
contains both code and data. I think it's this relatively simple point that 
underlies the problems people have in grokking OO. What I've also found is 
that this wasn't there in that earlier philosophy at Waterloo. Sure, there were 
atomic primitives attached to each data-structure, but the way we built 
heavy-duty mechanics was more often to push the 'actions' to something like an 
intermediary data-structure and then do a clean, simple traversal to actuate it 
(like Lisp), so fundamentally the static/dynamic duality was daintily skipped 
over.

It is far more than obvious that OO opened the door to allow massive systems. 
Theoretically they were possible before, but it gave us a way to manage the 
complexity of these beasts. Still, like all technologies, it comes with a 
built-in 'threshold' that imposes a limit on what we can build. If we are to 
exceed that, then I think we are in the hunt for the next philosophy, and as Zed 
points out, the ramifications of finding it will cause yet another technological 
wave to overtake the last one.

Just my thoughts.


Paul.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Loup Vaillant

Paul Homer wrote:

It is far more than obvious that OO opened the door to allow massive
systems. Theoretically they were possible before, but it gave us a way
to manage the complexity of these beasts. Still, like all technologies,
it comes with a built-in 'threshold' that imposes a limit on what we can
build. If we are to exceed that, then I think we are in the hunt for
the next philosophy and as Zed points out the ramification of finding it
will cause yet another technological wave to overtake the last one.


I find that a bit depressing: if each tool that tackles complexity
better than the previous ones leads us to increase complexity (just
because we can), we're kinda doomed.

Can't we recognize complexity as a problem, instead of an unavoidable
law of nature?  Thank goodness we have the STEPS project to shed some light.

Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Paul Homer
I wouldn't describe complexity as a problem, but rather an attribute of the 
universe we exist in, affecting everything from how we organize our societies 
to how the various solar systems interact with each other.

Each time you conquer the current complexity, your approach adds to it. 
Eventually all that conquering needs to be conquered itself ...


Paul.




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Miles Fidelman
Hasn't that emerged as a key characteristic of biological evolution: 
the alternation between new "parts" (molecules), increasingly complex 
combinations of those parts, encapsulation of successful combinations 
into new parts (pathways, cells, tissues, organs, systems), etc. Wash, 
rinse, repeat?


For that matter, it's certainly how we build systems: components -> 
subsystems -> systems -> systems of systems -> ecosystems..


Miles Fidelman




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread BGB

On 6/15/2012 12:27 PM, Paul Homer wrote:
I wouldn't describe complexity as a problem, but rather an attribute 
of the universe we exist in, affecting everything from how we organize 
our societies to how the various solar systems interact with each other.


Each time you conquer the current complexity, your approach adds to 
it. Eventually all that conquering needs to be conquered itself ...




yep.

the world of software is layers upon layers of stuff.
one thing is made, and made easier, at the cost of adding a fair amount 
of complexity somewhere else.


this is generally considered a good tradeoff, because the reduction of 
complexity in things that are seen is perceptually more important than 
the increase in internal complexity in the things not seen.


although it may be possible to reduce complexity, say by finding ways to 
do the same things with less total complexity, this will not actually 
change the underlying issue (or in other cases may come with costs worse 
than internal complexity, such as poor performance or drastically higher 
memory use, ...).






___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Pascal J. Bourguignon
John Zabroski  writes:


> Sorry, you did not answer my question, but instead presented excuses
> for why programmers misunderstand people.  (Can I paraphrase your
> thoughts as, "Because people are not programmers!") 

No, you misunderstood my answer: 
"Because people don't pay programmers enough."


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread David Leibs
I have kinda lost track of this thread, so forgive me if I wander off in a 
perpendicular direction.

I believe that things do not have to continually get more and more complex.  
The way out for me is to go back to the beginning and start over (which is what 
this mailing list is all about).
I constantly go back to the beginnings in math and/or physics and try to 
re-understand them from first principles.  Of course, every time I do this I 
get less and less far along the material continuum, because the beginnings are 
so darn interesting.

Let me give an example from arithmetic which I learned from Ken Iverson's 
writings years ago.

As children we spend a lot of time practicing adding up numbers. Humans are 
very bad at this, if you count making silly errors as bad. Take for example:

   365
+  366
------

this requires you to add 5 & 6, write down 1, and carry 1 to the next column;
then add 6, 6, and the carried 1, write down 2, and carry a 1 to the next
column; finally add 3, 3, and the carried 1, and write down 7.
This gives you 721. Oops, the wrong answer.  In step 2 I made a totally 
dyslexic mistake and should have written down a 3.

Ken proposed learning to see things a bit differently and remembering that the 
digits are a vector times another vector of powers.
Ken would have you see this as a two-step problem with the digits spread out.

   3   6   5
+  3   6   6


Then you just add the digits. Don't think about the carries.

   3   6   5
+  3   6   6

   6  12  11


Now we normalize by dealing with the carry part, moving from right to left 
in fine APL style. You can almost see the implied loop using residue and 
n-residue.
6  12  11
6  13   1
7   3   1

Ken believed that this two-stage technique was much easier for people to get 
right.  I adopted it for when I do addition by hand, and it works very well for 
me. What would it be like if we changed the education establishment and used 
this technique?  One could argue that this sort of hand-adding of columns of 
numbers is also dated. Let's not go there; I am just using this as an example 
of going back and looking at a beginning that is hard to see because it is 
"just too darn fundamental". 

We need to reduce complexity at all levels and that includes the culture we 
swim in.

cheers,
-David Leibs


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Pascal J. Bourguignon
David Leibs  writes:

> I have kinda lost track of this thread so forgive me if I wander off
> in a perpendicular direction.
>
> I believe that things do not have to continually get more and more
> complex.  The way out for me is to go back to the beginning and start
> over (which is what this mailing list is all about).  I constantly go
> back to the beginnings in math and/or physics and try to re-understand
> from first principles.  Of course every time I do this I get less and
> less further along the material continuum because the beginnings are
> so darn interesting.
>
> Let me give an example from arithmetic which I learned from Ken
> Iverson's writings years ago.
>
> As children we spend a lot of time practicing adding up
> numbers. Humans are very bad at this if you measure making a silly
> error as bad. Take for example:
>
>365
> +  366
> --
>
> this requires you to add 5 & 6, write down 1 and carry 1 to the next
> column then add 6, 6, and that carried 1 and write down 2 and carry a
> 1 to the next column finally add 3, 3 and the carried 1 and write down
> 7 this gives you 721, oops, the wrong answer.  In step 2 I made a
> totally dyslexic mistake and should have written down a 3.
>
> Ken proposed learning to see things a bit differently and remember the
> digits are a vector times another vector of powers.  Ken would have
> you see this as a two step problem with the digits spread out.
>
>3   6   5
> +  3   6   6
> 
>
> Then you just add the digits. Don't think about the carries.
>
>3   6   5
> +  3   6   6
> 
>6  12  11
>
> Now we normalize the by dealing with the carry part moving from right
> to left in fine APL style. You can almost see the implied loop using
> residue and n-residue.

> 6  12  11
> 6  13   1
> 7   3   1
>
> Ken believed that this two stage technique was much easier for people
> to get right.  I adopted it for when I do addition by had and it works
> very well for me. What would it be like if we changed the education
> establishment and used this technique?  One could argue that this sort
> of hand adding of columns of numbers is also dated. Let's don't go
> there I am just using this as an example of going back and looking at
> a beginning that is hard to see because it is "just too darn
> fundamental". 

It's a nice way to do additions indeed.

When doing additions mentally, I tend to do them from right to left,
predicting whether we need a carry or not by looking ahead at the next
column.  Usually carries don't "carry over" more than one column, but
even if they do, you only have to remember a single digit at a time.

There are several ways to do additions :-)


Your way works as well for subtractions:

    3  6  5
 -  3  7  1
 ----------
    0 -1  4
    -10 + 4 = -6

    3  7  1
 -  3  6  5
 ----------
    0  1 -4
     10 - 4 = 6
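A signed digit vector like 0 -1 4 is just a base-10 polynomial, so the check
mechanizes in a couple of lines of Python (the helper name is mine):

```python
def digit_value(digits, base=10):
    """Evaluate a digit vector as a number; negative digits are fine."""
    value = 0
    for d in digits:
        value = value * base + d   # Horner's rule
    return value

# 365 - 371, digit by digit, gives 0 -1 4:
print(digit_value([0, -1, 4]))   # -6
# 371 - 365 gives 0 1 -4:
print(digit_value([0, 1, -4]))   # 6
```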

and of course, it's already how we do multiplications too.



> We need to reduce complexity at all levels and that includes the
> culture we swim in.

Otherwise, you can always apply the KISS principle 
(Keep It Simple, Stupid).


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Paul Homer
Hi David,

The often used quote by Einstein is:

"Make everything as simple as possible, but not simpler"

People interpret that in a lot of interesting ways, but it does point to a 
'minimum simplicity'. 


In software, you might build a system with 100,000 lines of code. Someone else 
might come along and build it with 20,000 lines of code, but there is some 
underlying complexity tied to the functionality that dictates that it could 
never be any less than X lines of code. The system encapsulates a significant 
amount of information and, stealing from Shannon slightly, it cannot be 
represented in any fewer bits.

If you let your imagination go wild, you can envision a system with twice as 
much functionality. Now it may be that it requires less than 2X lines of code, 
but it still has a minimum. 


All modern systems we have today exist in silos. Many interact with each other, 
but for the most part we are forced to deal with them individually. If we 
wanted to get closer to what Zed was describing, we'd have to integrate them 
with each other. Given that there are probably huge redundancies it wouldn't 
end up being N*X lines of code, but given the amount of information we've 
encoded in our various systems it would still be stunningly large. 


I think Windows is up around 50M lines. There may be systems out there that are 
larger. I certainly can imagine a system that might contain 500M lines of code. 
It would be a pretty nice piece of software, but I honestly doubt that we could 
build such a thing (as a single thing*) given our current technologies.

*One could take the Internet as a whole, with its 2.3 billion users and untold 
number of machines, as a 'single system'. I might accept that, but if it were 
so, I'd have to say that its overall quality is very low, in the sense that as 
one big piece it is pretty messy. Parts of it work well, parts of it don't.


If things are expanding then they have to get more complex; they encompass 
more. Our usage of computers is significant, but it isn't hard to imagine them 
doing more for us. To get there, we have to expand our usage and thus be able 
to handle more complexity. Big integrated systems that make life better by 
keeping it more organized are probably things we absolutely need right now. Our 
modern societies are outrageously complex and, many suspect, unsustainable. The 
promise of computers has been that we could embed our intellect into them so 
that we could use them as tools to tame that problem, but to get there we need 
systems that dwarf our current ones.


Paul.






Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread David Leibs
Speaking of multiplication: Ken Iverson teaches us to do multiplication by 
using a times (*) outer product to build a times table for the digits involved.
+-+--------+
| | 3  6  6|
+-+--------+
|3| 9 18 18|
|6|18 36 36|
|5|15 30 30|
+-+--------+

Now you sum each diagonal:
   (9) (18+18) (18+36+15) (36+30) (30)
    9    36        69       66    30
And just normalize as usual:

       9  36  69  66  30
       9  36  69  69   0
       9  36  75   9   0
       9  43   5   9   0
      13   3   5   9   0
   1   3   3   5   9   0

The multiplication table is easy and just continued practice for your 
multiplication facts.
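The same recipe can be sketched in Python (my naming, not Iverson's APL):
build the times table as an outer product, sum the anti-diagonals, then
normalize the carries exactly as in the two-stage addition.

```python
def times_table(a, b):
    """Outer product of two digit vectors: table[i][j] = a[i] * b[j]."""
    return [[x * y for y in b] for x in a]

def diagonal_sums(table):
    """Cells with the same i + j lie on one anti-diagonal; sum each."""
    rows, cols = len(table), len(table[0])
    sums = [0] * (rows + cols - 1)
    for i in range(rows):
        for j in range(cols):
            sums[i + j] += table[i][j]
    return sums

def normalize(digits):
    """Carry sweep, right to left."""
    out, carry = [], 0
    for d in reversed(digits):
        carry, digit = divmod(d + carry, 10)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        out.append(digit)
    return out[::-1]

sums = diagonal_sums(times_table([3, 6, 5], [3, 6, 6]))
print(sums)             # [9, 36, 69, 66, 30]
print(normalize(sums))  # [1, 3, 3, 5, 9, 0], i.e. 365 * 366 = 133590
```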

You don't need much more machinery before you have the kids doing Cannon's 
order-n systolic-array algorithm for matrix multiply, on the gym floor, with 
their bodies.  This assumes that the dance teacher is coordinating with the 
algorithms teacher. Of course, if there isn't something relevant going on that 
warrants matrix multiply, then all is lost. I guess that's a job for the 
motivation teacher. :-)

-David Leibs


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Andre van Delft
Fascinating.
How did Iverson do division?

On 15 Jun 2012, at 23:08, David Leibs wrote:

> Speaking of multiplication.  Ken Iverson teaches us to do multiplication by 
> using a * outer product to build a times table for the digits involved.
> +-++
> | | 3  6  6|
> +-++
> |3| 9 18 18|
> |6|18 36 36|
> |5|15 30 30|
> +-++
> 
> Now you sum each diagonal:
>(9) (18+18) (18+36+15) (36+30) (30)
>  936   6966 30
> And just normalize as usual:
> 
>9 36 69 66 30
>9 36 69 69 0
>9 36 75 9  0
>9 43 5  9  0
>   13 3  5  9  0
>  1 3 3  5  9  0
> 
> The multiplication table is easy and just continued practice for your 
> multiplication facts.
> 
> You don't need much more machinery before you have the kids doing Cannon's 
> order n systolic array algorithm for matrix multiply, on the gym floor, with 
> their bodies.  This assumes that the dance teacher is coordinating with the 
> algorithms teacher. Of course if there isn't something relevant going on that 
> warrants matrix multiply then all is lost. I guess that's a job for the 
> motivation teacher. :-)
> 
> -David Leibs
> 
> On Jun 15, 2012, at 12:57 PM, Pascal J. Bourguignon wrote:
> 
>> David Leibs  writes:
>> 
>>> I have kinda lost track of this thread so forgive me if I wander off
>>> in a perpendicular direction.
>>> 
>>> I believe that things do not have to continually get more and more
>>> complex.  The way out for me is to go back to the beginning and start
>>> over (which is what this mailing list is all about).  I constantly go
>>> back to the beginnings in math and/or physics and try to re-understand
>>> from first principles.  Of course every time I do this I get less and
>>> less further along the material continuum because the beginnings are
>>> so darn interesting.
>>> 
>>> Let me give an example from arithmetic which I learned from Ken
>>> Iverson's writings years ago.
>>> 
>>> As children we spend a lot of time practicing adding up
>>> numbers. Humans are very bad at this if you measure making a silly
>>> error as bad. Take for example:
>>> 
>>>   365
>>> +  366
>>> --
>>> 
>>> this requires you to add 5 & 6, write down 1 and carry 1 to the next
>>> column then add 6, 6, and that carried 1 and write down 2 and carry a
>>> 1 to the next column finally add 3, 3 and the carried 1 and write down
>>> 7 this gives you 721, oops, the wrong answer.  In step 2 I made a
>>> totally dyslexic mistake and should have written down a 3.
>>> 
>>> Ken proposed learning to see things a bit differently and remember the
>>> digits are a vector times another vector of powers.  Ken would have
>>> you see this as a two step problem with the digits spread out.
>>> 
>>>   3   6   5
>>> +  3   6   6
>>> 
>>> 
>>> Then you just add the digits. Don't think about the carries.
>>> 
>>>   3   6   5
>>> +  3   6   6
>>> 
>>>   6  12  11
>>> 
>>> Now we normalize the by dealing with the carry part moving from right
>>> to left in fine APL style. You can almost see the implied loop using
>>> residue and n-residue.
>> 
>>> 6  12 11
>>> 6  13  0
>>> 7   3  0
>>> 
>>> Ken believed that this two stage technique was much easier for people
>>> to get right.  I adopted it for when I do addition by had and it works
>>> very well for me. What would it be like if we changed the education
>>> establishment and used this technique?  One could argue that this sort
>>> of hand adding of columns of numbers is also dated. Let's not go
>>> there; I am just using this as an example of going back and looking at
>>> a beginning that is hard to see because it is "just too darn
>>> fundamental". 
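Ken's two-stage addition is easy to mechanize. As a sketch of my own (not from the thread), in Python: `add_digits` is the carry-free columnwise sum, and `normalize` sweeps the carries from right to left:

```python
def add_digits(a, b):
    """Columnwise sum with no carries: [3,6,5] + [3,6,6] -> [6,12,11]."""
    return [x + y for x, y in zip(a, b)]

def normalize(digits):
    """Sweep carries from right to left until every entry is a single digit."""
    carry = 0
    out = []
    for d in reversed(digits):
        d += carry
        out.append(d % 10)
        carry = d // 10
    while carry:            # a leftover carry grows the number to the left
        out.append(carry % 10)
        carry //= 10
    return out[::-1]

print(add_digits([3, 6, 5], [3, 6, 6]))  # [6, 12, 11]
print(normalize([6, 12, 11]))            # [7, 3, 1], i.e. 365 + 366 = 731
```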
>> 
>> It's a nice way to do additions indeed.
>> 
>> When doing additions mentally, I tend to do them from right to left,
>> predicting whether we need a carry or not by looking ahead the next
>> column.  Usually carries don't "carry over" more than one column, but
>> even if it does, you only have to remember a single digit at a time.
>> 
>> There are several ways to do additions :-)
>> 
>> 
>> Your way works as well for subtractions:
>> 
>>3  6  5
>> -   3  7  1
>> ---
>>0 -1  4
>>0 -10 + 4 = -6
>> 
>>3  7  1
>> -  3  6  5
>> ---
>>0  1 -4
>>   10 -4 = 6
>> 
>> and of course, it's already how we do multiplications too.
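The subtraction variant works because a columnwise difference is still a valid digit vector, just with negative entries; evaluating it in base 10 recovers the signed result. A small sketch of my own, not from the thread:

```python
def value(digits):
    """Evaluate a (possibly unnormalized, possibly negative) digit vector in base 10."""
    v = 0
    for d in digits:
        v = v * 10 + d
    return v

print(value([0, -1, 4]))  # -6, i.e. 365 - 371
print(value([0, 1, -4]))  #  6, i.e. 371 - 365
```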
>> 
>> 
>> 
>>> We need to reduce complexity at all levels and that includes the
>>> culture we swim in.
>> 
>> Otherwise, you can always apply the KISS principle 
>> (Keep It Simple Stupid).
>> 
>> 
>> -- 
>> __Pascal Bourguignon__ http://www.informatimago.com/
>> A bad day in () is better than a good day in {}.
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
> 

_

Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread David Leibs
At some point one moves on and uses a powerful enough computer system like APL,
J, or Smalltalk, which support rational numbers. :-)
-djl

On Jun 15, 2012, at 3:14 PM, Andre van Delft wrote:

> Fascinating.
> How did Iverson do division?
> 
> Op 15 jun. 2012, om 23:08 heeft David Leibs het volgende geschreven:
> 
>> Speaking of multiplication.  Ken Iverson teaches us to do multiplication by 
>> using a * outer product to build a times table for the digits involved.
>> +---+----------+
>> |   |  3  6  6 |
>> +---+----------+
>> | 3 |  9 18 18 |
>> | 6 | 18 36 36 |
>> | 5 | 15 30 30 |
>> +---+----------+
>> 
>> Now you sum each diagonal:
>> (9) (18+18) (18+36+15) (36+30) (30)
>>  9    36        69        66    30
>> And just normalize as usual:
>> 
>>     9 36 69 66 30
>>     9 36 69 69  0
>>     9 36 75  9  0
>>     9 43  5  9  0
>>    13  3  5  9  0
>>  1  3  3  5  9  0
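The whole scheme (outer product, anti-diagonal sums, then the same carry normalization as for addition) is a few lines of Python. This is my own sketch of what the thread describes, not Iverson's notation:

```python
def times_table(a, b):
    """Outer product of the digit vectors: table[i][j] = a[i] * b[j]."""
    return [[x * y for y in b] for x in a]

def diagonal_sums(table):
    """Sum each anti-diagonal: entry k collects table[i][j] with i + j == k."""
    n, m = len(table), len(table[0])
    return [sum(table[i][k - i] for i in range(n) if 0 <= k - i < m)
            for k in range(n + m - 1)]

def normalize(digits):
    """Sweep carries from right to left, as in the addition example."""
    carry = 0
    out = []
    for d in reversed(digits):
        d += carry
        out.append(d % 10)
        carry = d // 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    return out[::-1]

t = times_table([3, 6, 5], [3, 6, 6])
print(diagonal_sums(t))             # [9, 36, 69, 66, 30]
print(normalize(diagonal_sums(t)))  # [1, 3, 3, 5, 9, 0], i.e. 365 * 366 = 133590
```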
>> 
>> The multiplication table is easy and just continued practice for your 
>> multiplication facts.
>> 
>> You don't need much more machinery before you have the kids doing Cannon's 
>> order n systolic array algorithm for matrix multiply, on the gym floor, with 
>> their bodies.  This assumes that the dance teacher is coordinating with the 
>> algorithms teacher. Of course if there isn't something relevant going on 
>> that warrants matrix multiply then all is lost. I guess that's a job for the 
>> motivation teacher. :-)
>> 
>> -David Leibs
>> 
>> On Jun 15, 2012, at 12:57 PM, Pascal J. Bourguignon wrote:
>> 
>>> David Leibs  writes:
>>> 
 I have kinda lost track of this thread so forgive me if I wander off
 in a perpendicular direction.
 
 I believe that things do not have to continually get more and more
 complex.  The way out for me is to go back to the beginning and start
 over (which is what this mailing list is all about).  I constantly go
 back to the beginnings in math and/or physics and try to re-understand
 from first principles.  Of course every time I do this I get less and
 less further along the material continuum because the beginnings are
 so darn interesting.
 
 Let me give an example from arithmetic which I learned from Ken
 Iverson's writings years ago.
 
 As children we spend a lot of time practicing adding up
 numbers. Humans are very bad at this if you measure making a silly
 error as bad. Take for example:
 
   365
 +  366
 ------
 
 this requires you to add 5 & 6, write down 1 and carry 1 to the next
 column then add 6, 6, and that carried 1 and write down 2 and carry a
 1 to the next column finally add 3, 3 and the carried 1 and write down
 7 this gives you 721, oops, the wrong answer.  In step 2 I made a
 totally dyslexic mistake and should have written down a 3.
 
 Ken proposed learning to see things a bit differently and remember the
 digits are a vector times another vector of powers.  Ken would have
 you see this as a two step problem with the digits spread out.
 
   3   6   5
 +  3   6   6
 
 
 Then you just add the digits. Don't think about the carries.
 
   3   6   5
 +  3   6   6
 
   6  12  11
 
 Now we normalize by dealing with the carry part, moving from right
 to left in fine APL style. You can almost see the implied loop using
 residue and n-residue.
>>> 
 6  12 11
 6  13  0
 7   3  0
 
 Ken believed that this two stage technique was much easier for people
 to get right.  I adopted it for when I do addition by hand and it works
 very well for me. What would it be like if we changed the education
 establishment and used this technique?  One could argue that this sort
 of hand adding of columns of numbers is also dated. Let's not go
 there; I am just using this as an example of going back and looking at
 a beginning that is hard to see because it is "just too darn
 fundamental". 
>>> 
>>> It's a nice way to do additions indeed.
>>> 
>>> When doing additions mentally, I tend to do them from right to left,
>>> predicting whether we need a carry or not by looking ahead the next
>>> column.  Usually carries don't "carry over" more than one column, but
>>> even if it does, you only have to remember a single digit at a time.
>>> 
>>> There are several ways to do additions :-)
>>> 
>>> 
>>> Your way works as well for subtractions:
>>> 
>>>3  6  5
>>> -   3  7  1
>>> ---
>>>0 -1  4
>>>0 -10 + 4 = -6
>>> 
>>>3  7  1
>>> -  3  6  5
>>> ---
>>>0  1 -4
>>>   10 -4 = 6
>>> 
>>> and of course, it's already how we do multiplications too.
>>> 
>>> 
>>> 
 We need to reduce complexity at all levels and that includes the
 culture we swim in.
>>> 
>>> Otherwise, you can always apply the KISS principle 
>>> (Keep It Simple Stupid).
>>> 
>>> 
>>> -- 
>>> __Pascal Bourguignon__ http://www.informatimago.com/
>>> A bad day in () is

Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Mark Haniford
Paul,

I found your post interesting in that it might reflect a fundamental
problem that I have with "normal, average" OO, and that is that
methods belong with data.  I have never bought that idea, ever.   I
remember feeling stupid because I could never grok that idea and then
felt better when the chief scientist at Franz (who of course produce
Franz Lisp) said that functions don't belong with data.  So of course
in Common Lisp we have generic functions.
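As a rough illustration of the generic-function style (a hypothetical Python sketch of mine using `functools.singledispatch`, not CLOS itself): the classes below are plain data, and `area` lives outside them, dispatching on the type of its argument:

```python
from functools import singledispatch

# Hypothetical shape classes: plain data, no behavior attached.
class Circle:
    def __init__(self, radius):
        self.radius = radius

class Square:
    def __init__(self, side):
        self.side = side

@singledispatch
def area(shape):
    # The "generic function": behavior lives here, outside the data.
    raise TypeError(f"no area defined for {type(shape).__name__}")

@area.register
def _(shape: Circle):
    return 3.141592653589793 * shape.radius ** 2

@area.register
def _(shape: Square):
    return shape.side ** 2

print(area(Square(3)))  # 9
```

CLOS generic functions additionally dispatch on all of their arguments; `singledispatch` looks only at the first, so this gives the flavor of the idea rather than the full mechanism.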

There's this continuous mental dilemma with me about what belongs in a
class.  It's just a big bowl of wrong (TM Jeff Garlin) where we have
everything that can ever be done to a class belong to that class.  So
that's why I subscribe to anemic classes, despite what Fowler and
others say.  The exception I have is valid state (factories), but even
then I find that I want to bring extension methods (in c#) back into
the proper class.

We're doing something wrong, but I'm just a joe-average programmer and
like subscribing to newsgroups like this to hear what the big brains
are saying :)

On Fri, Jun 15, 2012 at 11:45 AM, Paul Homer  wrote:
> I see something deeper in what Zed is saying.
>
> My first really strong experiences with programming came from the
> data-structures world in the late 80s at the University of Waterloo. There
> was an implicit view that one could decompose all problems into
> data-structures (and a few algorithms and a little bit of glue). My sense at
> the time was that the newly emerging concepts of OO were a way of
> entrenching this philosophy directly into the programming languages.
>
> When applied to tasks like building window systems, OO is an incredibly
> powerful approach. If one matches what they are seeing on the screen with
> the objects they are building in the back, there is a strong one-to-one
> mapping that allows the programmer to rapidly diagnose problems at a speed
> that just wasn't possible before.
>
> But for many of the things that I've built in the back-end I find that OO
> causes me to jump through what I think are artificial hoops. Over the years
> I've spent a lot of time pondering why. My underlying sense is that there
> are some fundamental dualities in computational machines. Static vs.
> dynamic. Data vs. code. Nouns vs. verbs. Location vs. time. It is possible,
> of course, to 'cast' one onto the other, there are plenty of examples of
> 'jumping' particularly in languages wrt. nouns and verbs. But I think that
> decompositions become 'easier' for us to understand when we partition them
> along the 'natural' lines of what they are underneath.
>
> My thinking some time ago as it applies to OO is that the fundamental
> primitive, an object, essentially mixes its metaphors (sort of). That is, it
> contains both code and data. I think it's this relatively simple point that
> underlies the problems that people have in grokking OO. What I've also found
> is that that wasn't there in that earlier philosophy at Waterloo. Sure there
> were atomic primitives attached to each data-structure, but the way we build
> heavy-duty mechanics was more often to push the 'actions' to something like
> an intermediary data-structure and then do a clean simple traversal to
> actuate it (like lisp), so fundamentally the static/dynamic duality was
> daintily skipped over.
>
> It is far more than obvious that OO opened the door to allow massive
> systems. Theoretically they were possible before, but it gave us a way to
> manage the complexity of these beasts. Still, like all technologies, it
> comes with a built-in 'threshold' that imposes a limit on what we can build.
> If we are to exceed that, then I think we are in the hunt for the next
> philosophy and as Zed points out the ramification of finding it will cause
> yet another technological wave to overtake the last one.
>
> Just my thoughts.
>
>
> Paul.
>
>


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Miles Fidelman

Pascal J. Bourguignon wrote:

John Zabroski  writes:



Sorry, you did not answer my question, but instead presented excuses
for why programmers misunderstand people.  (Can I paraphrase your
thoughts as, "Because people are not programmers!")

No, you misunderstood my answer:
"Because people don't pay programmers enough."



I think that might be an inaccurate statement in two regards:

- programmers make VERY good money, at least in some fields (if you know 
where to find good, cheap coders, in the US, let me know where)


- (a lot of programmers) do NOT have a particularly user-focused mindset 
(just ask a C coder what they think of Hypercard - you'll get all kinds 
of answers about why end-users can't do anything useful; despite a really 
long track record of good stuff written in Hypercard, particularly by 
educators)


Note: This is irrelevant vis-a-vis Jon's question, however.  The answer 
to why he can't find easy ways to upload files is that he isn't looking.


Miles

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Miles Fidelman

Paul Homer wrote:


In software, you might build a system with 100,000 lines of code. 
Someone else might come along and build it with 20,000 lines of code, 
but there is some underlying complexity tied to the functionality that 
dictates that it could never be any less the X lines of code. The 
system encapsulates a significant amount of information, and stealing 
from Shannon slightly, it cannot be represented in any less bits.


Of course, going from 100k lines to 20k lines might be a result of:
- coding tricks that lead to incomprehensible code, particularly in 
languages that encourage such things (APL and perl come to mind)
- re-writing in a domain-specific language (more powerful, but more 
specialized constructs that shift complexity into the underlying platform)


Just saying :-)


--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Igor Stasenko
On 15 June 2012 21:17, David Leibs  wrote:
> I have kinda lost track of this thread so forgive me if I wander off in a
> perpendicular direction.
>
> I believe that things do not have to continually get more and more complex.
>  The way out for me is to go back to the beginning and start over (which is
> what this mailing list is all about).
> I constantly go back to the beginnings in math and/or physics and try to
> re-understand from first principles.  Of course every time I do this I get
> less and less further along the material continuum because the beginnings
> are so darn interesting.
>

I think otherwise. Throughout human history we have tended to create
and control more and more complex systems.
Just compare today's cars with cars 100 years ago.. Or first
microchips and today's microchips.

But I agree that going back to beginnings has its own value: since your
today's experience is always better than yesterday's, you can often see
solutions which you didn't see in the first place, which would allow you
to understand how to make things simpler.
But only to do even more complex things at the next iteration. :)

> Let me give an example from arithmetic which I learned from Ken Iverson's
> writings years ago.
>
> As children we spend a lot of time practicing adding up numbers. Humans are
> very bad at this if you measure making a silly error as bad. Take for
> example:
>
>    365
> +  366
> ------
>
> this requires you to add 5 & 6, write down 1 and carry 1 to the next column
> then add 6, 6, and that carried 1 and write down 2 and carry a 1 to the next
> column
> finally add 3, 3 and the carried 1 and write down 7
> this gives you 721, oops, the wrong answer.  In step 2 I made a totally
> dyslexic mistake and should have written down a 3.
>
> Ken proposed learning to see things a bit differently and remember the
>  digits are a vector times another vector of powers.
> Ken would have you see this as a two step problem with the digits spread
> out.
>
>    3   6   5
> +  3   6   6
> 
>
> Then you just add the digits. Don't think about the carries.
>
>    3   6   5
> +  3   6   6
> 
>    6  12  11
>
>
> Now we normalize by dealing with the carry part, moving from right to
> left in fine APL style. You can almost see the implied loop using residue
> and n-residue.
> 6  12 11
> 6  13  0
> 7   3  0
>
> Ken believed that this two stage technique was much easier for people to get
> right.

But still it won't prevent people from making mistakes, which you
nicely demonstrated by putting zeroes in the right column :)



-- 
Best regards,
Igor Stasenko.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Igor Stasenko
On 16 June 2012 02:23, Mark Haniford  wrote:
> Paul,
>
> I found your post interesting in that it might reflect a fundamental
> problem that I have with "normal, average" OO, and that is that
> methods belong with data.  I have never bought that idea, ever.   I
> remember feeling stupid because I could never grok that idea and then
> felt better when the chief scientist at Franz (who of course produce
> Franz Lisp) said that functions don't belong with data.  So of course
> in Common Lisp we have generic functions.
>

I also thought that so-called OO is just a mere coupling of data and functions,
but after learning Smalltalk, and especially image-based development,
I came to the conclusion that it is often not necessary to separate those two;
it's all just data or, if you like another word, information.
Information describing some numbers or connections between multiple
entities, or their groups, and information which describes how the
computer system operates with those entities.

And that's IMO fairly easy to get.

> There's this continuous mental dilemma with me about what belongs in a
> class.  It's just a big bowl of wrong (TM Jeff Garlin) where we have
> everything that can ever be done to a class belong to that class.  So
> that's why I subscribe to anemic classes, despite what Fowler and
> others say.  The exception I have is valid state (factories), but even
> then I find that I want to bring extension methods (in c#) back into
> the proper class.
>
> We're doing something wrong, but I'm just a joe-average programmer and
> like subscribing to newsgroups like this to hear what the big brains
> are saying :)
>
> On Fri, Jun 15, 2012 at 11:45 AM, Paul Homer  wrote:
>> I see something deeper in what Zed is saying.
>>
>> My first really strong experiences with programming came from the
>> data-structures world in the late 80s at the University of Waterloo. There
>> was an implicit view that one could decompose all problems into
>> data-structures (and a few algorithms and a little bit of glue). My sense at
>> the time was that the newly emerging concepts of OO were a way of
>> entrenching this philosophy directly into the programming languages.
>>
>> When applied to tasks like building window systems, OO is an incredibly
>> powerful approach. If one matches what they are seeing on the screen with
>> the objects they are building in the back, there is a strong one-to-one
>> mapping that allows the programmer to rapidly diagnose problems at a speed
>> that just wasn't possible before.
>>
>> But for many of the things that I've built in the back-end I find that OO
>> causes me to jump through what I think are artificial hoops. Over the years
>> I've spent a lot of time pondering why. My underlying sense is that there
>> are some fundamental dualities in computational machines. Static vs.
>> dynamic. Data vs. code. Nouns vs. verbs. Location vs. time. It is possible,
>> of course, to 'cast' one onto the other, there are plenty of examples of
>> 'jumping' particularly in languages wrt. nouns and verbs. But I think that
>> decompositions become 'easier' for us to understand when we partition them
>> along the 'natural' lines of what they are underneath.
>>
>> My thinking some time ago as it applies to OO is that the fundamental
>> primitive, an object, essentially mixes its metaphors (sort of). That is, it
>> contains both code and data. I think it's this relatively simple point that
>> underlies the problems that people have in grokking OO. What I've also found
>> is that that wasn't there in that earlier philosophy at Waterloo. Sure there
>> were atomic primitives attached to each data-structure, but the way we build
>> heavy-duty mechanics was more often to push the 'actions' to something like
>> an intermediary data-structure and then do a clean simple traversal to
>> actuate it (like lisp), so fundamentally the static/dynamic duality was
>> daintily skipped over.
>>
>> It is far more than obvious that OO opened the door to allow massive
>> systems. Theoretically they were possible before, but it gave us a way to
>> manage the complexity of these beasts. Still, like all technologies, it
>> comes with a built-in 'threshold' that imposes a limit on what we can build.
>> If we are to exceed that, then I think we are in the hunt for the next
>> philosophy and as Zed points out the ramification of finding it will cause
>> yet another technological wave to overtake the last one.
>>
>> Just my thoughts.
>>
>>
>> Paul.
>>
>>



-- 
Best regards,
Igor Stasenko.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread Miles Fidelman


Igor Stasenko wrote:

On 16 June 2012 02:23, Mark Haniford  wrote:

Paul,

I found your post interesting in that it might reflect a fundamental
problem that I have with "normal, average" OO, and that is that
methods belong with data.  I have never bought that idea, ever.   I
remember feeling stupid because I could never grok that idea and then
felt better when the chief scientist at Franz (who of course produce
Franz Lisp) said that functions don't belong with data.  So of course
in Common Lisp we have generic functions.


I also thought that so-called OO is just a mere coupling of data and functions,
but after learning Smalltalk, and especially image-based development,
I came to the conclusion that it is often not necessary to separate those two;
it's all just data or, if you like another word, information.
Information describing some numbers or connections between multiple
entities, or their groups, and information which describes how the
computer system operates with those entities.



The problem I've always had with OO paradigms is that they simply ignore 
flow-of-control.


Define classes -> instantiate objects -> a miracle occurs

- some languages are single threaded: start a main loop going in a 
master object, it starts generating messages to other objects, the flow 
of control gets very murky, very quickly


- other languages/systems are event driven - multiple event-responses 
can be in process, but just try to understand the flow of messages, and 
the resulting system state, that result while a myriad of events are 
being processed


Traditional procedural languages impose some discipline in thinking 
about flow-of-control, subroutine calls and returns, etc.


Actor formalisms (e.g., as in Erlang) impose strict constraints that 
enable massive concurrency without having to worry (too much) about 
things like deadlocks, race conditions, deadly embraces, and so on.


The OO paradigm simply seems to hide flow-of-control in ways that make 
it very hard to even think about the issues.  Has always struck me as a 
recipe for disaster when building seriously complex systems.


Miles Fidelman


--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread John Zabroski
On Jun 15, 2012 2:39 PM, "Pascal J. Bourguignon" 
wrote:
>
> John Zabroski  writes:
>
>
> > Sorry, you did not answer my question, but instead presented excuses
> > for why programmers misunderstand people.  (Can I paraphrase your
> > thoughts as, "Because people are not programmers!")
>
> No, you misunderstood my answer:
> "Because people don't pay programmers enough."

In the words of comedian Spike Milligan, "All I ask is for the chance to
prove money can't make me happy."

But my motto comes from pianist Glenn Gould: the ideal ratio of performers
to audience is one. I have never seen a software team produce better
results with better pay, but most of the great advances in software came
from somebody doing something differently because any other way was simply
wrong.

Having seen millionaires throw their money around to build their dream app
(the Chandler project featured in Scott Rosenberg's book Dreaming in Code
and all of Sandy Klausner's vaporware graphical programming ideas), and
seeing what road blocks still remained, I disbelieve your answer.

Who invented the spreadsheet? One person.
Who invented pivot tables? One person.
Who invented modeless text editing? One person.

How much money is enough, anyway?  In the words of John D. Rockefeller, "A
little bit more"?


Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread John Zabroski
On Fri, Jun 15, 2012 at 10:52 PM, Miles Fidelman  wrote:

> Pascal J. Bourguignon wrote:
>
>> John Zabroski  writes:
>>
>>
>>  Sorry, you did not answer my question, but instead presented excuses
>>> for why programmers misunderstand people.  (Can I paraphrase your
>>> thoughts as, "Because people are not programmers!")
>>>
>> No, you misunderstood my answer:
>> "Because people don't pay programmers enough."
>>
>>
>>  I think that might be an inaccurate statement in two regards:
>
> - programmers make VERY good money, at least in some fields (if you know
> where to find good, cheap coders, in the US, let me know where)
>
> - (a lot of programmers) do NOT have a particularly user-focused mindset
> (just ask a C coder what they think of Hypercard - you'll get all kinds of
> answers about why end-users can't do anything useful; despite a really long
> track record of good stuff written in Hypercard, particularly by educators)
>
> Note: This is irrelevant vis-a-vis Jon's question, however.  The answer to
> why he can't find easy ways to upload files is that he isn't looking.



I have probably spent the better part of my life looking for examples of
good design, and my conclusion is there are so few good designers. Even
fewer non-designers actually notice a good designer's abilities to put them
in a position to succeed.  For that reason, we are languishing under the
canopy of a squalid subcultural darkness where I get answers saying greed
is good and will solve all.

Your answer and Pascal's can only be described as the answers adults would
give.  My question was a thought exercise, to build in the world of
imagination.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Randy MacDonald

On 6/10/2012 1:15 AM, BGB wrote:
meanwhile, I have spent several days on-off pondering the mystery of 
if there is any good syntax (for a language with a vaguely C-like 
syntax), to express the concept of "execute these statements in 
parallel and continue when all are done".

I believe that the expression in Dyalog APL is:

⍎&¨statements

or

{execute}{spawn}{each}statements.

--
---
|\/| Randy A MacDonald   | If the string is too tight, it will snap
|\\| array...@ns.sympatico.ca|   If it is too loose, it won't play...
 BSc(Math) UNBF '83  | APL: If you can say it, it's done.
 Natural Born APL'er | I use Real J
 Experimental webserver http://mormac.homeftp.net/
<-NTP>{ gnat }-



Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread David Leibs
1&+
-djl
On Jun 16, 2012, at 7:19 AM, Randy MacDonald wrote:

> On 6/10/2012 1:15 AM, BGB wrote:
>> meanwhile, I have spent several days on-off pondering the mystery of if 
>> there is any good syntax (for a language with a vaguely C-like syntax), to 
>> express the concept of "execute these statements in parallel and continue 
>> when all are done".
> I believe that the expression in Dyalog APL is:
> 
> ⍎&¨statements
> 
> or
> 
> {execute}{spawn}{each}statements.
> 
> -- 
> ---
> |\/| Randy A MacDonald   | If the string is too tight, it will snap
> |\\| array...@ns.sympatico.ca|   If it is too loose, it won't play...
> BSc(Math) UNBF '83  | APL: If you can say it, it's done.
> Natural Born APL'er | I use Real J
> Experimental webserver http://mormac.homeftp.net/
> <-NTP>{ gnat }-
> 



Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 9:19 AM, Randy MacDonald wrote:

On 6/10/2012 1:15 AM, BGB wrote:
meanwhile, I have spent several days on-off pondering the mystery of 
if there is any good syntax (for a language with a vaguely C-like 
syntax), to express the concept of "execute these statements in 
parallel and continue when all are done".

I believe that the expression in Dyalog APL is:

⍎&¨statements

or

{execute}{spawn}{each}statements.



I recently thought about it off-list, and came up with a syntax like:
async! {A}&{B}&{C}

but, decided that this isn't really needed at the more moment, and is a 
bit "extreme" of a feature anyways (and would need to devise a mechanism 
for implementing a multi-way join, ...).


actually, probably in my bytecode it would look something like:
mark
mark; push A; close; call_async
mark; push B; close; call_async
mark; push C; close; call_async
multijoin

(and likely involve adding some logic into the green-thread scheduler...).


ended up basically opting in this case for something simpler which I had 
used in the past:
callback events on timers. technically, timed callbacks aren't really 
"good", but they work well enough for things like animation tasks, ...


but, I may still need to think about it.



Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Randy MacDonald
@BGB, if the braces around the letters defer execution, as my memories 
of Perl confirm, this is perfect.  With APL, quoting an expression 
accomplishes the same end: '1+1'



On another note, I agree with the thesis that OO is just message passing:

  aResult ← someParameters 'messageName' to anObject ⍝⍝ so, once 
'to' is defined, APL does OO.


I was thinking 'new' didn't fit, but

   'new' to aClass

convinced me otherwise.

It also means that 'object oriented language' is a category error.

On 6/16/2012 11:40 AM, BGB wrote:


I recently thought about it off-list, and came up with a syntax like:
async! {A}&{B}&{C}



--
---
|\/| Randy A MacDonald   | If the string is too tight, it will snap
|\\| array...@ns.sympatico.ca|   If it is too loose, it won't play...
 BSc(Math) UNBF '83  | APL: If you can say it, it's done.
 Natural Born APL'er | I use Real J
 Experimental webserver http://mormac.homeftp.net/
<-NTP>{ gnat }-



Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 10:05 AM, Randy MacDonald wrote:
@BGB, if the braces around the letters defer execution, as my 
memories of Perl confirm, this is perfect.  With APL, quoting an 
expression accomplishes the same end: '1+1'




no, the braces indicate a code block (in statement context), and it is 
the "async" keyword which indicates that there is deferred execution. 
(in my language, quoting indicates symbols or strings, as in "this is a 
string", 'a', or 'single-quoted string', where "a" is always a string, 
but 'a' is a character-literal).


in an expression context, the braces indicate creation of an ex-nihilo 
object, as in "{x: 3, y: 4}".


the language sort-of distinguishes between statements and expressions, 
but this is more relaxed than in many other languages (it is more built 
on "context" than on a strict syntactic divide, and in most cases an 
explicit "return" is optional since any statement/expression in "tail 
position" may implicitly return a value).



the letters in this case were just placeholders for the statements which 
would go in the blocks.


for example:
if(true)
{
printf("A\n");
sleep(1000);
printf("B\n");
sleep(1000);
}
printf("Done\n");

executes the print statements synchronously, causing the thread to sleep 
for 1s in the process (so, "Done" is printed 1s after "B").


and, with a plain "async" keyword:
async {
sleep(1000);
printf("A\n");
}
printf("Done\n");

will print "Done" first, and then print "A" about 1 second later (since 
the block is folded into another thread).


technically, there is another operation, known as a join.

var a = async { ... };
...
var x = join(a);

where the "join()" will block until the given thread has returned, and 
return the return value from the thread.
generally though, a "join" in this form only makes sense with a single 
argument (and would be implemented in the VM using a special bytecode op).


an extension would be to implicitly allow multiple joins, as in:
join(a, b, c); // wait on 3 threads
except, now, the return value doesn't make much sense anymore, and likewise:
join(
async{A},
async{B},
async{C});
is also kind of ugly.

in this case, a syntax like:
async! {A}&{B}&{C};
could be used, although this could also work:
async! {A}, {B}, {C};

either would basically mean "async with join", and essentially mean 
something similar to the 3-way join (basically, as syntax sugar). it may 
also imply "we don't really care what the return value is".


basically, the "!" suffix has ended up on several of my keywords to 
indicate "alternate forms", for example: "a as int" and "a as! int" will 
have slightly different semantics (the former will return "null" if the 
cast fails, and the latter will throw an exception).



but, since I got to thinking about it again, I started writing up more 
of the logic for this (adding multiway join logic, ...).





On another note, I agree with the thesis that OO is just message passing:

  aResult ← someParameters 'messageName' to anObject ⍝ so, once 
'to' is defined, APL does OO.


I was thinking 'new' didn't fit, but

   'new' to aClass

convinced me otherwise.

It also means that 'object oriented language' is a category error.



my language is a bit more generic, and loosely borrows much of its 
current syntax from JavaScript and ActionScript.


however, a fair number of non-JS features and semantics exist as 
well.
it is hardly an elegant, cleanly designed, or minimal language, but it 
works, and is a design more based on being useful to myself.




On 6/16/2012 11:40 AM, BGB wrote:


I recently thought about it off-list, and came up with a syntax like:
async! {A}&{B}&{C}



--
---
|\/| Randy A MacDonald   | If the string is too tight, it will snap
|\\|array...@ns.sympatico.ca|   If it is too loose, it won't play...
  BSc(Math) UNBF '83  | APL: If you can say it, it's done.
  Natural Born APL'er | I use Real J
  Experimental webserver http://mormac.homeftp.net/
<-NTP>{ gnat }-


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Randy MacDonald
@BGB, by the 'same end' I meant transforming a statement into something 
that a flow control operator can act on, like if () {...} else {}.  The 
domain of the execute operator in APL is quoted strings.  I did not mean 
that the same end was allowing asynchronous execution.



On 6/16/2012 1:23 PM, BGB wrote:
[...]











Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 11:36 AM, Randy MacDonald wrote:
@BGB, by the 'same end' i meant transforming a statement into something 
that a flow control operator can act on, like if () {...} else {}  The 
domain of the execute operator in APL is quoted strings.  I did not 
mean that the same end was allowing asynchronous execution.




yes, ok.

just, a lot of this logic is hard-coded into the parser and compiler 
though; but yeah, I think I understand what is meant here.



FWIW, it is possible to use code-blocks at runtime by using the syntax 
"fun{...}", which is basically a shorthand equivalent to "function() { 
... }" (this will create closures if there are any captured bindings, 
but otherwise will create a raw block).


(by default, bindings are captured by identity, and may outlive the 
parent scope).


note that "async{...}" will also work similarly to a closure by default, 
so that variables will be captured by-reference. technically, it is also 
possible to write something like: "async(i){...}" which would capture 
'i' by-value (this being because, internally, async is implemented by 
calling a closure in a newly-created green-thread, and in the case where 
variables are used, they are treated as arguments, with the closure 
having a matching argument list).


the reason for this latter form of async is to allow things like:
for(i=0; i<16; i++)
async(i) { ... }

where each would capture the value of 'i' (rather than the variable 'i').
something vaguely similar could be possible with closures, say: 
"fun[i]{...}", but thus far nothing along these lines has been 
implemented (and would require altering how closures work).
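The capture-by-reference vs. capture-by-value distinction described above has a well-known analogue in Python, where closures capture variables rather than values; binding the loop variable as a default argument plays roughly the role of `async(i){...}`. A sketch:

```python
import threading

# Capture "by reference": each closure sees the variable i itself, so
# by the time the threads actually run (after the loop has finished),
# every one of them observes the final value of i.
by_ref = []
threads = []
for i in range(4):
    threads.append(threading.Thread(target=lambda: by_ref.append(i)))
for t in threads:
    t.start()
for t in threads:
    t.join()
# by_ref is [3, 3, 3, 3]

# Capture "by value", analogous to async(i){...}: freeze the current
# value of i per thread by binding it as a default argument.
by_val = []
threads = []
for i in range(4):
    threads.append(threading.Thread(target=lambda i=i: by_val.append(i)))
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(by_val))  # prints [0, 1, 2, 3]: each thread saw its own i
```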





On 6/16/2012 1:23 PM, BGB wrote:
[...]

Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 11:36 AM, Randy MacDonald wrote:
@BGB, by the 'same end' i meant transforming a statement into something 
that a flow control operator can act on, like if () {...} else {}  The 
domain of the execute operator in APL is quoted strings.  I did not 
mean that the same end was allowing asynchronous execution.




side note:
a lot of how this is implemented goes back to how it was originally 
designed.


originally, the main use of the "call_async" opcode was not for async 
blocks, but rather for explicit asynchronous function calls:
foo!(...); // calls function, doesn't wait for return (return value is 
a thread-handle).

likewise:
join(foo!(...));
would call a function asynchronously, and join against the result 
(return value).


async was also later added as a modifier:
async function bar(...) { ... }

where the function will be called asynchronously by default:
bar(...); // will perform an (implicit) async call

for example, it was also possible to use a lot of this to pass messages 
along channels:

chan!(...); // send a message, don't block for receipt
chan(...);  // send a message, blocking (would wait for other end to join)
join(chan); // get a message from the channel, blocks until one arrives
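The non-blocking send and blocking receive above map fairly directly onto a queue plus a thread; here is a rough Python analogue (the names `Channel`, `send_nowait`, and `recv` are mine, not anything from the 2004 VM):

```python
import queue
import threading

class Channel:
    """Toy channel: send_nowait ~ chan!(...), recv ~ join(chan)."""
    def __init__(self):
        self._q = queue.Queue()

    def send_nowait(self, msg):
        # like chan!(...): enqueue the message and return immediately
        self._q.put(msg)

    def recv(self):
        # like join(chan): block until a message is available
        return self._q.get()

chan = Channel()
threading.Thread(target=lambda: chan.send_nowait("hello")).start()
print(chan.recv())  # prints "hello"
```

Note this does not model the blocking-send form `chan(...)` (which waits for the other end to join); that would correspond to a zero-capacity rendezvous rather than a buffered queue.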

a lot of this though was in the 2004 version of the language (the VM was 
later re-implemented, twice), and some hasn't been fully reimplemented 
(the 2004 VM was poorly implemented and very slow).


the async-block syntax was added later, and partly built on the concept 
of async calls.



but, yeah, probably a lot of people here have already seen stuff like 
this before.





On 6/16/2012 1:23 PM, BGB wrote:
[...]

Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Shawn Morel
> My first really strong experiences with programming came from the 
> data-structures world in the late 80s at the University of Waterloo.

Glad to know that the approach I was taught at Waterloo 2 decades later didn't 
change that much :)

Seriously though, I think the lack of better ways to manage artificial 
complexity indicates our field is lacking a "true architecture". Rome 
(aqueducts, arches, and great buildings) was not built just because the 
Romans had more bricks and slaves than the Egyptians.

> There was an implicit view that one could decompose all problems into 
> data-structures (and a few algorithms and a little bit of glue).

I actually agree completely. Rich Hickey is clearly in this camp with his 
approach to clojure and exemplified in his talk "Simple Made Easy". 
http://www.infoq.com/presentations/Simple-Made-Easy

I think the problem is that OO has been pitched as an alternative to structured 
programming rather than a complement. For example, the fact that you can make a 
function call (C-style) doesn't mean that what happens inside the function isn't 
sequential machine instructions - it's just a way of abstracting. Similarly, 
objects should NOT be about the data + code point of view that Java and C++ 
have pushed, but about how an object creates a boundary (like lipid membranes for 
cells) - the boundary and the messages are what's important. What happens "inside" 
very well might be a data-driven / decomposition approach.


> But for many of the things that I've built in the back-end I find that OO 
> causes me to jump through what I think are artificial hoops.

Totally agreed, when I'm "inside" the "back-end" I really want to look at data 
structures and operate on them. Inevitably, to allow "other things" to 
interface with and use the backend, I come up with a 1-off facade pattern.

- collection of name-spaced c functions that take an opaque struct
- the sys-call interface to an OS
- countless protocols to remote servers (rpc, http, soap...)

I find "Delimited continuations in operating systems" by Oleg Kiselyov and 
Chung-chieh Shan really interesting in highlighting this.


> Over the years I've spent a lot of time pondering why. My underlying sense is 
> that there are some fundamental dualities in computational machines. Static 
> vs. dynamic. Data vs. code. Nouns vs. verbs. Location vs. time. It is 
> possible, of course, to 'cast' one onto the other, there are plenty of 
> examples of 'jumping' particularly in languages wrt. nouns and verbs. But I 
> think that decompositions become 'easier' for us to understand when we 
> partition them along the 'natural' lines of what they are underneath.

Yes, I think that's one of the points of fonc - find more optimal 
representations of meaning and representations of execution. Goedel Escher Bach 
is at least in part focused heavily on this duality between encodings of 
meaning in static representations vs dynamic executions of systems. A finite 
static representation can represent something infinite at execution (e.g. 
regex's Kleene*). Likewise, finite computation descriptions can take an 
infinite amount of static meaning definition for the generated artifact (any 
sort of fractal generating code).


> My thinking some time ago as it applies to OO is that the fundamental 
> primitive, an object, essentially mixes its metaphors (sort of).

Yes, because I think the meaning of object in pop culture is not the right 
thing. They took what they knew (structured programming) and the "new"-ness of 
objects and made algebraic data types with named slots (dicts that are a finite 
map).

The metaphor I use is imagine if mechanical engineering worked like "OO" 
software. 1 engineer whips up a steel I-beam in CAD and commits it to the repo. 
Internally it's obviously made up of atoms which have quarks with spins. 
Externally, steel beams have well understood properties and will hold up a sky 
scraper when done right. Another engineer comes along, sees this beam thing and 
notices that it has almost the same structure as this rubber bungie cord he 
needs to build. Thankfully, the inner quark "object" has setters for count and 
spin - "SWEET code re-use!" he thinks.

It's the type of thing you could never do in real life.

You might think this is hyperbole, but until the lipid membrane came along 
there could NOT be life since everything was a violent pile of ionized atoms. 
Electrons would be ripped away and molecules ripped apart. The information 
encoded in DNA could NOT exist until the lipid membrane came along - a 
protective membrane to allow separation of what kind of "computation" could 
happen "where" + a way of "sensing" and knowing things that happen around "you".

Simply, I think we need a better abstraction tool beyond what we have today 
with the function. Yes, the abstraction might be "built / simulated" out of 
functions - see the lambda papers. Lipid membranes are also just an illusion of 
encapsulation - permeable things made out of the same(ish) low

Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Wesley Smith
> If things are expanding then they have to get more complex, they encompass
> more.

Aside from intuition, what evidence do you have to back this statement
up?  I've seen no justification for this statement so far.  Biological
systems naturally make use of objects across vastly different scales
to increase functionality with a much less significant increase in
complexity.  Think of how early cells incorporated mitochondria whole
hog to produce a new species.

Also, I think talking about minimum bits of information is not the
best view onto the complexity problem.  It doesn't account for
structure at all.  Instead, why don't we talk about Gregory Chaitin's
[1] notion of a minimal program.  An interesting biological parallel
to compressing computer programs can be found in looking at bacteria
DNA.  For bacteria near undersea vents where it's very hot and genetic
code transcriptions can easily go awry due to thermal conditions, the
bacteria's genetic code has evolved into a compressed form that reuses
chunks of itself to express the same features that would normally be
spread out in a larger sequence of DNA.

wes

[1] http://www.umcs.maine.edu/~chaitin/


Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Miles Fidelman

Wesley Smith wrote:

If things are expanding then they have to get more complex, they encompass
more.

Aside from intuition, what evidence do you have to back this statement
up?  I've seen no justification for this statement so far.


As I recall, there was a recent Nobel prize that boiled down to: 
Increase the energy flowing into a system, and new, more complex, 
behaviors arise.

Biological
systems naturally make use of objects across vastly different scales
to increase functionality with a much less significant increase in
complexity.  Think of how early cells incorporated mitochondria whole
hog to produce a new species.


Encapsulating complexity (e.g., in mitochondria) doesn't eliminate 
complexity.  Encapsulation and layering MANAGE complexity, allowing new 
layers of complexity to be constructed (or emerge) through combinations 
of more complicated building blocks.


Miles Fidelman


--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 1:39 PM, Wesley Smith wrote:

If things are expanding then they have to get more complex, they encompass
more.

Aside from intuition, what evidence do you have to back this statement
up?  I've seen no justification for this statement so far.  Biological
systems naturally make use of objects across vastly different scales
to increase functionality with a much less significant increase in
complexity.  Think of how early cells incorporated mitochondria whole
hog to produce a new species.


in code, the latter example is often called "copy / paste".
some people demonize it, but if a person knows what they are doing, it 
can be used to good effect.


a problem is partly how exactly one defines "complex":
one definition is in terms of "visible complexity", where basically 
adding a feature causes code to become harder to understand, more 
tangled, ...


another definition, apparently more popular among programmers, is to 
simply obsess on the total amount of code in a project, and just 
automatically assume that a 1 Mloc project is much harder to understand 
and maintain than a 100 kloc project.


if the difference is that the smaller project consists almost entirely 
of hacks and jury-rigging, it isn't necessarily much easier to understand.


meanwhile, building abstractions will often increase the total code size 
(IOW: adding complexity), but consequently make the code easier to 
understand and maintain (reducing visible complexity).


often the code using an abstraction will be smaller, but usually adding 
an abstraction will add more total code to the project than that saved 
by the code which makes use of it (except past a certain point, namely 
where the redundancy from the client code will outweigh the cost of the 
abstraction).



for example:
MS-DOS is drastically smaller than Windows;
but, if most of what we currently have on Windows were built directly on 
MS-DOS (with nearly every app providing its own PMode stuff, driver 
stack, ...), then the total wasted HD space would likely be huge.


and, developing a Windows-like app on Windows is much less total effort 
than doing similar on MS-DOS would be.




Also, I think talking about minimum bits of information is not the
best view onto the complexity problem.  It doesn't account for
structure at all.  Instead, why don't we talk about Gregory Chaitin's
[1] notion of a minimal program.  An interesting biological parallel
to compressing computer programs can be found in looking at bacteria
DNA.  For bacteria near undersea vents where it's very hot and genetic
code transcriptions can easily go awry due to thermal conditions, the
bacteria's genetic code has evolved into a compressed form that reuses
chunks of itself to express the same features that would normally be
spread out in a larger sequence of DNA.


yep.

I have sometimes wondered what an organism which combined most of the 
best parts of "what nature has to offer" would look like (an issue seems 
to be that most major organisms seem to be more advanced in some ways 
and less advanced in others).




wes

[1] http://www.umcs.maine.edu/~chaitin/




Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread David Barbour
On Fri, Jun 15, 2012 at 1:38 PM, Paul Homer  wrote:

> there is some underlying complexity tied to the functionality that
> dictates that it could never be any less the X lines of code. The system
> encapsulates a significant amount of information, and stealing from Shannon
> slightly, it cannot be represented in any less bits.
>

A valid question might be: how much of this information should be
represented in code? How much should instead be heuristically captured by
generic machine learning techniques, indeterminate STM solvers, or
stability models? I can think of much functionality today for control
systems, configurations, UIs, etc. that would be better (more adaptive,
reactive, flexible) achieved through generic mechanisms.

Sure, there is a "minimum number of bits" to represent information in the
system, but code is a measure of human effort, not information in general.


>
> If things are expanding then they have to get more complex, they encompass
> more.
>

Complexity can be measured by number of possible states or configurations,
and in that sense things do get more complex as they scale. But they don't
need to become more *complicated*. The underlying structure can become
simpler, more uniform, especially compared to what we have today.

Regards,

David

-- 
bringing s-words to a pen fight


Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Miles Fidelman

BGB wrote:


a problem is partly how exactly one defines "complex":
one definition is in terms of "visible complexity", where basically 
adding a feature causes code to become harder to understand, more 
tangled, ...


another definition, apparently more popular among programmers, is to 
simply obsess on the total amount of code in a project, and just 
automatically assume that a 1 Mloc project is much harder to 
understand and maintain than a 100 kloc project.


And there are functional and behavioral complexity - i.e., REAL 
complexity, in the information theory sense.


I expect that there is some correlation between minimizing visual 
complexity and lines of code (e.g., by using domain specific languages), 
and being able to deal with more complex problem spaces and/or develop 
more sophisticated approaches to problems.


Miles



--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 2:20 PM, Miles Fidelman wrote:
[...]




a lot depends on what code is being abstracted, and how much code can be 
reduced by how much.


if the DSL makes a lot of code a lot smaller, it will have a good effect;
if it only makes a little code only slightly smaller, it may make the 
total project larger.



personally, I tend not to worry too much about total LOC, and am more 
concerned with how much personal effort is required (to 
implement/maintain/use it), and how well it will work (performance, 
memory use, reliability, ...).


but, I get a lot of general criticism for how I go about doing things...




Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread Pascal J. Bourguignon
John Zabroski  writes:

> On Jun 15, 2012 2:39 PM, "Pascal J. Bourguignon"  
> wrote:
>>
>> John Zabroski  writes:
>>
>>
>> > Sorry, you did not answer my question, but instead presented excuses
>> > for why programmers misunderstand people.  (Can I paraphrase your
>> > thoughts as, "Because people are not programmers!") 
>>
>> No, you misunderstood my answer:
>> "Because people don't pay programmers enough."
>
> In the words of comedian Spike Milligan, "All I ask is for the chance to 
> prove money can't make me happy."
>
> But my motto comes from pianist Glenn Gould: the ideal ratio of performers to 
> audience is one. I have never seen a software team produce better results 
> with better pay, but most of
> the great advances in software came from somebody doing something differently 
> because any other way was simply wrong.
>
> Having seen millionaires throw their money around to build their dream app 
> (the Chandler project featured in Scott Rosenberg's book Dreaming in Code and 
> all of Sandy Klausner's
> vaporware graphical programming ideas), and seeing what road blocks still 
> remained, I disbelieve your answer.
>
> Who invented the spreadsheet? One person.
> Who invented pivot tables? One person.
> Who invented modeless text editing? One person.
>
> How much money is enough, anyway?  In the words of John D. Rockefeller, "A 
> little bit more"?

I wasn't speaking of the works of art that programmers would produce anyway.

I was speaking of what the customers want.  If they want the same service
as plumbers offer (you don't hand the spanner to the plumber, you don't
bring your own pipes, and you don't get wet; you just call him and let him
deal with the leak: a simple and nice user interface, a good end result,
and the hefty bill included), then you'll have to pay the same hourly
rates as you pay plumbers.  Just google some statistics.


-- 
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread GrrrWaaa

On Jun 16, 2012, at 12:07 PM, Miles Fidelman wrote:

> Wesley Smith wrote:
>>> If things are expanding then they have to get more complex, they encompass
>>> more.
>> Aside from intuition, what evidence do you have to back this statement
>> up?  I've seen no justification for this statement so far.
> 
> As I recall, there was a recent Nobel prize that boiled down to: Increase the 
> energy flowing into a system, and new, more complex, behaviors arise.

Are you thinking of Prigogine's dissipative structures? Nobel laureate in 1977.
http://en.wikipedia.org/wiki/Ilya_Prigogine


Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread GrrrWaaa

On Jun 15, 2012, at 12:17 PM, David Leibs wrote:

> As children we spend a lot of time practicing adding up numbers. Humans are 
> very bad at this if you measure making a silly error as bad. Take for example:
> 
>365
> +  366
> --
> 
> this requires you to add 5 & 6, write down 1 and carry 1 to the next column
> then add 6, 6, and that carried 1 and write down 2 and carry a 1 to the next 
> column
> finally add 3, 3 and the carried 1 and write down 7
> this gives you 721, oops, the wrong answer.  In step 2 I made a totally 
> dyslexic mistake and should have written down a 3.
> 
> Ken proposed learning to see things a bit differently and remember the  
> digits are a vector times another vector of powers.
> Ken would have you see this as a two step problem with the digits spread out.
> 
>3   6   5
> +  3   6   6
> 
> 
> Then you just add the digits. Don't think about the carries.
> 
>3   6   5
> +  3   6   6
> 
>6  12  11
> 
> 
> Now we normalize by dealing with the carry part, moving from right to left 
> in fine APL style. You can almost see the implied loop using residue and 
> n-residue.
> 6  12 11
> 6  13  0
> 7   3  0
> 
> Ken believed that this two stage technique was much easier for people to get 
> right.  

I'm not sure the argument holds: the answer should be 731. :-)

But, to be fair, spreading out the calculation like this makes it easier to 
debug and find the place where it went awry. Ha - I never thought of that 
before - writing out proofs in math problems is as much debugging as it is 
verifying! Maybe programming interfaces could help us debug by more readily 
showing the 'reasoning' behind a particular value or state, the particular 
data/control-flows that led to it. Like picking up the program-mesh by holding 
the result value we are interested in, and seeing the connected inputs draping 
away to the floor.
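The two-stage technique quoted above can be sketched in a few lines of Python (the function names are mine, not Ken's; this is just an illustrative sketch, and it reproduces the corrected result of 731):

```python
def digit_add(a, b):
    """Stage 1: add corresponding digits with no carrying.
    Assumes both digit vectors have the same length."""
    return [x + y for x, y in zip(a, b)]

def normalize(digits):
    """Stage 2: resolve carries, sweeping right to left APL-style."""
    digits = list(digits)
    carry = 0
    for i in range(len(digits) - 1, -1, -1):
        total = digits[i] + carry
        digits[i] = total % 10   # residue: the digit that stays
        carry = total // 10      # the part carried leftward
    if carry:
        digits.insert(0, carry)
    return digits

raw = digit_add([3, 6, 5], [3, 6, 6])
print(raw)             # [6, 12, 11], the un-normalized digit sums
print(normalize(raw))  # [7, 3, 1], i.e. 731
```

Keeping the two stages separate is exactly what makes the intermediate state ([6, 12, 11]) inspectable, which is the debugging point made above.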




Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread David Leibs
I really like your observation about debugging.  The error you saw was bad 
copying from another workspace.  Totally botched.  My email proofreading 
skills are totally lacking as well.  In general I will get everything I try 
to do initially wrong, and if I don't get something "very wrong" every 30 
minutes then I am not doing anything.

-David Leibs

On Jun 17, 2012, at 9:49 AM, GrrrWaaa wrote:

> 
> On Jun 15, 2012, at 12:17 PM, David Leibs wrote:
> 
>> As children we spend a lot of time practicing adding up numbers. Humans are 
>> very bad at this if you measure making a silly error as bad. Take for 
>> example:
>> 
>>   365
>> +  366
>> --
>> 
>> this requires you to add 5 & 6, write down 1 and carry 1 to the next column
>> then add 6, 6, and that carried 1 and write down 2 and carry a 1 to the next 
>> column
>> finally add 3, 3 and the carried 1 and write down 7
>> this gives you 721, oops, the wrong answer.  In step 2 I made a totally 
>> dyslexic mistake and should have written down a 3.
>> 
>> Ken proposed learning to see things a bit differently and remember the  
>> digits are a vector times another vector of powers.
>> Ken would have you see this as a two step problem with the digits spread out.
>> 
>>   3   6   5
>> +  3   6   6
>> 
>> 
>> Then you just add the digits. Don't think about the carries.
>> 
>>   3   6   5
>> +  3   6   6
>> 
>>   6  12  11
>> 
>> 
>> Now we normalize by dealing with the carry part, moving from right to 
>> left in fine APL style. You can almost see the implied loop using residue 
>> and n-residue.
>> 6  12 11
>> 6  13  0
>> 7   3  0
>> 
>> Ken believed that this two stage technique was much easier for people to get 
>> right.  
> 
> I'm not sure the argument holds: the answer should be 731. :-)
> 
> But, to be fair, spreading out the calculation like this makes it easier to 
> debug and find the place where it went awry. Ha - I never thought of that 
> before - writing out proofs in math problems is as much debugging as it is 
> verifying! Maybe programming interfaces could help us debug by more readily 
> showing the 'reasoning' behind a particular value or state, the particular 
> data/control-flows that led to it. Like picking up the program-mesh by 
> holding the result value we are interested in, and seeing the connected 
> inputs draping away to the floor.


Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread David Leibs
Thanks for the link.  This thread has had me thinking quite a bit about the 
Central Limit Theorem from probability.

http://en.wikipedia.org/wiki/Central_limit_theorem

It explains why so many of our measurements result in normal distributions.
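As a quick illustration (a simulation sketch of my own, not from the thread): sum many independent uniform draws, and the distribution of the sums comes out approximately normal, with the mean and spread the Central Limit Theorem predicts.

```python
import random
import statistics

random.seed(1)

# Each sample is the sum of 30 independent uniform(0, 1) draws.
# By the Central Limit Theorem the sums are approximately normal.
sums = [sum(random.random() for _ in range(30)) for _ in range(10000)]

# Theory: mean ~ 30 * 0.5 = 15, stdev ~ sqrt(30 * 1/12) ~ 1.58
print(statistics.mean(sums))
print(statistics.stdev(sums))
```

Any measurement that is itself the sum of many small independent contributions behaves the same way, which is why normal distributions turn up so often.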

-David Leibs

On Jun 17, 2012, at 9:36 AM, GrrrWaaa wrote:

> 
> On Jun 16, 2012, at 12:07 PM, Miles Fidelman wrote:
> 
>> Wesley Smith wrote:
>>>> If things are expanding then they have to get more complex, they encompass
>>>> more.
>>> Aside from intuition, what evidence do you have to back this statement
>>> up?  I've seen no justification for this statement so far.
>> 
>> As I recall, there was a recent Nobel prize that boiled down to: Increase 
>> the energy flowing into a system, and new, more complex, behaviors arise.
> 
> Are you thinking of Prigogine's dissipative structures? Nobel laureate in 
> 1977.
> http://en.wikipedia.org/wiki/Ilya_Prigogine


Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread Miles Fidelman

GrrrWaaa wrote:

On Jun 16, 2012, at 12:07 PM, Miles Fidelman wrote:


Wesley Smith wrote:

If things are expanding then they have to get more complex, they encompass
more.

Aside from intuition, what evidence do you have to back this statement
up?  I've seen no justification for this statement so far.

As I recall, there was a recent Nobel prize that boiled down to: Increase the 
energy flowing into a system, and new, more complex, behaviors arise.

Are you thinking of Prigogine's dissipative structures? Nobel laureate in 1977.
http://en.wikipedia.org/wiki/Ilya_Prigogine


That's the one.  Thanks!

Miles



--
In theory, there is no difference between theory and practice.
In practice, there is.  -- Yogi Berra



Re: [fonc] The Web Will Die When OOP Dies

2012-06-17 Thread Toby Schachman
On Sat, Jun 16, 2012 at 12:18 PM, David Barbour  wrote:
>
> A valid question might be: how much of this information should be
> represented in code? How much should instead be heuristically captured by
> generic machine learning techniques, indeterminate STM solvers, or stability
> models? I can think of much functionality today for control systems,
> configurations, UIs, etc. that would be better (more adaptive, reactive,
> flexible) achieved through generic mechanisms.
>
> Sure, there is a "minimum number of bits" to represent information in the
> system, but code is a measure of human effort, not information in general.

I think you'd be interested in this work,
http://wekinator.cs.princeton.edu/

The idea is to build electronic musical instruments by training
supervised machine learning algorithms on physical input signals
(accelerometers, cameras, etc). The machine learning is pretty naive
as I understand it, but I think it works because the person training the
machine is simultaneously exploring the instrument, training herself
to play it. Person and instrument learning how to play each other.

You can watch an overview video here,
http://www.cs.princeton.edu/~fiebrink/drop/wekinator/WekinatorDemo2.m4v

Toby


Re: [fonc] The Web Will Die When OOP Dies

2012-06-18 Thread Paul Homer
This discussion has inspired me to try once again to express my sense of what I 
mean by complexity. It's probably too rambly for most people, but some may find 
it interesting:

http://theprogrammersparadox.blogspot.ca/2012/06/what-is-complexity.html

Paul.




>
> From: Miles Fidelman 
>To: Fundamentals of New Computing  
>Sent: Saturday, June 16, 2012 3:20:22 PM
>Subject: Re: [fonc] The Web Will Die When OOP Dies
> 
>BGB wrote:
>> 
>> a problem is partly how exactly one defines "complex":
>> one definition is in terms of "visible complexity", where basically adding a 
>> feature causes code to become harder to understand, more tangled, ...
>> 
>> another definition, apparently more popular among programmers, is to simply 
>> obsess on the total amount of code in a project, and just automatically 
>> assume that a 1 Mloc project is much harder to understand and maintain than 
>> a 100 kloc project.
>
>And there are functional and behavioral complexity - i.e., REAL complexity, in 
>the information theory sense.
>
>I expect that there is some correlation between minimizing visual complexity 
>and lines of code (e.g., by using domain specific languages), and being able 
>to deal with more complex problem spaces and/or develop more sophisticated 
>approaches to problems.
>
>Miles
>
>
>
>-- 
>In theory, there is no difference between theory and practice.
>In practice, there is.  -- Yogi Berra
>