[fonc] PICOBIT (was: OPERATING SYSTEM ON A FPGA)

2012-12-09 Thread Jecel Assumpcao Jr.
Josh Grams wrote:

> I have always wondered how far in that direction you could go with
> Scheme or another high-level dynamic language.  In my (again, fairly
> uninformed) opinion it seems mainly a question of how much of the
> dynamic stuff can be analysed and compiled down to static code to reduce
> the runtime size/speed costs, and whether you can give the programmer
> the fine-grained control over memory usage that they might need for such
> limited systems.

Take a look at the paper "PICOBIT: A Compact Scheme System for
Microcontrollers" by Vincent St-Amour and Marc Feeley:

http://www.iro.umontreal.ca/~feeley/papers/StAmourFeeleyIFL09.pdf

They implement a cross development system to run Scheme in less than 7KB
of memory on Microchip PIC18 microcontrollers.

For those of us who prefer native systems to cross development, "The
LISP Implementation for the PDP-1 Computer" by L. Peter Deutsch and
Edmund C. Berkeley is an interesting text from 1964:

http://archive.computerhistory.org/resources/text/DEC/pdp-1/DEC.pdp_1.1964.102650371.pdf

That Lisp system needed at least 2000 registers (roughly equivalent to
4500 bytes) to run, though it could make use of larger configurations.
Unlike PICOBIT, this is a fully interactive operating system.
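
For scale, my own arithmetic (assuming "registers" here means decimal
18-bit PDP-1 words):

   2000 words * 18 bits/word = 36000 bits = 4500 bytes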

http://simh.trailing-edge.com/kits/lispswre.zip

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-15 Thread Jecel Assumpcao Jr.
Iian Neill wrote:
> Although there are plenty of blogs and forums on programming out there, it's
> really sad that there isn't some mass medium for programming literacy -- and
> I suspect that a big part of it is that, despite its many documented flaws, 
> BASIC
> at least had a small and graspable vocabulary that didn't require any header
> files, libraries, drivers, compilers, IDEs, or profiling tools.

The Sinclair machines even took advantage of BASIC's limited and fixed
vocabulary to work around their bad keyboards by putting one keyword per
key and having a mode-based input system. This eliminated many cases of
typing expressions with bad syntax, which was really helpful for
beginners. The tile based syntax of Scratch and Etoys is a modern way of
getting the same effect.

I totally agree with you about magazines and books still being needed in
this age of blogs, which is why I am really glad that the magazine for
the Raspberry Pi has reached its third issue already with some
interesting listings for the users to type into their machines:

http://www.themagpi.com/

The first book about the machine is about to come out (I am sure there
will be others):

http://www.amazon.co.uk/gp/product/111846446X/

The idea of a computer with Logo in ROM instead of BASIC was mentioned
in this thread. I did build such a machine in 1983, but it was never
released commercially, unfortunately:

http://www.merlintec.com/lsi/pegasus.html

There were four implementations of Logo for the BBC Micro which were
supplied as ROMs, so that machine should probably count:

http://www.nostalgia8.nl/logo/docs/mudeel1.jpg
http://www.nostalgia8.nl/logo/docs/mudeel2.jpg
http://www.nostalgia8.nl/logo/docs/mudeel3.jpg
http://www.nostalgia8.nl/logo/docs/mudeel4.jpg

This Soviet clone of the Sinclair Spectrum added CP/M, Forth and Logo as
extra ROM "modes":

http://en.wikipedia.org/wiki/Hobbit_%28computer%29

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] st71-one-pager (was: Barbarians at the gate!)

2012-03-25 Thread Jecel Assumpcao Jr.
Shaun,

> Here is my current attempt at replicating the diagram alongside the code laid
> out as in appendix III. http://order-of-no.posterous.com/st71-one-pager .

Shouldn't this be called st72-one-pager?

If I understood correctly, the software systems that were designed by
Alan were:

1) 1968/1969 - Flex

2) 1971 - Slogo (later Smalltalk-71) using pattern matching

3) 1972 - Visual language based on flow charts

4) 1972 - the bet and the one-pager in appendix III

5) 1972 - Smalltalk-72: Dan Ingalls implements the previous design in Nova BASIC

It seems to me that you are confusing 2 and 4. If you are indeed
interested in learning more about 4, then you should study the evolution
of 5:

http://ftp.squeak.org/goodies/Smalltalk-72/

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-15 Thread Jecel Assumpcao Jr.
Alan Kay wrote on Wed, 14 Mar 2012 16:44:33 -0700 (PDT)
> The CRISP was too slow, and had other problems in its details. Sakoman liked 
> it ...

Thanks for the information! Just from the papers about it, I had the
impression that it would be reasonably faster than an ARM at the same
clock frequency while having a VAX-like code density. I was going to
suggest that implementing CRISP on an FPGA could be an interesting
project for one of the grad students at my university, but apparently
that would not be such a good idea.

A rather different processor for running C (for floating-point-intensive
code) was the WM architecture:

http://www.cs.virginia.edu/~wm/wm.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.1092

I only heard about it because it was the inspiration for the memory
interface in the version of Chuck Thacker's Tiny Computer used in the
Beehive multicore project. This, unfortunately, is probably too complex
for a student in my group.

> Bill Atkinson did Hypercard ... Larry made many other contributions at Xerox 
> and Apple

I know that Bill was the developer, but I had the impression that Larry
had done what was needed to move from project to product. He certainly
was the one promoting it pre-launch in the old Smalltalk forums at BIX
(Byte Information eXchange).

> To me the Dynabook has always been 95% a "service model" and 5% physical
> specs (there were three main physical ideas for it, only one was the tablet).

2015 is almost here - time to move to the "computer in glasses" model
(lame "Back to the Future" reference, but I know you will agree).

BGB wrote on Wed, 14 Mar 2012 17:23:07 -0700
> the TSS?...
> 
> it is still usable on x86 in 32-bit Protected-Mode.

I was thinking about the LDT and GDT (Local Descriptor Table and
Global Descriptor Table). These still work, of course, but the current
implementations are so bad that it is faster to do the same thing 100%
in software. You do lose the security aspect, however.

It is funny that the wish to put the TSS to good use was the big
motivation for Linux. The resulting non-portability (which is a lot less
important now than then) was one of the main complaints in Andrew
Tanenbaum's famous early rant about the OS. The first attempt to port
Linux (to the Alpha, if I remember correctly) required completely
rewriting that part and the changes were quickly brought back to the
Intel version.

Marcel Weiher wrote on Thu, 15 Mar 2012 15:33:07 +0100
> I have a little Postscript interpreter/scratchpad in the AppStore 
> (TouchScript,
> http://itunes.apple.com/en/app/touchscript/id398914579?mt=8 ).  Admittedly, it
> was mostly a trial balloon to see if something like that would be accepted, 
> and
> it was (2nd revision so far).  And somewhat surprisingly a (very) few people
> even seem to be using it!
>
> Sharing is via iTunes.

Thanks for the tip! I see your description is "Use the Postscript(tm)
language to express your ideas and see the results on your iPhone.
Transfer your creations to your computer via iTunes sharing as either
PNG or Postscript documents."

It is likely that the reviewers took "Postscript documents" to mean
ordinary document files (like a .pdf or .doc) rather than programs. The
user who gave you a bad
review certainly did (another user corrected him/her). So this doesn't
tell us what Apple would do with a language that allows you to share
programs.

David Harris wrote on Thu, 15 Mar 2012 08:35:06 -0700 about
Wolfram|Alpha mobile

Thanks, but that is exactly what I was calling "just a terminal" and "a
waste".

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-14 Thread Jecel Assumpcao Jr.
Alan Kay wrote on Wed, 14 Mar 2012 11:36:30 -0700 (PDT)
> Yep, I was there and trying to get the Newton project off the awful ATT chip
> they had first chosen.

Interesting - a few months ago I studied the datasheets for the Hobbit
and read all the old CRISP papers and found this chip rather cute. It is
even more C-centric than RISCs (especially the ARM), so it might not be
a good choice for other languages. Another project that started out using
this and then had to switch (to the PowerPC) was the BeBox. In the link
I give below it says both projects were done by the same people (Jean
Louis Gassée and Steve Sakoman), so in a way it was really just one
project that used the chip.

> Larry Tesler (who worked with us at PARC) finally wound up taking over this
> project and doing a number of much better things with it.

He was also responsible for giving us Hypercard, right?

> Overall what happened with Newton was too bad -- it could have been much
> better -- but there were many too many different opinions and power bases
> involved.

This looks like a reasonable history of the Newton project (though some
parts I do know about aren't quite right, so I can't judge how accurate
the parts I don't know are):

http://lowendmac.com/orchard/06/john-sculley-newton-origin.html

It doesn't mention NewtonScript or Object Soups. I have never used it
myself, only read about it and seen some demos. But my impression is
that this was the closest thing we have had to the Dynabook yet.

> If you have a good version of confinement (which is pretty simple HW-wise) you
> can use Butler Lampson's schemes for Cal-TSS to make a workable version of a
> capability system.

The 286 protected mode was good enough for this, and was extended in the
386. I am not sure all modern x86 processors still implement these, and
if they do it is likely that actually using them will hurt performance
so much that it isn't an option in practice.

> And, yep, I managed to get them to allow interpreters to run on the iPad, but 
> was
> not able to get Steve to countermand the "no sharing" rule.

That is a pity, though at least having native languages makes these
devices a reasonable replacement for my old Radio Shack PC-4 calculator.
I noticed that neither Matlab nor Mathematica are available for the
iPad, but only simple terminal apps that allow you to access these
applications running on your PC. What a waste!

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-14 Thread Jecel Assumpcao Jr.
Alan Kay wrote on Wed, 14 Mar 2012 05:53:21 -0700 (PDT)
> A hardware vendor with huge volumes (like Apple) should be able to get a CPU
> vendor to make HW that offers real protection, and at a granularity that makes
> more systems sense.

They did just that when they founded ARM Ltd (with Acorn and VTI): the
most significant change from the ARM3 to the ARM6 was a new MMU with a
more fine-grained protection mechanism which was designed specifically
for the Newton OS. No other system used it, and though I haven't checked, I
wouldn't be surprised if this feature was eliminated from more recent
versions of ARM.

Compared to a real capability system (like the Intel iAPX432/BiiN/960XA
or the IBM AS/400) it was a rather awkward solution, but at least they
did make an effort.

Having been created under Sculley, this technology did not survive Jobs'
return.

> But the main point here is that there are no technical reasons why a child 
> should
> be restricted from making an Etoys or Scratch project and sharing it with 
> another
> child on an iPad.
> No matter what Apple says, the reasons clearly stem from strategies and 
> tactics
> of economic exclusion.
> So I agree with Max that the iPad at present is really the anti-Dynabook

They have changed their position a little. I have a "Hand Basic" on my
iPhone which is compatible with the Commodore 64 Basic. I can write and
save programs, but can't send them to another device or load new
programs from the Internet. Except I can - there are applications for
the iPhone that give you access to the filing system and let you
exchange files with a PC or Mac. But that is beyond most users, which
seems to be a good enough barrier from Apple's viewpoint.

The same thing applies to this nice native development environment for
Lua on the iPad:

http://twolivesleft.com/Codea/

You can program on the iPad/iPhone, but can't share.

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Raspberry Pi

2012-02-08 Thread Jecel Assumpcao Jr.
Alan,

> Hi Loup
> Actually, your last guess was how we thought most of the optimizations would
> be done (as separate code "guarded" by the meanings). For example, one idea
> was that Cairo could be the optimizations of the "graphics meanings code" we
> would come up with. But Dan Amelang did such a great job at the meanings that
> they ran fast enough tempt us to use them directly (rather than on a 
> supercomputer,
> etc.). In practice, the optimizations we did do are done in the translation 
> chain and
> in the run-time, and Cairo never entered the picture.
> 
> However, this is a great area for developing more technique for how "math" can
> be made practical -- because the model is so "pretty" and "compact" -- and 
> there
> is much more that could be done here.

Here is an old idea I had for a "cache manager" (as described in
http://www.merlintec.com/lsi/tech.html):

One feature that distinguishes programs by experts from those of novices
is the use of caching as a performance enhancement. Saving results for
later reuse greatly decreases source code readability, unfortunately,
obscuring program logic and making debugging much harder. Reflection
allows us to move caching to a separate implementation layer in a cache
manager object. So the application can be written and debugged "naively"
and, after it works, can be annotated to use the cache manager at
critical points to significantly improve performance without having to
write a new version.

This is only possible because Merlin uses message passing "at the
bottom" and includes a reflective access to such a fundamental
operation. In other words, user applications are never made up of mainly
"big black boxes" which the OS can do nothing about. Even simple math
expressions as '(x * 2 ) + y' are entirely built from messages that are
(in theory - the compiler actually eliminates most of the overhead)
handled by a set of meta-objects. So all that the system has to do when
the user annotates an expression as cacheable is to replace the standard
send meta-object with one that looks up its arguments in a table (cache)
and returns a previously calculated answer if it is found there.
Otherwise it works exactly like a normal send meta-object.
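
As a rough sketch of what such a cacheable send could look like (this is
Squeak-style Smalltalk with hypothetical names, not actual Merlin code),
the replacement meta-object simply memoizes on the receiver, selector and
arguments:

    perform: aSelector on: aReceiver withArguments: anArray
        "Answer a previously computed result for this exact send if one
         exists, otherwise do the real send and remember its result."
        ^ cache
            at: {aReceiver. aSelector. anArray}
            ifAbsentPut: [aReceiver perform: aSelector withArguments: anArray]

Here 'cache' is assumed to be a Dictionary held by the meta-object.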

An example of how this works is in rendering text. A given font's glyphs
might be given as sets of control points for Bezier curves describing
their outlines plus some "hints" for adjusting these points when
scaling. We could then draw a character from a font called myFont on
aCanvas with the expression:

   aCanvas drawPixmap: ((myFont glyphFor: 'a' Size: 12) asVectors asPixmap)

This should work perfectly, but will be unacceptably slow. Each time
some character must be shown on the display, its points must be scaled by
the 'glyphFor:Size:' method, then the control points must be rendered as
short vectors approximating the indicated Bezier curves ('asVectors')
and finally these vectors must be used to fill in a picture ('asPixmap')
which can finally simply be blasted on the screen for the final result.
By marking each of these messages as cacheable, the next time
'glyphFor:Size:' is sent to myFont with exactly 'a' and 12 as arguments
it will return the same list of control points without executing the
method again. Sending a cacheable 'asVectors' message to the same list
of points as before will fetch the same list of vectors as was created
the first time, and sending 'asPixmap' to that results in the same
character image without actually invoking the filling method once more.
So we have replaced three very complex and slow calculations with three
simple table lookups. If you think that even that is too much, you are
right. The cached control point lists and short vector lists are not
really needed. Unfortunately, the cache manager can do nothing about
that, but if the user moves the multiply cached expressions into their own
methods like this:

   pixmapForFont: f Char: c Size: s =
       ((f glyphFor: c Size: s) asVectors asPixmap).

   aCanvas drawPixmap: (pixmapForFont: myFont Char: 'a' Size: 12)

Now we can make only the 'pixmapForFont:Char:Size:' method cacheable if
we want. This will save the final pixmaps without also storing the
intermediate (and useless to us) results. This did involve rewriting
application code, but actually made it a little more readable, unlike
when caching is "hand coded".

> Why can't a Nile backend for the GPU board be written? Did I miss something?

As Reuben Thomas pointed out, the needed information is not available. In
fact, there was no information at all about the chip until this week,
but now we have some 30% (the most important part for porting the basic
functionality of some OS). This is the same issue that the OLPC people
had (worse in their case, because people would promise to release the
information to get them to use a chip and then just didn't do it). Even
dealing with FPGAs is a pain because of the sec

Re: [fonc] Raspberry Pi

2012-02-07 Thread Jecel Assumpcao Jr.
Alan Kay wrote:
> In the "difference between research and engineering department" I
> think I would first port a version of Smalltalk to this system. 

The Squeak VM used in the new OLPC machine should work just fine on this
board on top of one of the Linuxes that have already been tested on it.
It probably will be half the speed of the OLPC XO 1.75, but I expect it
to be very usable.
 
> One of the fun side-projects done in the early part of the Squeak
> system was when John Maloney and a Berkeley grad student ported
> Squeak to "a luggage tag" -- that is to the Mitsubishi hybrid "computer
> on a chip" that existed back ca 1997. This was a ARM-like 32 bit
> microprocessor plus 4MB (or more) memory on a single die. This plus
> one ASIC constituted the whole computer. 

I had meant to mention this project in my previous post. It was briefly
mentioned in these pages:

http://wiki.squeak.org/squeak/5727
http://sib-download.ddo.jp/~sib/Squeak_World/Squeak_Swiki/271.html (this
link didn't work for me today)

A similar effort used the Hyperstone chip:

http://wiki.squeak.org/squeak/1836
 
> Mitsubishi did this nice system for mobile use. Motorola bought the
> rights to this technology and completely buried it to kill competition.

Actually, a company called Renesas sold this chip for many years. But I
have just checked and their version lacks the large embedded DRAM and
only has the typical memory of a high end microcontroller:

http://www.renesas.com/products/mpumcu/m32r/index.jsp

> (We call it the "luggage tag" because they would embed failed chips
> in Lucite to make luggage tags!)

Cute! I was interested in doing something like this with working chips
to have a part children could use to build their own computers, even if
they dropped it on the floor and the dog chewed on it a bit. By having
the bare chip visible (even if not very clearly), it would seem a little
less mysterious than normal electronics.

> Anyway, for fun John and the grad student ported Squeak to this bare chip
> (including having to write the BiOS code). It worked fine, and I was able to
> do a large scale Etoy demo on it.
> Although Squeak was quite small in those days, a number of effective
> optimizations had been done at various levels, and so it was quite efficient,
> and all plus Etoys fit easily into 4MB. 

The Squeak that David Ungar and friends are running on the Tile64 chips
uses a 2MB image in order to fit entirely in that chip's level 2 cache
so that they don't have to deal with access to the external memory.

> In the earliest days of the OLPC XO project we made an offer to make Squeak
> the entire OS of the XO, etc., but you can imagine the resistance!

I only learned about this in 2009. Back in 2005 I tried to form a group
on the squeak-dev list to put Squeak on that machine. Nobody showed any
interest, so I wrote offering to collaborate but was told that they
didn't need my help because you and your team were already working on
it.

At the time the argument about the relative size of the Python and
Squeak communities made perfect sense. And L. Peter Deutsch was then
working on a real VM for what had become his favorite language:

http://marginalguesswork.blogspot.com/2004/08/you-dont-tug-on-supermans-cape.html

In retrospect, I feel it was a mistake. Everyone who is coding in Python
for OLPC and Sugar learned the language for that project. The Python
community is mostly unaware of these projects and has not helped in any
way that I have seen. If the money and effort spent on Sugar had been
spent on cleaning up SqueakNOS and Morphic/Etoys instead I think the
result might have been more interesting. But all the technical people
were Unix guys and Red Hat and AMD were paying, so there was no real
chance of the project being done other than how it was.

> Frank on the other hand has very few optimizations -- it is about lines of 
> code
> that carry meaning. It is a goal of the project to "separate optimizations 
> from
> the meanings" so it would still run with the optimizations turned off but 
> slower.

I liked it so much that I have adopted it as a goal for my PhD project. I
will attempt a different path than VPRI (partial evaluation of nested
interpreters instead of a chain of compilers) so we can better explore
the territory.

> We have done very little of this so far, and very few optimizations. We can 
> give
> live dynamic demos in part because Dan Amelang's Nile graphics system turned
> out to be more efficient than we thought with very few optimizations.

Here is where the binary blob thing in the Raspberry Pi would be a
problem. A Nile backend for the board's GPU can't be written, and the
CPU can't compare to the PCs you use in your demos.

> I think it could be an valuable project for interested parties to see about 
> how to
> organize the separate "optimization spaces" that use the meanings as 
> references.

I didn't get the part about "meanings as references".

Reuben Thomas asked:
> I'm baffled as to how having

Re: [fonc] Raspberry Pi

2012-02-07 Thread Jecel Assumpcao Jr.
Reuben Thomas wrote:
> On 7 February 2012 11:34, Ryan Mitchley wrote:
> >
> > I think the limited capabilities would be a great visceral demonstration of
> > the efficiencies learned during the FONC research.
> >
> > I was thinking in terms of replacing the GNU software, using it as a cheap
> > hardware target... some FONC-based system should blow the GNU stack out of
> > the water when resources are restricted.
> 
> Now that's an exciting idea.

People complain about *only* having 256MB (128MB in the A model) but
that is way more than is needed for SqueakNOS and, I imagine, Frank.
Certainly the boot time for SqueakNOS would be a second or less on this
hardware, which should impress a few people when compared to the various
Linuxes on the same board.

Fortunately, some information needed to port an OS to the Raspberry Pi
was released yesterday:

http://dmkenr5gtnd8f.cloudfront.net/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

The GPU stuff is still secret but I don't think the current version of
Frank would make use of it anyway.

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] History of computing talks at SJSU

2011-12-20 Thread Jecel Assumpcao Jr.
Eugen Leitl wrote on Sat, 17 Dec 2011 10:43:09 +0100

> [300 EUR GPU]
> [InfiniBand features]

Thanks for the tip about InfiniBand. I kept track of it while it was
being developed but had wrongly assumed it had mostly died off when PCI
Express started to become popular. It is actually a lot faster than my
design in terms of bandwidth, though I think my latency is better (it
is hard to compare since the architectures have some important
differences).

What I am doing is patching the Squeak VM so that the send bytecodes
work the normal way when the receiver is on the same core as the sender
but they become a "send message over the network" instruction if the
receiver is remote.
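
In rough Smalltalk-style pseudocode (hypothetical names - the real change
lives inside the VM's bytecode interpreter), the idea is just a test on
the receiver before the usual dispatch:

    send: aSelector to: aReceiver withArguments: args
        "Local receivers get the ordinary message lookup; messages to
         remote receivers are forwarded over the inter-core network."
        ^ (self isLocal: aReceiver)
            ifTrue: [aReceiver perform: aSelector withArguments: args]
            ifFalse: [network forward: aSelector to: aReceiver withArguments: args]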

The Wikipedia article mentions an article about using a different
topology (flattened butterfly) for a network using InfiniBand in order
to reduce the power consumed by the network. I was able to find the
paper - "Energy proportional datacenter networks" by rs:Dennis Abts,
Michael R. Marty, Philip M. Wells, Peter Klausler and Hong Liu.

http://dl.acm.org/citation.cfm?id=1816004
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/pt-BR//pubs/archive/36462.pdf

While looking for this, I saw that this conference's keynote was Chuck
Thacker's Turing Award talk:

http://isca2010.inria.fr/media/slides/Turing-Improving_the_future_by_examining_the_past.pdf

And this brings us right back to the start of this thread since he is
saying the same thing that Alan Kay said at SJSU.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Jecel Assumpcao Jr.
Eugen Leitl wrote:

> [human limits - growing code]

Perhaps the Copycat work Doug Hofstadter did is a step in this
direction?

http://en.wikipedia.org/wiki/Copycat_%28software%29

> [OpenMP doesn't match reality]

I agree 100%. But OpenMP didn't mess up the logic as much as MPI does.
This was for a class, so every learning experience is interesting.

> FPGAs suffer the problem of lack of embedded memory. Consider
> GPGPU with quarter of TByte/s bandwidth across 2-3 GByte grains.
> You just can't compete with economies of scale which allows you
> hundreds to thousands of meshing such with InfiniBand.

You can have the same external memories with an FPGA as with a GPU. I
don't think that is an important difference between them.

> [large clusters with COTS]

Ok, but the networks used in clusters are a bit slow for the experiments
I want to do.

Steve Dekorte wrote:

> [NeXTStation memories versus reality]

I still have a running Apple II. My slowest working PC is a 33MHz 486,
so I can't directly do the comparison I mentioned. But I agree we
shouldn't trust what we remember things feeling like.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Jecel Assumpcao Jr.
John Zabroski wrote:

> You said that our field had become so impoverished because nobody
> googles Douglas Englebart and watches The Mother of All Demoes, and
> also noted that evolution finds "fits" rather than optimal solutions.
> But you didn't really provide any examples of how we are the victims
> of evolution finding these fits. 

Alan mentioned the Burroughs B5000 compared with the architectures that
survived. In Donald Knuth's talk the same design was mentioned as an
example of a mistake we got rid of (a guy who still only programs in
assembly would say that ;-). So the students got to hear both sides.

> So I think I am providing a valuable
> push back by being my stubborn self and saying, Hey, wait, I know
> that's not true.  It just seemed very incongruent to the question of
> how we see the present: is it solely in terms of the past?

Normally Alan presents seeing the past only in terms of the present as
being the problem because this also limits how you see the future. Take
any modern timeline of the microprocessor, for example. It will indeed
be a line and not a tree. It will start with the 4004, then 8008, 8080,
8086, 286 and so on to the latest Core i7. Interesting parts of the
past, like the 6502, the 29000 and so many others can't be seen because
nothing in the present traces back to them.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Jecel Assumpcao Jr.
Eugen Leitl wrote:

> It's remarkable how few are using MPI in practice. A lot of code 
> is being made multithread-proof, and for what? So that they'll have
> to rewrite it for message-passing, again?

Having seen a couple of applications which used MPI, it seems like a dead
end to me. The code is mangled to the point where it becomes really hard
to understand what it does (in one case I rewrote it with OpenMP and the
difference in clarity was amazing). Fortunately, message passing in
Smalltalk looks far nicer and doesn't get in the way. So that is what I
am working on (and yes, I know all about Peter Deutsch's opinion about
making local and remote messages look the same -
http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing).

In other messages in this thread there were comments about software
bloat hiding the effects of Moore's Law. There was a funny quote about
that (I was not able to track down who first said it): "What Andy
giveth, Bill taketh away!" (meaning Andrew Grove of Intel and Bill Gates
of Microsoft - this was a while ago). But we were talking about
selecting machines for research, and in that case the same software
would be used.

Compare running Squeak on a 40MHz 386 PC (my 1992 computer) with running
the exact same code on a 1GHz Pentium 4 PC (available to me in 2000).
Not even the old MVC interface is really usable on the first while the
second machine can handle Morphic just fine. The quantitative difference
becomes a qualitative one. I didn't feel the same between my 1 MHz Apple
II and the 6MHz PC AT. But of course there was a difference - to show off
the AT in trade shows we used to run a Microsoft flight simulator called
Jet (later merged with MS Flight Simulator) on that machine side by side
with a 4.77MHz PC XT. It was a fun game on the AT, but looked more like
a slide show on the XT. I still felt I could get by with the Apple II,
however.

How can we spend money now to live in the future? Alan mentioned the
first way in his talk: put lots and lots of FPGAs together. The BEE3
board isn't cheap (something like $5K without the FPGAs, which are a few
thousand dollars each themselves, or the memory) and a good RAMP machine
hooks a bunch of these together. The advantage of this approach is that
each FPGA is large enough to do pretty much anything you can imagine. If
you know your processors will be rather small, it might be more cost
effective to have a larger number of cheaper FPGAs. That is what I am
working on.

A second way to live in the future is far less flexible, and so should
only be a second step after the above is no longer getting you the
results you need: use wafer scale integration to have now roughly the
same number of transistors you will have in 2020 on a typical chip. This
is pretty hard (just ask Clive Sinclair or Gene Amdahl how much they
lost on wafer scale integration back in the 1980s). But if you can get
it to work, then you could distribute hundreds (or more) of 2020's
computers to today's researchers.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] History of computing talks at SJSU

2011-12-14 Thread Jecel Assumpcao Jr.
Karl Ramberg wrote:

> One of Alans points in his talk is that students should be using bleeding edge
> hardware, not just regular laptops. I think he is right for some part but he 
> also
> recollected the Joss environment which was done on a machine about to be
> scraped. Some research and development does not need the bleeding edge
> hardware. It can get a long way by using what you have till it's fullest.

You mixed research and development, and they are rather different. One
is building stuff for the computers of 2020, the other for those of
2012.

I was at a talk where Intel was showing their new multicore direction
and the guy kept repeating how the academic people really should be
changing their courses to teach their students to deal with, for
example, four cores. At the very end he showed an experimental 80 core
chip and as he ended the talk and took questions he left that slide up.
When it was my turn to ask, I pointed to the 80 core chip on the screen
and asked if programming it was exactly the same as on a quad core. He
said it was different, so I asked if it wouldn't be a better investment to
teach the students to program the 80 core one instead. He said he didn't
have an answer to that.

About Joss, we normally like to plot computer improvement on a log
scale. But if you look at it on a linear scale, you see that many years
go by initially where we don't see any change. So the relative
improvement in five years is more or less the same no matter what five
years you pick, but the absolute improvement is very different. When I
needed a "serious" computer for software development back in 1985 I
built an Apple II clone for myself, even though that machine was already
8 years old at the time (about five Moore cycles). The state of the art
in personal computers at the time was the IBM PC AT (6MHz iAPX286) which
was indeed a few times faster than the Apple II, but not enough to make
a qualitative difference for me. If I compare a 1992 PC with one from
2000, the difference is far more important to me.

> On Tue, Dec 13, 2011 at 9:02 PM, Kim Rose wrote:
> 
> For those of you looking to hear more from Alan Kay -- you'll find a talk from
> him and several other "big names in computer science" here -- thanks to San
> Jose State University.
> 
>  http://www.sjsu.edu/atn/services/webcasting/archives/fall_2011/hist/computing.html

Thanks, Kim, for the link!

I have added this and four other talks from 2011 to

http://www.smalltalk.org.br/movies/

I also added a link to the Esug channel on Youtube, which has lots of
stuff from their recent conferences.

Cheers,
-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OLPC related

2011-11-14 Thread Jecel Assumpcao Jr.
Hans-Martin Mosner wrote:

> AFAIK the production board will have a number of (unbuffered) GPIO pins on a 
> header, so the hardware-inclined should be
> able to use them.

That is great news! I could see lots of extra connectors on the alpha
board, but had read somewhere that the plan was to remove all of those
from the final product so users wouldn't be able to easily burn their
boards by accident. I thought that was the whole point of a $25 machine,
especially if sold to children who already have some level of access to a
normal PC.

> However, since debian is not primarily real-time oriented, it might be a bit 
> difficult to use these pins in the same way
> as you would use an Arduino. Don't know whether a RT kernel with appropriate 
> drivers will be available. The R-PI has so
> much more processing power than an Arduino that it should be able to perform 
> mostly the same functions in user level (on
> a RT kernel, of course).

Indeed, even with a non-real-time OS you can probably generate waveforms
of a couple of MHz on a 600MHz ARM. It will be extremely sad if this
turns out not to be the case since the whole reason for creating the ARM
in the first place was that existing 16 bit processors didn't handle
interrupts fast enough.

> One of the nice things of the R-PI is that it is not tied to any specific 
> operating system, there are already efforts
> under way to port RiscOS, and given enough determination, other OSes should 
> be doable as well.

There was a nice report about RiscOS on this board from an event in
London. The person doing the report wondered about this, pointing out
that such an OS would exclude software like Scratch. Given that Tim kept
the RiscOS port of the Squeak VM updated until version 3.8, it should be
relatively easy to fix this.

> Although this creates
> some barrier for sharing software, it can stimulate experimentation, so 
> overall I think it's not bad.

If the cost of having five different OSes is basically the cost of
buying five SD cards, then it is a pretty small barrier compared to what
we have had in the past. So I see this as a very good thing.

> I'm seriously contemplating getting one - the unusual situation with this 
> geek toy is that I don't have to convince my
> wife that it's not too expensive (nobody can argue against a $25 or $35 proce 
> tag) but that I won't spend day and night
> with this thing after the package arrives :-)

The only negative thing in the whole project is that the chip choice was
based on a relationship that the creator has but the users don't (being
employed by Broadcom, in this case, and being funded by AMD in the OLPC
case). This tends to cause conflict, either when people end up leaking
information they were not supposed to or when nobody leaks the needed
information.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OLPC related

2011-11-14 Thread Jecel Assumpcao Jr.
The Raspberry Pi people have my full support. Certainly I would like
children to have the same free access to their computers that the
Sinclair Spectrum/BBC Micro generation had and this is normally not the
case even if they have a PC of their own. But while the $25 price tag is
a necessary condition, I am not sure that it is a sufficient one.

In my talks about the OLPC and related projects, I like to point out
that we had three interesting computing communities in the 1970s with a
shared language, a shared platform and some communication system. The AI
researchers had Lisp, their PDP-10 and the Arpanet. The Unix guys had C,
their PDP-11 and VAX machines with Unix itself and the UUNET. The
microcomputer people had Basic, their personal computers and magazines.

It is interesting to me how much the limitations of the micro people
actually led to an extra level of learning. You could get some program
over the Arpanet or UUNET and install it in your machine without looking
at it, but while typing something in from a magazine listing you would
get some impression of it even if you didn't pay much attention. Of
course, not all Lisps or Basics were the same, so you often had to figure
out how to change stuff to work in the system you had.

I don't think having Debian on the Raspberry Pi machine will get us the
same results. At the very least it would be interesting to have a
programming language closer to the surface (the scripting pane in the
Frank descriptions of the latest Steps report would be an option I would
be happy to see). On the hardware side, having to do everything through
USB adds a level of complexity that is a real problem. I know some
people feel the best way to handle this would be to add something like
the Arduino, but access to a simple parallel port on the main machine
would be nice.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] a little more FLEXibility

2011-09-06 Thread Jecel Assumpcao Jr.
Michael Haupt wrote:
> Am 06.09.2011 um 13:49 schrieb Bert Freudenberg:
> 
> In the latest Squeak alpha you can drag any slot from one inspector onto any 
> slot of another inspector, replacing the object in it. 
> 
> 
> indeed. That is cool. :-D Almost like arrow dragging in Self, only without the 
> arrows.

Fantastic! I was going to say that I must not have been paying enough
attention to the squeak-dev commit messages if I had missed this, but
then I saw that this change went in today.

Morphic in Self tried to reinforce the idea that the visual
representation of an object on the screen was the real thing. So if you
asked for an outliner for an object and there was already one being
shown, you got that one instead of a new one. Of course, having a
separate outliner view and a morphic view ruined this and we had
interesting discussions about how to solve that but never did anything
about it. So the arrows were important to allow the same morph or
outliner to pop up in a new place without really going missing from its
old place.

Since Morphic in Squeak has a different model (you can have as many
inspectors as you want on the same object) I don't think the arrows
would be as important.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Re: a little more FLEXibility

2011-09-05 Thread Jecel Assumpcao Jr.
Alan,

> I hate to be the one to bring this up, but this has always been a
> feature of all the Smalltalks ...

I was going to say that this was introduced in 1976 and that the first
two versions of Smalltalk had a more traditional REPL. But I would have
to check since I might be remembering it wrong.

The select/eval GUI was also a key design feature of Oberon, which was
the start of this thread. So we have come full circle :-)

> one has to ask, what is there about current general practice
> that makes this at all remarkable? ...

Most professors in the CS department where I study, which consistently
gets rated among the top 5 in the country, have never even heard of
Smalltalk. So you can just imagine how much the students or professional
developers don't know.

Francisco,

thanks for bringing up MorphicWrappers. For some reason I remembered the
name of the MathMorphs project from the same group, but not this one.
But it is indeed probably the closest thing out there to what I was
thinking. Casey mentioned to me the Maui GUI for Squeak, which was
inspired by this.

Michael,

> ah, but instead of Smalltalk >> #at:put: you can use any object
> member's setter. I was just too lazy to write that. :-)

If I have two inspectors open, one for MorphA and the other for ObjB,
then I don't see what I could type in either window to get ObjB to
reference MorphA. Your solution via globals solves this problem (but
introduces a global). But it might be just a lack of imagination on my
part.
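
For concreteness, I take the globals workaround to look something like
this (a hedged sketch - 'someSlot:' is just a stand-in for whatever
setter ObjB actually has):

    "In MorphA's inspector, evaluate:"
    Smalltalk at: #Temp put: self.

    "Then in ObjB's inspector, evaluate:"
    self someSlot: (Smalltalk at: #Temp)

It works, but the temporary global is exactly the part I would like to
avoid.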

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)

2011-09-02 Thread Jecel Assumpcao Jr.
Alan,

> [second part was about wafer scale memories]

That was a great idea and was eventually adopted by DRAM makers to
increase yields (spare rows that could replace faulty ones at
manufacturing test time). These days losses due to cutting up the wafers
or encapsulation are pretty low, but I am still very interested in wafer
scale technology.

I think Mosys is using your idea and got a patent on it:

http://www.mosys.com/technology.php

The first claim in their second patent says:

"1. A semiconductor circuit on a substrate comprising:

at least one circuit block which includes a plurality of replaceable
circuit modules, each including a software programmable identification
circuit with a unique preset address code; and at least one redundant
circuit module in the at least one circuit block, the redundant circuit
module including a software programmable identification circuit
programmable with a unique preset address code."

Dividing memory into small blocks is good for saving energy since only
the addressed one has to be active.

Thanks for the Paul Rovner and Denis Seror references about the disk
thing. And thanks to Shaun Gilchrist for the DCPL reference. I haven't
been able to download it yet (all my attempts time out near the end) but
will try it on a faster network to see if I have better luck. It is
interesting that I wasn't able to find this text starting from the home
page and using the various search options. Makes me wonder what other
interesting stuff might be hidden there.

Bert,

thanks for reminding me about the option to "create textual references
to dropped morphs" in Squeak. That is pretty close to what I want,
though not as general. I don't remember if the MathMorphs guys had
something like this.

Michael,

your solution is a little more indirect than dragging arrows in Self
since you have to create a global, which is what I would like to avoid.
Not to mention that one solution is direct manipulation while the other
is typing and evaluating an expression. But between your solution and
Bert's it is obvious that the system can do what I want; the limitation
is in the GUI.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)

2011-09-01 Thread Jecel Assumpcao Jr.
Alan,

> The Flex Machine was "the omelet you have to throw away to clean the pan",
> so I haven't put any effort into saving that history.

Fair enough! Having the table of contents but not the text made me think
that perhaps the sections B.6.b.ii, The Disk as a Serial "Associative
Memory", and B.6.c, An Associatively Mapped LSI Memory, might be
interesting in light of Ian's latest paper. Or the first part might be
more related to OOZE instead.

> But there were "4 or 5" pretty good things and "4 or 5" really bad things that
> helped the Alto-Smalltalk effort a few years later.

Was being able to input drawings one of the good things? There was one
Lisp GUI that put a lot of effort into allowing you to input objects
instead of just text. It did that by outputting text but keeping track
of where it came from. So if you pointed to the text generated by
listing the contents of a disk directory while there was some program
waiting for input, that program would read the actual entry object.

It is frustrating for me that while the Squeak VM could easily handle an
expression like

myView add:  copy.

I have no way of typing that. I can't use any object as a literal nor as
input. In Etoys I can get close enough by getting  a tile representing
the yellowEllpiseMorph from its halo and use that in expressions. In
Self I could add a constant slot with some easy to type value, like 0,
and then drag the arrow from that slot to point to the object I really
wanted. It was a bit indirect but it worked and I used this a lot. The
nice thing about having something like this is that you never need
global variable again.

> I'd say that the huge factors after having tried to do one of these were two
> geniuses: Chuck Thacker (who was an infinitely better hardware designer and
> builder than I was), and Dan Ingalls (who was infinitely better at most phases
> of software design and implementation than I was).

True. You were lucky to have them, though perhaps we might say Bob
Taylor had built that luck into PARC.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Re: Ceres and Oberon

2011-08-31 Thread Jecel Assumpcao Jr.
Alan,

thanks for the detailed history!

> 1966 was the year I entered grad school (having programmed for 4-5 years,
> but essentially knowing nothing about computer science). Shortly after
> encounters with and lightning bolts from the sky induced by Sketchpad and
> Simula, I found the Euler papers and thought you could make something with
> "objects" that would be nicer if you used Euler for a basis rather than how
> Simula was built on Algol. That turned out to be the case and I built this 
> into
> the table-top plus display plus pointing device personal computer Ed Cheadle
> and I made over the next few years. 

Is this available anywhere beyond the small fragments at

http://www.mprove.de/diplom/gui/kay68.html

and

http://www.mprove.de/diplom/gui/kay69.html

?

Though you often mention the machine itself, I have never seen you put
these texts in the list of what people should read like you do with
Ivan's thesis.

> The last time I looked at Oberon (at Apple more than 15 years ago) it did
> not impress, and did not resemble anything I would call an object-oriented
> language -- or an advance on anything that was already done in the 70s.
> But that's just my opinion. And perhaps it has improved since then.

It was an attempt to step back from the complexity of Modula-2, which is
a good thing. It has the FONC goal of being small enough to be
completely read and understood by one person (in the talk he does
mention that this takes the form of a 600 page book).

In the early 1990s I was trying to build a really low cost computer
around the Self language and a professor who always had interesting
insights suggested that something done with Oberon would require fewer
hardware resources. I studied the language and saw that they had
recently made it object oriented:

http://en.wikipedia.org/wiki/Oberon-2_%28programming_language%29

But it turned out that this was a dead end and the then-current system
was built with the original, non-object-oriented version of the language
(as it is to this day - the OO programming Wirth mentioned in the talk
is the kind of thing you can do in plain C). I liked the size of the
system, but the ALL CAPS code hurt my eyes and the user interface was
awkward (both demonstrators in the movie had problems using it, though
Wirth had the excuse that he hadn't used it in a long time).

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Line endings

2011-08-11 Thread Jecel Assumpcao Jr.
Casey,

> Did Squeak pick up Macintosh style line endings when it travelled through
> Apple, or did Apple pick up Smalltalk style line endings when it travelled
> through PARC?
> 
> I've been wondering about this for awhile now. 

Wow, this is pretty off topic. But this is indeed where you are most
likely to get a reply to a question like that.

The original Smalltalk-80 from Xerox used CR as its line separation
character, but the really big external influence on Apple was UCSD
Pascal which shared that convention. Apple, however (along with
Commodore and Tandy/Radio Shack - the big 3 from 1977), had already
adopted this convention from the start.

DEC had adopted the CR+LF style required by teletype terminals and this
was copied in CP/M, then MS DOS and finally Windows.

Multics adopted LF and this was copied by Unix. For more details on
these and other, more obscure (like early QNX and Sinclair), conventions
see:

http://en.wikipedia.org/wiki/Newline

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread Jecel Assumpcao Jr.
Casey Ransberger wrote:
> I want to try using a fluffy pop song to sell a protest album... it worked for
> others before me:) If you're really lucky, some people will accidentally 
> listen
> to your other songs. 
> 
> (metaphorically speaking)
> 
> "A spoonful of sugar" 

http://netjam.org/spoon/

http://www.sugarlabs.org/

Sounds like a plan!
-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] misc: x86 and ARM

2011-07-09 Thread Jecel Assumpcao Jr.
BGB,

> or, maybe all my x86 experience blinds me some to the "elegance" of 
> ARM's ISA?...
> whatever is so great about it, well, I am not seeing it at this level.
> 
> why then do so many people seem to complain that the x86 ISA is so 
> horrible?...

I think this is completely off topic for this list, but it would be rude
to leave you without an answer. Back in 1988 I wrote an ARM emulator
for the PC and an 8086 emulator for the ARM and can tell you that the ARM
was so much "cleaner" than the x86 that it was hard to compare them.

When you add a hack to go from 26 to 32 bit addressing, an object
oriented MMU from Apple, floating point, 16 bit encoding (Thumb), Java
compatibility, DSP and so on, then it would be surprising if the core
simplicity of the ARM didn't get buried way deep.

Of course, if you are not using segments (and nobody is, these days)
then you are only programming in a limited subset of the x86 and I can
see how it can look elegant (see what Chuck Moore does on a Pentium for
his ColorForth, for example).

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Logo and Silicon

2011-06-14 Thread Jecel Assumpcao Jr.
Karl,

> Here is one proposed to be buildt in 
> Squeak http://www.computer.org/comp/proceedings/c5/2003/1975/00/19750120.pdf 

Thanks for the link! It looks nice. I am currently helping out with an
undergraduate course on computer architecture and adopted the WinMIPS64
simulator. A more flexible option would be good.

Though the figure says "screen image of the proposed simulator", the
conclusion says "is developed" and "is able to show". So it is not clear
to me if it was built or not, but I would guess yes and that the figure
label is bad English.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Logo and Silicon

2011-06-13 Thread Jecel Assumpcao Jr.
Casey,

> > But did you actually understand the Visual6502 and not just the idea of
> > it?
> 
> Nope. But it struck me to be able to see it compute. I do think I took
> something of value from the experience: I just don't know what it is yet. 

I agree it is a very interesting experiment and I like to show it to
people.

> Even simple microprocessors are hard to grok, yes, because they're
> vast. The next watershed, though, might be finding a relatively simple
> architecture which is also fast.

Here we are back to the FONC/Steps goals. C isn't a particularly hard
language to understand. Once you learn the rules, you can figure out
what a 100 line program does. But no human can understand a 4 million
line program in any meaningful way. At least we have subroutines - which
in the case of the microprocessor would be hardware modules. A modular
design is far easier for us to understand than a flat one, but sometimes
the needs of the different levels are different and one language for all
of them isn't the best way.

Many designs these days are done in SystemC or Bluespec and then
translated to Verilog or VHDL. In the Smalltalk area, you might want to
look at

http://averschu.home.xs4all.nl/idass/

> Field programmability gives me a touch of hope that systems will be
> able to optimize adaptively to the behavior of the user(s) driving, and
> evolution itself is a pretty straightforward idea, but this is just a
> thought-example. Most likely the shape it would take would end up
> surprising me. Biology is a great place to look for working concurrent
> systems, at least I think so, so hopefully that's a worthwhile thought
> experiment. 

http://www.cogs.susx.ac.uk/users/adrianth/ade.html

> Have you looked at the ALUs that kids have been making in Minecraft?
> You can _walk around_ in there. Inside the simulated microprocessor,
> and actually watch the "electrons" walk down the "Redstone wire." And
> when you want the birds-eye, a simple server mod lets you fly way up
> and look down. 

I watched some movies of this and while very neat, it has some of the
limitations of Visual6502. If I had actually played with it and had been
able to choose what to look at, it might have been more understandable.

> Sure. So in this hypothetical Logo, which I'm calling WeeHDL like a right
> parody, you should be able to do macroscopic things like what you do
> in Verilog. We seem to have learned that different sets of metaphors
> help explain different sets of problems. 

I should have mentioned that the newer OKAD, like Logo, gives you a
textual representation for a visual object. A Logo program that draws a
house is not a drawing of a house that you can directly manipulate. At a
very large scale that is often considered an advantage and not a
failure, hence OKAD and Verilog and so on.

> The problem I have with Verilog seems to be that it's used to avoid
> thinking about the very details that I hope to understand. I obviously
> want a lot of abstraction, but I also want to able to understand the
> mapping between these representations, which got me thinking
> OMeta, etc. 

Verilog allows you to be as detailed as you like. If you download any
processor design from Sun, for example, you will see that they define
their own flip flops and adders rather than let the Verilog compiler
generate what it likes from a high level description.

Having said that, there is a feature in a simulador I wrote in Self 1.0
that I miss in all other tools I have seen: each componente could be
defined as a behavior or as a composition of lower level components (all
the way down to transistors). If any given component had both, then you
got to select which one to use in each simulation. So if a processor had
8 registers, you could simulate 7 of them at the behavioral level and
the remaining one in terms of its subcomponents. And you could watch two
registers, one of each kind, side by side to better understand what was
going on and to check that the two descriptions were equivalent.
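
To make that concrete, here is a rough C sketch of the idea (this is
not the original Self 1.0 code; the component, its width and all the
names are made up):

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {            /* one D flip-flop, the lowest level here */
        int q;
    } DFF;

    typedef struct {
        int use_structural;     /* 0 = behavioral, 1 = structural */
        uint8_t value;          /* behavioral model: just a stored byte */
        DFF bits[8];            /* structural model: eight flip-flops */
    } Register8;

    /* one clock edge: latch the input using the selected model */
    static void reg_clock(Register8 *r, uint8_t d)
    {
        if (r->use_structural) {
            for (int i = 0; i < 8; i++)
                r->bits[i].q = (d >> i) & 1;
        } else {
            r->value = d;
        }
    }

    static uint8_t reg_read(const Register8 *r)
    {
        if (!r->use_structural)
            return r->value;
        uint8_t v = 0;
        for (int i = 0; i < 8; i++)
            v |= (uint8_t)(r->bits[i].q << i);
        return v;
    }

    int main(void)
    {
        Register8 behavioral = { .use_structural = 0 };
        Register8 structural = { .use_structural = 1 };

        /* watch the two side by side to check they stay equivalent */
        for (int d = 0; d < 4; d++) {
            reg_clock(&behavioral, (uint8_t)d);
            reg_clock(&structural, (uint8_t)d);
            printf("in=%d  behavioral=%d  structural=%d\n",
                   d, reg_read(&behavioral), reg_read(&structural));
        }
        return 0;
    }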

One design method that you might find very interesting is the Path
Programmable Logic - PPL. This was developed at the University of Utah
back when students only had access to text terminals hooked up to
large computers. So their tool was a patched Emacs and used characters
to represent both the functionality and the layout of the integrated
circuits. Rather than use different languages for different things
(schematics for functionality and rectangles for layout) like we have
been discussing above, they aimed to have a single representation that
would be good for both.

"A structured approach to VLSI circuit design"
J. Gu and K. F. Smith
IEEE Computer, vol 22, issue 11, Nov 1989
http://portal.acm.org/citation.cfm?id=74714
http://ieeexplore.ieee.org/iel1/2/1667/00043523.pdf

You need to be an ACM or IEEE member to get the paper, unfortunately.
This method was made even more visual for graphical workstations at
Cirrus Logic, but it was only used internally and never released.

Re: [fonc] Logo and Silicon

2011-06-11 Thread Jecel Assumpcao Jr.
Casey,

> Here's a fun thought: while staring at the Visual6502 visualization, it 
> occurred to
> me that the likes of Verilog and VHDL probably represent a rather tall order 
> to
> new folks (like, hey, me,) and the idea popped in there. I personally find it 
> easier
> to fathom designing circuits in a way that's both visual and programmatic, 
> just
> because I'm a very visual/verbal person, and it fits my learning style well.

But did you actually understand the Visual6502 and not just the idea of
it? I didn't, and I am reasonably familiar with that processor at the
schematic level and also an integrated circuit designer (I have created
a few chips at the "rectangle" level). The problem is quantitative -
there are just too many rectangles changing color at once, and too many
to fit on the screen at a reasonable size.

I really hate to deal with structural designs in Verilog or VHDL (as
opposed to behavioral designs) which is why I use TkGate. Unfortunately,
we get into quantitative problems again with screen sizes. My hand drawn
schematics in the 1980s were always one to three pages of very large
paper. You needed a big desk to be able to fully open them up and you
could see both the big picture and details at the same time. It was easy
to quickly trace some signal from one side of the design to the other.

Now people do schematics on letter sized paper. The project is broken
down into some 20 or so pages. Each page has just one or two integrated
circuits (or subblocks) on it and wires running to the edge of the
page to "connectors" that indicate other pages. In other words, this is
a netlist and not a schematic and there is no advantage compared to the
same thing in VHDL. It has the disadvantage of taking up 20 pages to do
what VHDL would do in just 3.

> It dawned on me that I could probably make a little Logo where the turtles 
> draw
> with "metal ink." Has anyone tried anything like this before? Does it seem 
> like a
> good idea to try it now?

You might like Chuck Moore's OKAD system, which is used to create the
GreenArray chips. The software is not available, but there are videos of
him giving demos of it. Mostly in his "fireside chats":

http://www.ultratechnology.com/rmvideo.htm
http://www.ultratechnology.com/okad.htm
http://www.colorforth.com/vlsi.html

Note that the software evolved quite a bit from the early 1990s (when it
was a "paint the rectangles" style) to the late 1990s and today, when it
became a kind of programming language (the last page above, for example,
describes the earlier version though it was updated in 2009).

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Electrical Actors?

2011-06-06 Thread Jecel Assumpcao Jr.
Casey,

> Has anyone taken the actor model down to the metal?

I studied this in detail back in 1990 and had several references. These
are physically hard for me to reach right now and probably are not easy
to find on the web.

Though not an actor model, you might find my "RNA" idea of objects and
messages at the transistor level interesting:

http://www.merlintec.com:8080/hardware/19

This text is mostly just a note to myself and probably doesn't make much
sense to other people. An animation of the idea, however, would probably
be easily understood even by children (and probably more easily by
biologists than computer "scientists", hence the forced acronym from
Ring Network Architecture). I plan to work on this next year.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Static typing and/vs. boot strap-able, small kernel, comprehensible, user modifiable systems

2011-06-04 Thread Jecel Assumpcao Jr.
Scott McLoughlin wrote on Sat, 04 Jun 2011 12:04:20 -0400
> My intention was to far more specifically ask: why "small
> core, user comprehensible and modifiable, and boot-strapable"
> systems seem to be the province of either latently typed (Smalltak,
> Lisp, Scheme, Icon (?), etc.) or untyped (Forth, B (?)) languages
> rather than...
> 
> ...manifestly typed languages (wide variety - C++, D, Oberon,
> Haskell, Standard ML and oodles more).

Having played a bit with Oberon on a raw 386 machine with 4MB of RAM, I
would say it nicely disproves your theory. But I do agree with the
others who have said that when searching for minimal solutions, a heavy
type system is a good candidate for things to throw out.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Parsimony (was: Alto-2?)

2011-05-27 Thread Jecel Assumpcao Jr.
Casey,

> [Chuck Thacker chapter]

Note that TinyComputer is a series of designs starting with the one in
the first "Steps" report. Just looking at what changed from one to the
next is very educational.

The recent version in the Beehive brought back some of the flavor of
programming in Alto microcode. Then, as now, the processors were far
faster than main memory. So starting a memory access was a separate
operation from the actual read or write. In the case of Beehive there
are queues instead of single registers (inspired by
http://www.cs.virginia.edu/~wm/wm.html), which is even nicer.

Another important feature of the Alto, that Alan explained in this
thread, was the hardware coroutines (inspired by the TX-2). A great
advantage of coroutines over interrupts is that the code already has
much more context when it starts to execute and so it is done earlier.
In the Notetaker this was approximated by having several slower MOS
processors sharing a bus and Beehive is a modern version of this. Even
so, it would be a cool project to patch the TinyComputers themselves to
have hardware coroutines. 16 PCs using distributed memory (64 in a
Spartan 6) take up the same FPGA area as a single PC using flip flops,
and one block RAM can hold 16 banks of 32 registers (32 bits) each.

Merik wrote a very detailed email about SiliconSqueak, so I will just
make two more comments about it. The "64 ALU Matrix" he mentioned is an
optional coprocessor which has 64 ALUs, each 8 bits wide. By selecting
carry options, you can operate on wider values. The ALUs are organized
in an 8 by 8 matrix and the registers are shared between neighbors so
they can be used for communication.

While it is possible to simply compile the Squeak VM for the
TinyComputer
(http://web.mit.edu/6.173/www/currentsemester/handouts/SoftwareV2.pdf),
the idea in SiliconSqueak is to have more layers so the "jump" in each
layer is easier for people to follow. These are:

1) Etoys or Scratch tiles
2) Squeak source text
3) bytecodes
4) microcode
5) caches
6) Verilog/Schematics

Level 4 has much of the flavor of the macrocode of the first Lisp
machines rather than their microcode (which was very Alto-like). This
level presents many registers to the programmer that don't really exist,
but are actually fancy ways to access the caches (bytecode cache,
microcode cache, data cache and stack cache) as can be seen by someone
studying level 5.

A friend of mine, Etienne Delacroix, has had a lot of success with a
project that mixes electronics and arts with both children and
university level students. They learn about transistors and build
circuits from TTLs recovered from old machines. He would like very much
to have an alternative to level 6 where the children would build a
SiliconSqueak out of these TTLs and the slightly more modern GALs. I
think that is far too ambitious, but would never stand in the way of
someone's dream. Perhaps a simpler machine capable of running Little
Smalltalk would be more practical?

> [OISC]

I created a design for one OISC, the SBN (subtract and branch if
negative), out of TTLs for Etienne and we wrote some software for it. I
should make all that available somewhere.

An even more primitive OISC which only copies bytes between two
addresses and then unconditionally jumps to a third address is the core
of a videogame:

http://esoteric.voxelperfect.net/wiki/BytePusher
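
For the curious, here are minimal C sketches of both single-instruction
machines (word size, operand order and memory size are my own
assumptions, not the actual designs):

    #include <stdint.h>

    #define MEMSIZE 65536

    /* SBN: each instruction is three words A, B, C.
       mem[B] -= mem[A]; if the result is negative, jump to C. */
    static int16_t sbn_mem[MEMSIZE];

    static uint16_t sbn_step(uint16_t pc)
    {
        uint16_t a = (uint16_t)sbn_mem[pc];
        uint16_t b = (uint16_t)sbn_mem[pc + 1];
        uint16_t c = (uint16_t)sbn_mem[pc + 2];

        sbn_mem[b] = (int16_t)(sbn_mem[b] - sbn_mem[a]);
        return (sbn_mem[b] < 0) ? c : (uint16_t)(pc + 3);
    }

    /* Copy-and-jump (the BytePusher style): each instruction is three
       addresses A, B, C.  Copy one byte from A to B, then jump to C. */
    static uint8_t cj_mem[MEMSIZE];

    static uint16_t copyjump_step(uint16_t pc)
    {
        uint16_t a = (uint16_t)(cj_mem[pc]     << 8 | cj_mem[pc + 1]);
        uint16_t b = (uint16_t)(cj_mem[pc + 2] << 8 | cj_mem[pc + 3]);
        uint16_t c = (uint16_t)(cj_mem[pc + 4] << 8 | cj_mem[pc + 5]);

        cj_mem[b] = cj_mem[a];
        return c;
    }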

> ["transport triggered" architecture]

Back in 1999 I started Merlintec to bring to the market a TTA based
Smalltalk machine (ok, Self). The Tachyon processor would have executed
four 16 bit wide MOVE instructions on each clock cycle from a 768KB
cache. The code in main memory was bytecodes and the compiler translated
those to MOVEs on cache misses. There were several insteresting ideas in
this project, but I came to the conclusion that I wouldn't be able to
write a compiler good enough to take advantage of this hardware on my
own. By 2001 the FPGAs had enough internal memory that I could replace
the single fast processor with several simpler ones, each with their own
caches.
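
A rough C sketch of how a transport-triggered step looks (the register
map and the single adder unit are purely illustrative, not the actual
Tachyon layout):

    #include <stdint.h>

    enum {                 /* a tiny made-up register map */
        R0, R1, R2, R3,
        ADD_OP1,           /* adder operand register */
        ADD_TRIG,          /* writing here triggers op1 + value */
        ADD_RESULT,        /* read back the sum */
        NREGS
    };

    static int32_t regs[NREGS];

    /* the only instruction: MOVE src -> dst */
    static void move(int src, int dst)
    {
        int32_t v = regs[src];
        regs[dst] = v;
        if (dst == ADD_TRIG)               /* side effect: the unit fires */
            regs[ADD_RESULT] = regs[ADD_OP1] + v;
    }

    int main(void)
    {
        regs[R0] = 2;
        regs[R1] = 3;
        move(R0, ADD_OP1);      /* transport an operand */
        move(R1, ADD_TRIG);     /* transport + trigger: computes 2 + 3 */
        move(ADD_RESULT, R2);   /* transport the result back */
        return regs[R2] == 5 ? 0 : 1;
    }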

One of the functional units in Tachyon implemented PICs (polymorphic
inline cache) up to three elements long.
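
For anyone unfamiliar with PICs, a minimal C sketch (the structure and
names are only illustrative, and a hardware unit would do the three
compares in parallel rather than in a loop):

    #include <stddef.h>

    typedef void (*Method)(void *receiver);

    typedef struct {
        const void *klass;        /* receiver's class (or map) */
        Method      target;       /* compiled method to call */
    } PICEntry;

    typedef struct {
        PICEntry entry[3];
        int      used;
    } PIC;

    /* returns the cached method, or NULL on a miss (full lookup needed) */
    static Method pic_lookup(PIC *pic, const void *klass)
    {
        for (int i = 0; i < pic->used; i++)
            if (pic->entry[i].klass == klass)
                return pic->entry[i].target;
        return NULL;
    }

    static void pic_fill(PIC *pic, const void *klass, Method target)
    {
        if (pic->used < 3) {
            pic->entry[pic->used].klass  = klass;
            pic->entry[pic->used].target = target;
            pic->used++;
        }
        /* with all three entries taken the send site is megamorphic and
           falls back to the normal lookup path */
    }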

> ["Wireworld Computer"]

It is very cute. The circuit style of the Wireworld might actually be
the way of the future if quantum dots catch on.

http://en.wikipedia.org/wiki/Quantum_dot_cellular_automaton
http://nd.edu/~qcahome/

Cheers,
-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alto-2?

2011-05-25 Thread Jecel Assumpcao Jr.
Ian Piumarta wrote on Wed, 25 May 2011 21:20:24 -0400
> Dear Casey,
> 
> > a) I want to play with software
> > b) I want to play with FPGAs
> 
> You could start with Thacker's 'Tiny Computer' (described from p.123 onwards
> in http://piumarta.com/pov/points-of-view.pdf) and add/fix whatever you think 
> is
> missing/broken.

Great advice, see also

http://projects.csail.mit.edu/beehive/

You might consider starting out with a simulator before moving on to
FPGAs. See the example microprocessor in

http://www.tkgate.org/

This tool is a lot of fun.

Unfortunately, I'll only be able to post a proper reply on Friday (and
can make comments on SiliconSqueak then).

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] tiny computer (was: The Elements of Computing Systems)

2011-01-07 Thread Jecel Assumpcao Jr.
Shawn Morel wrote on Mon, 03 Jan 2011 16:43:19 -0800
> Somewhat of a tangent - since Alan mentioned the Alto architecture.
> Chuck Thacker is working on a few interesting HW prototyping
> platforms at MS research.
> The FPGA prototype system:
> http://research.microsoft.com/en-us/news/features/bee3.aspx
> More interestingly, though, is beehive:
> http://projects.csail.mit.edu/beehive/Beehive-2010-01-MIT.pdf

I had mentioned this, but thanks for providing these links.

> On Jan 3, 2011, at 2:16 PM, Jecel Assumpcao Jr. wrote:
> [...] Chuck Thacker's series of TinyComputer
> processors also have an extremely high ceiling. One detail I mentioned
> to the professors is that if they take VHDL or Verilog as their starting
> point (as Chuck does and they are currently doing) then they will be
> left with a huge black box. The "start with just NAND" solution that
> "Elements" and Etienne took will result in a much deeper understanding.
> They don't have to be mutually exclusive - it is probably good to have
> the students design the same processor both ways. [...]

There are interesting differences between the various versions of Tiny
Computer. I'll try to compile a list with more links and some comments
about these differences from an educational viewpoint next week.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Re: Visual 6502

2011-01-07 Thread Jecel Assumpcao Jr.
Casey Ransberger wrote:
> Also inaccurate: in their slide deck, they call out that what they've
> done is "more like a simulation than an emulation," and that this
> approach reduced the amount of code they had to write, if their
> graphs are meaningful, by something like an order of magnitude. 

Different groups use the terms "emulation" and "simulation" in slightly
different ways, which can cause a lot of confusion.

For hardware developers, a simulator is some software that runs on your
PC to see if the design is correct or not. An emulator is a piece of
hardware that does the same job as what you are designing or some
important part of it. For example, a 6502 emulator would be a board with
a flat cable and a 40 pin connector which could plug into the socket of
an Apple II in place of a real 6502. This board would also be connected
to a PC or a logic analyser and would allow you to see what is happening
inside the processor while the board is running and even generate memory
accesses and stuff like that on a board that is not fully working.

For the retro-computing crowd, an emulator is any software that can
create a virtual old computer or video game closely enough to run the
old software. A simulator is a very detailed emulator which recreates
aspects of the original in order to more faithfully run the old
software. So an emulator might just grab a byte from the simulated
framebuffer and do a simple conversion before sending it to the video
card while a simulator might recreate what the original video chip did
and then convert the final result into what the modern video card needs.
The visual 6502 guys are using this definition. Normally, a simulation
is far more work than an emulation. But in their case the simulation is so
detailed (it goes all the way down to the layout in the silicon) that
the code was simple and generic and only needed the very detailed input
which they were able to obtain semi-automatically.
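
To make the framebuffer example concrete, here are two hypothetical C
fragments (the palette, the imaginary chip register and the address
arithmetic are all made up):

    #include <stdint.h>

    /* emulator style: grab a byte from the old framebuffer and do a
       simple conversion into a modern 32-bit pixel */
    static uint32_t emulate_pixel(const uint8_t *framebuffer, int i,
                                  const uint32_t *palette)
    {
        return palette[framebuffer[i]];
    }

    /* simulator style: first recreate what the original video chip did
       (fetch, per-scanline effects, ...), then convert that final
       result into what the modern video card needs */
    typedef struct {
        uint8_t scroll;           /* imaginary chip register */
    } VideoChip;

    static uint32_t simulate_pixel(const VideoChip *chip,
                                   const uint8_t *memory, int x, int y)
    {
        int addr  = ((y + chip->scroll) & 0xff) * 40 + x / 8;
        int bit   = 7 - (x & 7);
        int pixel = (memory[addr] >> bit) & 1;
        return pixel ? 0xffffffffu : 0xff000000u;
    }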

I said "slightly different", but in a sense these two uses of this pair
of terms are almost opposites. 

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Elements of Computing Systems

2011-01-03 Thread Jecel Assumpcao Jr.
I am *very* interested in this subject - not only do I hope that the
Squeak computer I am building will be itself an educational object, but
I am also helping two related projects. I'll briefly describe those two
projects before making comments on the "Nand to Tetris" course, but I
should mention that both of these take the "students should learn the
way I learned" approach with which I don't agree with at all. And it
isn't easy to help while letting them explore on their own. Watching
them happily test several little programs without pointing out that none
of their examples use indexed addressing and that they will need it a
lot later on is not easy for me.

Etienne Delacroix might be known to some in this list, but probably not
to most. An artist and a physicist, he is now back in his native Belgium
but spent most of the past decade roaming Brazil, Uruguay and other
countries in the region. He gave workshops to both university students
and to children where he used electronic components salvaged from old
computers to create interesting art. They would learn what transistors
are, how to use TTLs chips and then would mix Javascript software and
such to get their results. Etienne's own projects often included a Z80
processor.

Lately he became interested in opening up the black box that is the Z80
or Pentium II and studied several online texts to see how to build a
processor out of TTLs. He was not interested in simulators or FPGA
implementations but wants something that he, and his students, can
touch. I helped him play around with such radical stuff as the
Subtract-And-Branch-If-Negative processor (around 12 TTLs) and he was
able to simulate several versions of his e-cpu, which currently only has
an 8 bit address. Together we evaluated some 16 to 20 educational
processors (including "Hack" of the "Elements" course). If there is
interest I could compile a list of links to the ones that are available
online.

The other project is a book that two professors at my university want to
write. They teach introduction to digital logic and at the end of the
course the students have traditionally built a multiplication circuit
with an adder and control logic, but recently they have been doing
simple processors. Their idea is that the book (to be in Portuguese,
which will seriously limit their audience) would be roughly the course
they have been giving followed by a course on compiler writing with a C
compiler for their processor. Their processor is a simple 16 bit RISC
with instructions for reading from the keyboard and writing to the
screen (with redefinable characters - the students have done several
classic video games with this) implemented in entry level FPGAs
development boards. They are considering adopting this board for their
text: http://www.altera.com/b/nios-bemicro-evaluation-kit.html

This option doesn't have the video output like their older boards. The
student doing the compiler has not yet taken any courses on compilers,
which I see as a major problem and they see as an advantage. Their idea,
like Etienne, is that someone who is also learning is in a far better
position to teach than some expert who doesn't remember what it was like
not to know stuff. I also advised them that they should have some
interpreted environment running on their machine, not just
cross-compiled C. Even TinyBasic would do. Otherwise their
readers/students will have an initial experience in computing more
typical of the 1960s than the late 1970s/early 1980s.

Given this context, I found the material in "The Elements of Computing
Systems" very interesting. I got a little worried when I saw that the
authors seemed confused about what "von Neumann" and "Harvard"
architectures mean, but the rest of their stuff is great. The use of
simulators instead of actual hardware lowers the cost for their
audience, but I do feel that there is some educational value in actually
being able to touch something you built. The exaggerated simplicity of
Hack reduces the time spent designing the hardware but makes programming
it much more complicated (and not at all typical of assembly languages
people actually program in). But the idea is to develop only a single
program for Hack: a much nicer virtual machine. So it might be best to
think of Hack assembly as microcode.

One thing that is very hard to balance in an educational object is the
"low floor, high ceiling" thing. You have to make it simple enough to be
learned in a short time, but powerful enough to be useful in real life.
The "Elements" objects focus on the floor, so as soon as students have
learned their lesson the objects are thrown away never to be seen again.
Since they are just simulations anyway, they could hardly have any
practical value after the course so this seems like a good decision. If
I had to choose between teaching someone Logo to only later have them
replace it with something else or starting with something like C++ or
Java that they might use for years and years, I woul

Re: [fonc] paper from SPLASH 2010 - From OO to FPGA: Fitting Round Objects into Square Hardware?

2010-11-01 Thread Jecel Assumpcao Jr.
John Zabroski wrote:

> From OO to FPGA: Fitting Round Objects into Square Hardware? [1]
> was one of the interesting talks I sat in on when I attended SPLASH.

Thanks for the link!

> I was primarily interested in attending because of VPRI's long-range
> mission and the speculation that FPGA hardware will be fundamental
> to its top-to-bottom, side-to-side late-bound approach.  Ian was asked
> about compiling FONC stuff to FPGAs late 2009 [2], and he replied by
> saying FONC loves FPGA concepts but has no FPGA experts and so
> has no concrete plans for using FPGA at the moment [3].

Chuck Thacker has been doing interesting (but so far conventional)
things with FPGAs and his work is listed as part of the VPRI efforts in
the annual reports. So I would say there will eventually be results.

The group I am working with, called the "Reconfigurable Computing
Laboratory", is full of FPGA experts and publishes almost exclusively in
FPGA conferences.

> Jens Palsberg gave the talk. He proclaimed at the end that this will
> be one of the most important open, hard problems for the next 10
> years.

In his case, the bulk of the complexity is hidden inside the AutoPilot
tool created by a third party. I was very shocked that it worked so well
for him. Particularly impressive was getting the Richards benchmark to
fit into a reasonably sized FPGA.

Most of the state of the art in C-to-hardware is very disappointing. My
own proposal is that if the hardware and software versions of objects
can talk to each other, then you could use adaptive compilation to
convert just the tiny part of the application that has the most effect
on the performance (the "hot spot") into hardware and run the bulk of
the code on a processor.

> He said that there is a lot of redundancy between the front-end and
> back-end tools and we really need to collapse the two intermediate
> representations used into a single form, because too much information
> has to get reconstructed when you use a subset of the C language as
> an IR.

True, and every tool has a certain view of the hardware which might not
match an OO model that well. If you look at the Bluespec
(http://www.bluespec.com/) hardware description language, for example,
it deals with very different kinds of objects than Verilog (though it
gets compiled into that) or VHDL.
 
> Interestingly, Jens alluded during his talk that it took him 3 years to
> get this work published and he had to refine it to satisfy reviewers.

My own paper on adaptive compilation for FPGAs was rejected, but I have
to agree with the reviewer's complaints. Some here might have seen it
even so due to friends showing it to friends.

I don't like the premise that FPGAs as they are now are a given and the
software will have to deal with it, however. This related research into
a different architecture is very interesting, for example:

http://rala.cba.mit.edu/

http://fab.cba.mit.edu/classes/MIT/961.09/04.27/RALA/rala.ppt

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] On inventing the computing microscope/telescope for the dynamic semantic web

2010-10-14 Thread Jecel Assumpcao Jr.
Pascal J. Bourguignon wrote:
> No idea, but since they invented Java, they could have at a much lower  
> cost written their own implementation of Smalltalk.

or two (Self and Strongtalk).

Of course, Self had to be killed in favor of Java since Java ran in just
a few kilobytes while Self needed a 24MB workstation and most of Sun's
clients still had only 8MB (PC users were even worse, at 4MB and under).

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] goals

2010-07-13 Thread Jecel Assumpcao Jr.
Steve Dekorte wrote on Sun, 11 Jul 2010 15:33:37 -0700
> On 2010-07-11, at 06:18 PM, Jecel Assumpcao Jr. wrote:
> > It isn't
> > about making it smaller (though I also love that - ColorForth is one of
> > my favorite systems) but making it understandable so it can be built by
> > humans in such a way that it can become vast. Like the Internet.
> 
> That's a good point. I'd encourage the development of a *measure* of it. 
> With measures we can iterate. The speed of the feedback loop is only 
> as tight as our measures. Consider the last 40 years of language/tool 
> development vs hardware speed (which can be measured) improvements.

I don't know how to measure scalability except by growing something
until it breaks. One complication is that as things grow, they tend to
become more valuable, and so more resources are poured into evolving them
so they can continue to grow. At some point you get diminishing returns, but
such things usually go way further than you would have initially
guessed.

Of course, none of this helps if you want to decide which of A and B is
the more scalable option. Obviously the thing is to try to identify the
most critical bottleneck. For high performance computing, for example,
there is the "roof line" model which takes into account memory bandwidth
and floating point performance under different operating conditions:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.156.756&rep=rep1&type=pdf
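
The core of that model is a single formula, roughly:

    attainable performance = min(peak floating point rate,
                                 memory bandwidth * operational intensity)

where operational intensity is the number of floating point operations
performed per byte moved to or from memory.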

The lack of scalability that I was talking about is where a system
becomes too much for any person to understand. Though lines of code is
very simplistic, Alan has compared code sizes of various projects with
different kinds of books in a few of his talks. There are texts that
almost anybody can read and there are others that nobody ever will.
Certainly you can't understand something you have not read. So that is
one bottleneck. I bet there are others - even in a system that is
reasonably short there might be too many combinations of elements to
understand:

http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] goals

2010-07-11 Thread Jecel Assumpcao Jr.
Steve Dekorte wrote on Sat, 10 Jul 2010 03:22:29 -0700
> On 2010-07-10, at 12:25 AM, Hans-Martin Mosner wrote:
> > For quite some time I've been pondering the duality of the class/instance 
> > and
> > method/context relations. In some sense, a context is an object created by
> > instantiating its method, much like a normal object is instantiated from 
> > its class...
> 
> 
> Self does just that:
> 
>   http://labs.oracle.com/self/language.html
> 
> Io (following Self's example) does as well. In this recent video:
> 
>   http://www.infoq.com/interviews/johnson-armstrong-oop

Indeed, but these two languages are, perhaps, even better examples:

http://www.daimi.au.dk/~beta/

http://www.erights.org/elang/index.html

> Ralph Johnson talks about how long it takes for computing culture to absorb 
> new
> ideas (in his example, things like OO, garbage collection and dynamic message
> passing) despite them being obvious next steps in retrospect. I think 
> prototypes
> could also be an example of this. 
> 
> It seems as if each computing culture fails to establish a measure for its
> own goals
> which leaves it with no means of critically analyzing its assumptions 
> resulting in the
> technical equivalent of religious dogma. From this perspective, new technical 
> cultures
> are more like religious reform movements than new scientific theories which 
> are
> measured by agreement with experiment. e.g. had the Smalltalk community said 
> "if it
> can reduce the overall code >X without a performance cost >Y" it's better, 
> perhaps
> prototypes would have been adopted long ago.

When I think about the issue of FONC goals, I always remember Alan's
"supersized dog house vs Empire State Building" illustration. It isn't
about making it smaller (though I also love that - ColorForth is one of
my favorite systems) but making it understandable so it can be built by
humans in such a way that it can become vast. Like the Internet.

The other day I saw on the local news a three story building collapse as
if it had been imploded on purpose. The people who built it had
initially made just one floor, and it worked great. Then they added a
second floor, and it was nice. Now they were shocked that having a third
floor, which looked exactly like the original two, brought down the
whole thing. Neither architecture nor engineering were part of this
story, as far as I could tell. But this could be said of essentially all
programmers, computer scientists and "software engineers" that I know.

Weinberg's Second Law: If builders built buildings the way programmers
write programs, then the first woodpecker that came along would destroy
civilization.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Re: [squeak-dev] Sketchpad? (off-topic)

2010-06-19 Thread Jecel Assumpcao Jr.
Casey Ransberger wrote on Sat, 19 Jun 2010 12:44:01 -0700

> Apologies for the off-topic question, but does anyone know if the actual bits
> for Sketchpad are still extant somewhere? Is there any documentation that
> anyone has for the Lincoln TX-2? It'd sure be neat to emulate it.

This first place to look for information about old computers is
BitSavers, such as

http://www.bitsavers.org/pdf/mit/tx-2/

> Someone was asking me about object oriented programming with a head full
> of Java yesterday and I wanted to point at some things. Recently I had a 
> really
> great experience using an Apple IIgs emulator to explain HyperCard (kind of
> an odd version, but it worked,)

I had no idea that HyperCard ever ran on machines other than the classic
Mac.

> and it dawned on me that if we wanted to learn history by doing, we really
> ought to grab at whatever bits are out there and curate the damned things.
> And: selfish childish desire to play with Sketchpad! 

Now that there is (or seems to be - I was unable to test) a reasonable
emulator for the Alto, it would be nice if some disk images for it were
publicly available. Especially any Smalltalk related ones.

http://www.bitsavers.org/bits/Xerox/Alto/simulator/salto/

Since Sketchpad only ran on a single computer in the whole world, even
in its day most people exposed to it experienced it as movies instead of
a live demo. These movies are available today. Certainly an emulator
would be nice, but the i/o devices on PCs are different enough from the
TX-2 that I don't know how realistic the experience would be. Even a
Vectrex (an interesting videogame from 1982) emulator is rather
different from the real thing (which I had the pleasure of seeing in
action live about four years ago).

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] hardware

2010-06-18 Thread Jecel Assumpcao Jr.
Steve Dekorte wrote on Thu, 17 Jun 2010 16:06:45 -0700
> > [Self compiler technologies vs hardware]
> 
> It seems to me that those compiler tricks make assumptions about usage 
> patterns
> (they assume you're really not doing too much dynamic stuff) and aren't 
> generalizable.
> e.g. an associative memory can be used for generic databases (there is, after 
> all, a
> reason why routers use them) raising the possibility of a greater unification 
> of data
> and behavior.

Though Self and Io are often lumped together as "prototype based
languages", there are significant differences in the programming styles.
Self is very much a Smalltalk and it is normal for programmers to start
out programming in a Smalltalk style before evolving to something more
optimized for Self (doing more refactoring and exploration instead of up
front design). The compiler technology certainly was created to support
that more static style. The reverse was also true - programmers quickly
learned to avoid what the compiler didn't do well (dynamic inheritance,
for example). 

> Also, as you mentioned, there are a number of levels on which associative 
> memory like
> functionality is currently used. Could a fully associative memory system 
> unify all these
> usage patterns?

The problem with the von Neumann architecture is that you have millions or
billions of transistors on one side (superscalar CPU) and more billions
on the other side (RAM) but just a tiny bridge connecting them. This is
not very efficient in terms of power, though it has been less than a
decade since we started worrying about this below the supercomputer
level. With the distributed memory parallel machines you have at least
lots of little bridges connecting smaller groups of transistors.

A single, huge associative memory would have the same problems as the
von Neumann architecture even though you have moved some of the smarts to
the memory side. Our brains are proof that you can have highly parallel
associative memories and my RNA proposal is an attempt to see what the
extreme case of that might be. Some hybrid scheme which replaced a
single 1 GB associative memory with 1000 such memories of 1 MB each
would be worth investigating. How would they collaborate?

While I mentioned Ian's paper, the "context" (or "worlds") papers from
VPRI are equally relevant for this discussion. There the normal
<object, message> tuple used as the key for accessing the associative
memory is replaced by <object, message, context>, where you might be
interested in partial matches to reduce the duplication of information.
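
One way to read this (entirely my own C sketch, not anything from the
VPRI papers) is a key with a context field, where a miss on the exact
key falls back to an entry whose context is a wildcard, so unchanged
slots don't have to be duplicated per world:

    #include <stddef.h>

    typedef struct {
        const void *object;
        const void *message;
        const void *context;      /* NULL means "any context" */
        const void *value;
    } Entry;

    static const void *lookup(const Entry *table, size_t n,
                              const void *object, const void *message,
                              const void *context)
    {
        /* first pass: exact match, including the context */
        for (size_t i = 0; i < n; i++)
            if (table[i].object == object && table[i].message == message &&
                table[i].context == context)
                return table[i].value;

        /* partial match: same object and message, wildcard context */
        for (size_t i = 0; i < n; i++)
            if (table[i].object == object && table[i].message == message &&
                table[i].context == NULL)
                return table[i].value;

        return NULL;
    }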

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] hardware

2010-06-17 Thread Jecel Assumpcao Jr.
Steve Dekorte wrote on Thu, 17 Jun 2010 12:42:11 -0700

> Does anyone know of any projects that have used associative memories (which
> are now large and relatively cheap) for implementing dynamic runtimes? Could
> such an approach give us single cycle dynamic lookups and (for the most part)
> eliminate the need for complicated compilers?

I investigated this issue in 1990 when I had a scholarship to design an
object oriented processor as I was graduating in chip design.
Associative memories were particularly fascinating to me since they
aren't practical to implement with discrete components or even FPGAs.
After becoming available as standard components in the early 1980s, they
seemed to have disappeared by the time of this project. Even in custom
chips they don't seem to be used much - at least caches moved from fully
associative in early designs to two or four way associative today. MMUs
(the TLB, translation lookaside buffer, part) and network packet
matching seem to be their most popular current use.

Given that you are asking on this list, I imagine you have read Ian
Piumarta's "Quantum Object Dynamics" paper?

http://www.vpri.org/pdf/m2009002_qod.pdf

That is the kind of issues I was looking into back then, with a lot of
inspiration from Self. These days I see these ideas used in practice in
Io, Lua and Javascript (and probably others that I am not aware of). The
wonderful results that the Self group achieved with compilation
techniques on conventional architectures made me interrupt this line of
research. Even a one clock message send is no match for message sends
that are entirely compiled away!

A few years later I came up with the idea of turning the associative
memory on its head - instead of having the fastest possible match for a
single result, why not have many simultaneous searches even if each one
is the slowest possible algorithm (a linear search)? Like traditional
CAMs (content-addressable memories), this hardware is also impractical
to simulate on FPGAs or discrete parts. I plan to work on this after my
current SiliconSqueak project, and have a very brief description of it
at

http://www.merlintec.com:8080/hardware/19
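
To give a flavor of the idea, a very rough C sketch (the bank count,
sizes and stepping scheme are all my own assumptions): each small bank
scans itself linearly, one entry per cycle, but all the banks advance
on the same cycle, so many lookups are in flight at once.

    #include <stdint.h>

    #define BANKS      8
    #define BANK_SIZE  1024

    typedef struct {
        uint32_t key[BANK_SIZE];
        uint32_t val[BANK_SIZE];
        uint32_t query;           /* key this bank is looking for */
        int      pos;             /* where the linear scan has reached */
        int      busy;
        int      done;
        uint32_t result;
    } Bank;

    static Bank bank[BANKS];

    /* advance every bank by one "clock cycle" */
    static void cam_cycle(void)
    {
        for (int b = 0; b < BANKS; b++) {
            if (!bank[b].busy)
                continue;
            if (bank[b].key[bank[b].pos] == bank[b].query) {
                bank[b].result = bank[b].val[bank[b].pos];
                bank[b].done   = 1;
                bank[b].busy   = 0;
            } else if (++bank[b].pos == BANK_SIZE) {
                bank[b].done = 1;         /* not found in this bank */
                bank[b].busy = 0;
            }
        }
    }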

-- Jecel
P.S.: I don't know if this will make it to the list - my last few posts
bounced


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] movies list (was: Figuring out what you all want to hear)

2010-03-16 Thread Jecel Assumpcao Jr
Thiago Silva wrote on Wed, 10 Mar 2010 05:06:53 -0300
> you might be interested in the following transcript of the '97 oopsla speech:
> http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html
> 
> I also have some material in my disks. Doing a little scanning on
> jecel's list and my own, I think I've found 3 other speeches of Alan:
> 
> The Ceremony of Awarding the Honorary Doctorate of Kyoto University to
> Dr. Alan Kay
> http://www.youtube.com/watch?v=4yxXk9IUs6Q
> 
> Program for the future (Andries Van Dam and Alan Kay on Engelbart's vision)
> https://admin.adobe.acrobat.com/_a295153/p99875217/
> 
> How Simply and Understandably Could The "Personal Computing
> Experience" Be Programmed?
> http://irbseminars.intel-research.net/AlanKay.wmv

Thanks for the tip! I have added these, and two more from Youtube, to

http://www.smalltalk.org.br/movies/

This really should be a wiki page so other people could add the stuff
they find, though keeping the spam out is always a problem.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Figuring out what you all want to hear

2010-03-09 Thread Jecel Assumpcao Jr
John,

you might find my list of Smalltalk related movies interesting:

http://www.smalltalk.org.br/movies/

One problem is that several people I know who really should see these
movies have trouble with English. I could create subtitles, but then
would have to host the modified versions somewhere and it would be a
huge effort and I would have to get the proper permissions. A transcript
would make all this a lot easier and would allow people who don't have
time to sit through a whole movie to quickly read the same information.

Another issue is that if you put all of Alan's talks back to back, there
is a lot of repetition. Since each one was for a different audience, it
couldn't be otherwise. But it might be interesting to extract the best
parts from each one and edit them into a reasonably short movie.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Smalltalk hardware (was: Reading Maxwell's Equations)

2010-02-28 Thread Jecel Assumpcao Jr
Kurt Stephens wrote:

> Smalltalk did not spawn an entire industry of specialized hardware like 
> Lisp.

There was a lot more development in that area than most people are aware
of:

http://www.merlintec.com:8080/hardware/26

> However Lisp hardware is a collector's item now. :)

Only two architectures from that era have modern implementations. Being
*the* C/Unix machine for years and years didn't save the VAX, for
example. So I am for trying again to see what happens.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Re: [Ometa] FPGA targets?

2009-03-03 Thread Jecel Assumpcao Jr
Gerardo Richarte wrote on Tue, 03 Mar 2009 09:14:50 -0200
> Hi Ian,
> 
> Ian Piumarta wrote:
> > I'd love to see somebody figure out how to dynamically generate bit
> > files from an intermediate representation (Jolt ASTs, for example) to
> > allow reprogramming of the hardware on the fly.
> Take a look at project Madeo
> (http://www.esug.org/Conferences/2008/Innovation+Technology+Awards/Submissions)
> If I correctly recall from their ESUG presentation, they could generate
> bit files for some specific FPGAs, and all their work was Smalltalk based.

This is a very impressive project! A more limited one, also implemented
in VisualWorks Smalltalk, is the "Interactive Design and Simulation
System"

http://www.xs4all.nl/~averschu/idass/

This lets you describe and simulate digital systems with a combination
of graphical and simple text notations. When you are done, you can use
the "alien" translator to generate Verilog files that can be used, for
example, to program an FPGA development board. The rules-based text to
text translator used for this could probably be done far more
efficiently in OMeta.

A friend of mine, Reinaldo Silveira, was very interested in the
parallels between hardware blocks wired together and software objects
sending messages to each other. He initially investigated Java for his
PhD thesis, then played around with Squeak and did some complete system
simulations and then finally decided to extend Self into a SelfHDL.
Unfortunately, all of his work is in Portuguese. The pictures in his
thesis or on page 6 of this paper can give even those who can't read his
texts an idea of how he extended Morphic:

http://www.iberchip.org/IX/Articles/PAP-046.pdf

He was particularly fascinated with how all the process related stuff in
Self is implemented in the language with a single primitive (TWAINS -
transfer and wait for interrupt or signal) so that he could integrate
the operation of "normal" code with his simulation framework.

My own master's project is called "Adaptive Compilation for
Reconfigurable Computers in Mobile Robotics" and is also based on the
idea that hardware and objects can be made to look the same. The initial
hardware configuration will be six SiliconSqueak processors running a
software-only implementation of the application. Note that this is not a
sequential Smalltalk-80 program, but is divided into parts that
communicate with each other using a given protocol (see stream based
programming). After the app is running for a while, the most critical
parts of the code can be recompiled (using type feedback) to increase
their performance. If some part of even the optimized code remains a
major hotspot, then it would be recompiled a second time but now into
dedicated hardware which would replace one of the six SiliconSqueak
cores (this is a Virtex-4 FPGA which can be partially reconfigured while
the rest continues to operate normally). Eventually this hardware might
get changed back to a general processor or else a second processor might
get replaced with a different dedicated hardware block (leaving just
four processors for the software part), depending on the current needs
of the application.
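
To show what "exactly the same interface" could look like, a rough C
sketch (the message format, the unit numbers and the network primitive
are made up for illustration):

    #include <stdint.h>

    typedef struct {
        uint32_t selector;
        uint32_t argument;
    } Message;

    typedef uint32_t (*SoftwareMethod)(uint32_t argument);

    /* stand-in for the on-chip network; a real system would queue the
       message and wait for the reply */
    static uint32_t network_send(int unit, Message m)
    {
        (void)unit;
        return m.argument;
    }

    typedef struct {
        int            in_hardware;    /* flipped when the FPGA is reconfigured */
        SoftwareMethod software;       /* compiled code for this block */
        int            hw_unit;        /* network address of the hardware block */
    } Block;

    static uint32_t dispatch(Block *b, Message m)
    {
        if (b->in_hardware)
            return network_send(b->hw_unit, m);    /* hardware version */
        return b->software(m.argument);            /* software version */
    }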

Sadly all my texts (except for the robot vision parts) are also in
Portuguese. There are two obvious problems with compiling software
blocks into hardware: it would take a very long time and the actual bits
for the FPGA can only be generated with closed tools running on a PC
(and not on the FPGA hardware itself). So I will have to cheat and have
to pre-compile a library of key blocks which the FPGA will then load as
needed. In this case it isn't really adaptive compilation anymore but
more like the hardware "executables" as in Borph -

http://www.eee.hku.hk/~hso/publications.html

The key thing is to have some model of parallelism which is used by the
source code. If you try to extract automatically parallelism from
"normal" application code you will just make things needlessly
complicated for yourself.

-- Jecel


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] fpga (was: other projects)

2007-11-27 Thread Jecel Assumpcao Jr
Ian Piumarta wrote:
> Several of us here (at VPRI) are interested in the potential of  
> better synergy between hardware and software.  Something we'd love to  
> see is the *OLA back-end generating netlists for FPGAs, or better  
> still reprogramming them on the fly (which I'm told is possible, but  
> I've yet to see an example).  Just-in-time software deserves just-in- 
> time hardware, no? ;-)

I was able to convince my advisor that my master's thesis (due next
March) would be much more interesting if I did something like this
rather than simply implementing a collision avoidance algorithm (robotic
vision) in hardware.

The idea is to extend the Self/Hotspot/Strongtalk compilation technology
an extra step. Code is initially interpreted or compiled with a very
simple (but fast) compiler. When a method is determined to be a hot spot
in the system, a second compiler generates much better code for it. If
it proves to be really critical to system performance then a third
compiler would generate hardware for it.

Unfortunately there are complications. The most critical one is the
black box nature of modern FPGAs described in detail by Toby Watson in
another message in this thread. Xilinx bought the 62xx technology (also
mentioned by Antoine van Gelder) and killed it, while the Atmel AT40K
series hasn't been seriously updated since the late 1990s. So given that
some vendor's proprietary tools will have to be used as the back end of
the third compiler and these tools won't run on the FPGA machine itself,
I will have to content myself with loading previously compiled modules
instead.

Another complication is that partial reconfiguration is currently very
complicated to do, when it is available at all. The Virtex-4 that the
development kit I bought for this project uses is the state of the art
in this regard. A design in the FPGA itself can include a special
component (called ICAP, if I remember correctly) that allows it to
examine and change the programming of other regions of the FPGA even as
it continues to run. The lower end Spartan 3 chips that I plan to use
for the children's computer don't have this ability, but they can be
programmed from an external source such that part of the FPGA's
configuration is changed while another part is still running. In either
case, actually generating the correct bits to do this requires a lot of
work due to poor support from the current tools.

The idea in my project is that the initial configuration will have a
number of processors (nine, for example) all alike and connected via an
onboard network. When code generated by the second compiler for some
method takes up more than 100% of one of these processors and it happens
to have a corresponding pre-compiled hardware alternative, then one of
the processors (it doesn't have to be the same one) is replaced by the
hardware block. Figuring out when a given hardware block isn't being
used enough and could be replaced by a generic processor is a little
more complicated. Since the hardware block presents exactly the same
interface to the rest of the system (messages over the network) that the
software it replaced did, this switching back and forth is completely
transparent to the rest of the application.

Steven H. Rogers mentioned the slowness of reconfiguring the FPGA which
would make a software based multiprocessor like Chuck Moore's Intellasys
chip more desirable. Partial reconfiguration would help with that since
the reconfiguration time would then be proportional to the area being
changed rather than the size of the FPGA being used. In my project it is
unlikely that I will have time to implement partial reconfiguration and
so will probably reprogram the whole FPGA every time, even though it
might take nearly a second to do so. But my base configuration is very
similar to the SEAForth design (but with fewer and slower processors)
and the reconfiguration time will be taken into account by the
scheduler, so hardware generation won't be used unless it will make a
significant difference. There are some guys doing interesting things
with reconfigurable processors that can have their instructions changed
in just a few microseconds:

http://www.stretchinc.com/technology/

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] productivity

2007-11-23 Thread Jecel Assumpcao Jr
Waldemar Kornewald wrote on Fri, 23 Nov 2007 12:56:48 +0100
> On Nov 23, 2007 1:36 AM, Wm Annis wrote:
> > (Oops.  Sent only to WK first.)
> 
> I hope the mailing list settings get fixed at some point.

Well, this is just another example of how things aren't as simple as you
think they are. The current settings cause a set of problems but
changing them won't "fix things" - it would just cause a different set
of problems (people sending messages that they wanted to keep private to
the whole list, for example).
 
> > *Everything* we do to communicate with computers is pretty unnatural.
> > I don't see how learning different precedence rules to program is any
> > different from learning that "if (a == 3 or 4 or 5) ..." doesn't mean the
> > same thing to a computer that it means in English.
> 
> Is this an excuse to make computer suck? What are we talking about here?
> 
> Let's stop thinking in terms of implementation complexity (math
> precedence won't add much, anyway) and start thinking in terms of how
> to make computers easier, more natural, and less error-prone for
> end-users (in this case, programmers using COLA).

Having no operator precedence causes some problems; having it causes
other problems. Some people will always have problems with one option,
others with the opposite option and still others will have problems with
all options at different times. This is like the email list options or
having arrays indexed starting with 0 vs starting with 1.

Note that APL was originally created as a math notation and it abandoned
operator precedence as too error prone for humans. Lisp was mentioned
and Forth is another example, but APL uses infix notation and so is more
relevant for this discussion.

It is interesting that Apple Logo had reasonably conventional
precedence but TI Logo didn't. When I created SuperLogo in 1983 and
then NeoLogo in 1994 this issue took up a significant part of my
language design and implementation time. Like you said, the
implementation complexity is not a problem at all. Anything we can
imagine can be done rather easily. The real problem is having a scheme
that makes sense to the users.

One possible answer is "to each his own". Letting everyone change
everything is one of the whole points of COLA, after all. But if people
need to work together then choices have to be made, as the mess that was
created in Smalltalk-72 and -74 (where everyone defined their own
syntaxes) proved so well.

The C precedence rules are very popular right now, but they are often
confusing. And these rules only go so far in matching conventional math
notation (of which there are several incompatible variations, hence the
APL effort). Personally, I think that if you are going to follow that
path then what the Fortress guys (no pun intended) are doing is a good
option.
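
A classic illustration of that confusion, in plain C (nothing to do
with any of the projects above):

    #include <stdio.h>

    int main(void)
    {
        int flags = 6;

        if (flags & 2 == 2)      /* parsed as flags & (2 == 2), i.e. flags & 1 */
            printf("this does NOT print for flags = 6\n");

        if ((flags & 2) == 2)    /* what was almost certainly meant */
            printf("this does print\n");

        return 0;
    }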

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] other projects (was: productivity)

2007-11-22 Thread Jecel Assumpcao Jr
Waldemar Kornewald wrote on Wed, 21 Nov 2007 07:57:20 +0100:
> There are more of such projects? I haven't found anything as serious
> as this one.

I consider my own Neo Smalltalk project (previously known as Self/R and
before that as Merlin OS) to have a lot in common with this one. I also
want to have a small, self contained and self described system that can
be understood by a single person (which in my case might be an extremely
gifted young person and this requires some thought during the design).
The self description includes the hardware, so in this regard Neo is a
bit more ambitious than Fonc though that makes some things much simpler.

A major difference between the two projects (not counting how much there
is to download and to read - practically nothing yet, in my case) is
that my descriptions are created by direct manipulation of persistent
objects while Fonc is about translations between abstraction levels. The
latter approach is far, far more powerful but the former is more
accessible to a larger group of people by being more concrete.

http://www.merlintec.com:8080/software/
http://www.merlintec.com:8080/hardware/

The above Swikis, messed up as they are, have the most recent
information about my project.

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc