Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-26 Thread Tony Bowden

On Sat, May 25, 2002 at 05:44:21PM +0100, Greg McCarroll wrote:
> What do they need this power for? I agree it's a shame that it's not
> being used, but what is the average company going to do with it?
> Sorry if I'm being thick, but I just can't think of anything that an
> average company (I'm ignoring scientific companies and major financial
> organisations) can parallelise efficiently across lots of PCs.

They don't need to be parallelising things. Just doing things on the
desktop that would usually need a server.

> Hmm, I'm not sure the web is what I'd traditionally think of as a C/S
> model, ymmv.

I mean this merely to the extent that the client is reasonably dumb and
the 'heavy lifting' is being done somewhere on a server. And that your
data isn't really available to you other than in the way that the server
chooses. (Your average financial info site, for example, doesn't allow
you to play with your share portfolio in ways that they haven't
programmed for you.) 

> Are you thinking about something like a supermarket site sending lots
> of product information in one go and then letting the customer browse
> using Javascript on their local machine?

Possibly. As an example, take DigiGuide. You can access their
television listings either through a web interface, or through software
which downloads the latest info daily, and lets you play with it on your
local machine. I *much* prefer the software version, mostly because the
web version is just too slow, but also because the interface is much
better. But one of the reasons the interface is better is that until
recently doing a decent interface in the browser was practically
impossible. As browser-based applications become more realistic,
someone like DigiGuide can stop splitting their development effort and
work on one interface, which can either be run off their server, using
their copy of the data, or off your own desktop server, with your
downloaded copy of the data. 

I believe there are many more places where this sort of thing will start
to happen, or become more common / practical / useful / whatever.

> Never underestimate the power of taking some old technology and adding
> a liberal dose of pseudo-technical marketing bullshit, and this isn't
> a bad thing in my book.

*grin*

Tony





Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-26 Thread Tony Bowden

On Sun, May 26, 2002 at 06:12:47AM -0700, Paul Makepeace wrote:
> In a typical large LDAP directory there will be a lot of information
> that the end user does not have access to (passwords, or info about the
> physical location of company documents). Or data they have access to
> but cannot change (e.g. who their manager is). 
> Even having a subset of data being spread around (name, dept, tel, fax,
> email) still doesn't get around this problem, AFAICS.

I'm not sure why the subset doesn't get around it? Information that is
tightly controlled will probably need a centralised point that controls
that access. Everything else can be spread. Or, I suppose you could
build the ACL stuff into the distributed server in some way - depends on
just how secure you need to be?
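
Something like this, as a very rough sketch (attribute names invented,
and the list of what counts as public would itself need agreeing
somewhere):

  #!/usr/bin/perl
  # Rough sketch: split a directory entry into a freely-spreadable
  # subset and a centrally-held one. Attribute names are made up.
  use strict;
  use warnings;

  my %PUBLIC = map { $_ => 1 } qw(name dept tel fax email);

  sub split_entry {
      my ($entry) = @_;
      my (%spread, %central);
      for my $attr (keys %$entry) {
          if   ($PUBLIC{$attr}) { $spread{$attr}  = $entry->{$attr} }
          else                  { $central{$attr} = $entry->{$attr} }
      }
      return (\%spread, \%central);  # spread goes to peers, central stays put
  }

  my ($spread, $central) = split_entry({
      name     => 'J Bloggs', dept    => 'IT', tel => '1234',
      password => 's3kr1t',   manager => 'A N Other',
  });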

> Additionally there is of course the standard data integrity issue,
> either malicious or accidental interference.

*nod*. Definitely an issue, but it doesn't seem insurmountable. If
anything, the fact that the information is distributed means that some
of the problems there are lessened. 

> Updates ("replication"): this isn't a nicely solved problem with today's
> centralised LDAP stores. Multi-master updates are hard. Freenet is
> struggling with the updates idea and still there is essentially no
> search capability (last I checked, which was a while ago).

Agreed. I don't think any of this is trivial, by any means. But I also
don't think it's completely unsolvable. Current systems aren't perfect
either, by a long way. 

But I think a lot of stuff is going to happen in this area. It's not a
panacea, and there are lots of things that won't lend themselves to it.
But some things will, and some of them will probably be easier than we
expect.

In Northern Ireland, at least, 90% of businesses have fewer than 20
staff. The sorts of systems that they need are very different from those
needed by organisations with 20,000 staff. A lot of the traditional
computing models are just too expensive for them. If people can write
relatively simple applications that do some of these things in a
distributed way, so that they don't really need to invest in servers
and so on, I'd say there'd be quite a lot of interest.

Tony





Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-26 Thread Paul Makepeace

On Fri, May 24, 2002 at 06:08:58PM +0100, Tony Bowden wrote:
> As an example, think of a web-based corporate 'address book' application.
> Currently anyone looking up an address would load a page off a central web
> server. But it's relatively simple to have that move to each employee's PC
> (which may well be more powerful than the server), at least for querying
> - but still as a web-based application. For updates, for now it would
> probably still submit to a central database, which would then get sent
> out to the individual PCs. But it wouldn't be too difficult to have
> this percolate in a P2P manner, removing the need for the centralised
> server altogether.

I'm curious how you propose to get around:
1) access control lists
2) distributed updates

In a typical large LDAP directory there will be a lot of information
that the end user does not have access to (passwords, or info about the
physical location of company documents). Or data they have access to
but cannot change (e.g. who their manager is). Or data that is accessible
from a subset of IP addresses. Additionally there is of course the
standard data integrity issue, either malicious or accidental
interference.

Even having a subset of data being spread around (name, dept, tel, fax,
email) still doesn't get around this problem, AFAICS.

Updates ("replication"): this isn't a nicely solved problem with today's
centralised LDAP stores. Multi-master updates are hard. Freenet is
struggling with the updates idea and still there is essentially no
search capability (last I checked, which was a while ago).

Is there some light here?

Paul (hasn't really looked at this stuff for a year or so now)

-- 
Paul Makepeace ... http://paulm.com/

"What is moist and chewy and chocolaty and has no calories? Pigs flying
 is a relative concept."
   -- http://paulm.com/toys/surrealism/




Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-25 Thread Rob Partington

In message <[EMAIL PROTECTED]>,
Greg McCarroll <[EMAIL PROTECTED]> writes:
> What do they need this power for? I agree it's a shame that it's not
> being used, but what is the average company going to do with it?
> Sorry if I'm being thick, but I just can't think of anything that an
> average company (I'm ignoring scientific companies and major financial
> organisations) can parallelise efficiently across lots of PCs.

Speaking from an ISP perspective, you could distribute many things across
them -- from LDAP lookups (each desktop contains, say, half the entries
distributed according to a hash) to news readers (each desktop handles,
say, 50 readers).  I know the news reader idea is workable, not sure 
about the LDAP one...
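
Very roughly, the hash idea looks like this (hostnames invented;
Digest::MD5 used purely as a handy stable hash):

  #!/usr/bin/perl
  # Sketch: pick which desktop "owns" a directory entry by hashing its
  # key. Plain modulo-N means entries reshuffle whenever a desktop
  # joins or leaves; consistent hashing would fix that.
  use strict;
  use warnings;
  use Digest::MD5 qw(md5);

  my @desktops = qw(pc-alice pc-bob pc-carol pc-dave);

  sub owner_of {
      my ($key) = @_;
      my $n = unpack 'N', md5($key);    # first 32 bits of the digest
      return $desktops[ $n % @desktops ];
  }

  print owner_of('uid=gem,ou=people'), "\n";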

Anything that distributes work over multiple machines is a big win for
redundancy and performance, both of which you need in an ISP.
-- 
rob partington % [EMAIL PROTECTED] % http://lynx.browser.org/




Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-25 Thread David Cantrell

On Sat, May 25, 2002 at 05:44:21PM +0100, Greg McCarroll wrote:

> What do they need this power for? I agree it's a shame that it's not
> being used, but what is the average company going to do with it?
> Sorry if I'm being thick, but I just can't think of anything that an
> average company (I'm ignoring scientific companies and major financial
> organisations) can parallelise efficiently across lots of PCs.

Genetic algorithms for tuning parameters in a product, perhaps?  Like what
is used for tuning the parameters for spamassassin.
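
Roughly the shape of it, if anyone's curious - a toy sketch, with a
made-up fitness function; in real life that function would be "run the
filter over a test corpus and score it", which is exactly the expensive
bit you could farm out across all those idle desktops:

  #!/usr/bin/perl
  # Minimal GA sketch: evolve a vector of weights. Everything here is
  # invented for illustration, especially fitness().
  use strict;
  use warnings;
  use List::Util qw(sum);

  my $GENES = 5;
  my @pop = map { [ map { rand(2) - 1 } 1 .. $GENES ] } 1 .. 50;

  sub fitness {                          # toy: prefer weights near 0.5
      my ($w) = @_;
      return -sum( map { ($_ - 0.5) ** 2 } @$w );
  }

  for my $gen (1 .. 100) {
      @pop = sort { fitness($b) <=> fitness($a) } @pop;
      splice @pop, 25;                   # keep the fitter half
      while (@pop < 50) {                # breed replacements, with mutation
          my ($mum, $dad) = @pop[ rand 25, rand 25 ];
          my @child = map { rand() < 0.5 ? $mum->[$_] : $dad->[$_] }
                      0 .. $GENES - 1;
          $child[ rand $GENES ] += rand(0.2) - 0.1 if rand() < 0.3;
          push @pop, \@child;
      }
  }
  @pop = sort { fitness($b) <=> fitness($a) } @pop;
  printf "best: %s\n", join ', ', map { sprintf '%.2f', $_ } @{ $pop[0] };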

-- 
David Cantrell | Benevolent Dictator | http://www.cantrell.org.uk/david

  Vegetables are what food eats




Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-25 Thread Greg McCarroll

* Tony Bowden ([EMAIL PROTECTED]) wrote:
> On Fri, May 24, 2002 at 10:39:53PM +0100, Chris Ball wrote:
> > It sounds like the model you're proposing is centralised-with-caching,
> > rather than p2p.  
> 
> In the short/medium term, absolutely. As I said in the original:
>   My personal take is that we'll start seeing more and more web-based
>   applications move, at least partially, to the desktop, co-ordinating
>   initially with centralised servers, but then gradually moving to a purer
>   'P2P' type set-up.
> 
> For most things, I see this happening as an intermediate stage. Most
> desktop machines spend most of their time (even when being used) doing
> next to nothing. Organisations have a huge amount of potential
> processing power on the desktop, but aren't using it. 

What do they need this power for? I agree it's a shame that it's not
being used, but what is the average company going to do with it?
Sorry if I'm being thick, but I just can't think of anything that an
average company (I'm ignoring scientific companies and major financial
organisations) can parallelise efficiently across lots of PCs.

> More and more work will be pushed back to the desktop. This was
> the trend in computing for a long time. Until the web explosion pushed
> everything back to a client/server model again. 

Hmm, I'm not sure the web is what I'd traditionally think of as a C/S
model, ymmv.
 
> Even your average commercial web site doesn't yet take full advantage of
> the power in most of the recent browsers, which would allow so much more
> to be achieved on the client before needing to send another request back
> to the server. As people start discovering what can be done with this, I
> think we'll start to see quite a major reduction in the number of
> round-trip requests needed to achieve many web-based tasks. Which is
> good for everyone (except maybe the bandwidth companies!). Less load on
> the server is good. Less waiting on the part of the user. 

Are you thinking about something like a supermarket site sending lots
of product information in one go and then letting the customer browse
using Javascript on their local machine?

> > Isn't this all just adding latency to the client->server model without
> > really taking away any centralisation or adding any benefits?

Never underestimate the power of taking some old technology and adding
a liberal dose of pseudo-technical marketing bullshit, and this isn't
a bad thing in my book.

Greg 


-- 
Greg McCarroll http://www.mccarroll.org.uk/~gem/
   jabber:[EMAIL PROTECTED]
msn:[EMAIL PROTECTED]




Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-25 Thread Tony Bowden

On Fri, May 24, 2002 at 10:39:53PM +0100, Chris Ball wrote:
> It sounds like the model you're proposing is centralised-with-caching,
> rather than p2p.  

In the short/medium term, absolutely. As I said in the original:
  My personal take is that we'll start seeing more and more web-based
  applications move, at least partially, to the desktop, co-ordinating
  initially with centralised servers, but then gradually moving to a purer
  'P2P' type set-up.

For most things, I see this happening as an intermediate stage. Most
desktop machines spend most of their time (even when being used) doing
next to nothing. Organisations have a huge amount of potential
processing power on the desktop, but aren't using it. That's not going
to be acceptable for much longer. And attempts to move back to dumber
terminals have mostly failed. Rather than fighting Moore's Law - live
with it. More and more work will be pushed back to the desktop. This was
the trend in computing for a long time. Until the web explosion pushed
everything back to a client/server model again. 

Even your average commercial web site doesn't yet take full advantage of
the power in most of the recent browsers, which would allow so much more
to be achieved on the client before needing to send another request back
to the server. As people start discovering what can be done with this, I
think we'll start to see quite a major reduction in the number of
round-trip requests needed to achieve many web-based tasks. Which is
good for everyone (except maybe the bandwidth companies!). Less load on
the server is good. Less waiting on the part of the user. 

I was in Boston last week, using a 50k dial-up. I've been so used to
big corporate pipes that I'd forgotten how fundamentally slow the
internet really is. 

> Also, who keeps track of whether the data is up to date?  If my client
> machine is asking one of the new desktop servers for a contact record,
> does the desktop server then go back and check hashes against the
> main server?  Does it return old information?

Dunno. That's an implementation issue :)

I don't really know enough about that sort of stuff, but I don't believe
that you need to be constantly checking with a central server. Even if
it takes a few hours (days?) for the information to reliably make its
way around the network, I don't think that's unworkable - not for an
address book anyway. 
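
If you did want something better than "wait for it to drift around",
the cheap trick is timestamped records and newest-write-wins whenever
two copies meet. A rough sketch (field names invented, and it assumes
roughly-sane clocks):

  #!/usr/bin/perl
  # Sketch: reconcile two copies of the address book, record by
  # record; the copy with the newer timestamp wins.
  use strict;
  use warnings;

  sub merge {
      my ($mine, $theirs) = @_;
      for my $id (keys %$theirs) {
          if ( !$mine->{$id}
            || $theirs->{$id}{updated} > $mine->{$id}{updated} ) {
              $mine->{$id} = $theirs->{$id};    # take the newer record
          }
      }
      return $mine;
  }

  my $mine   = { gem => { tel => '1234', updated => 1022400000 } };
  my $theirs = { gem => { tel => '5678', updated => 1022450000 } };
  merge($mine, $theirs);    # gem's record now carries the newer tel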

> Isn't this all just adding latency to the client->server model without
> really taking away any centralisation or adding any benefits?

At the very least it
  a) makes querying faster
     - maybe not that important in this instance, but this was only an
       example, after all
  b) reduces reliance on a server being constantly up
     - probably more important

If it can make it the whole way to having no central server, then you've
of course got the other major benefit of not needing a server...

Tony





Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Chris Ball

> "Tony" == Tony Bowden <[EMAIL PROTECTED]> writes:

>> But, it's not.  That's a potentially huge amount of data; people
>> aren't going to want to transfer that data to every machine they
>> might want to use, they're going to want a nice, central,
>> authoritative LDAP server to talk to.

Tony> They don't have to transfer it to every machine. Their
Tony> sys-admin can have the initial version set up on their
Tony> machine. If they need to use a machine that doesn't have it
Tony> installed, then it can ask the server - or any of the many
Tony> desktop servers that now exist. And, it could locally cache
Tony> all the information that it retrieves, so that it gradually
Tony> builds up the information that it needs.

It sounds like the model you're proposing is centralised-with-caching,
rather than p2p.  Also, who keeps track of whether the data is up to
date?  If my client machine is asking one of the new desktop servers for
a contact record, does the desktop server then go back and check hashes
against the main server?  Does it return old information?  Isn't this
all just adding latency to the client->server model without really
taking away any centralisation or adding any benefits?

Etc.  :-)

- Chris.
-- 
$a="printf.net"; Chris Ball | chris@void.$a | www.$a | finger: chris@$a
 chris@lexis:~$ perl -le'@a=($^O eq 'darwin')?qw(100453 81289 9159):qw
 (23152 19246 2040);while(<>){chomp;push @b,$_ if grep {$.==$_}@a}push
 @b,$^X;print ucfirst join(" ",@b[2,0,3,1]).","'


Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Tony Bowden

On Fri, May 24, 2002 at 06:23:41PM +0100, Chris Ball wrote:
> Tony> As an example, think of a web-based corporate 'address book'
> Tony> application.  ... it's relatively
> Tony> simple to have that move to each employee's PC (which may well
> Tony> be more powerful than the server), at least for querying - but
> Tony> still as a web-based application.
> But, it's not.  That's a potentially huge amount of data; people aren't
> going to want to transfer that data to every machine they might want to
> use, they're going to want a nice, central, authoritative LDAP server to
> talk to.

They don't have to transfer it to every machine. Their sys-admin can
have the initial version set up on their machine. If they need to use a
machine that doesn't have it installed, then it can ask the server - or
any of the many desktop servers that now exist. And, it could locally
cache all the information that it retrieves, so that it gradually builds
up the information that it needs.
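
A sketch of that gradually-built cache, with ask_server() standing in
for whatever the real lookup protocol turns out to be:

  #!/usr/bin/perl
  # Sketch: look in the local cache first, fall back to the central
  # server (or a nearby peer), and remember whatever comes back.
  use strict;
  use warnings;

  my %cache;

  sub ask_server {              # stub for the real remote lookup
      my ($who) = @_;
      return { name => $who, tel => 'unknown' };
  }

  sub lookup {
      my ($who) = @_;
      return $cache{$who} if exists $cache{$who};  # hit: no network at all
      my $entry = ask_server($who);
      $cache{$who} = $entry if defined $entry;     # remember for next time
      return $entry;
  }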

> There's lots of talk of making decentralised search engines, too, which
> I also don't see the point of.  

I also don't see the point of these. 

> I like the idea of p2p a lot, but there are so many scenarios in which
> it cripples the service it's trying to decentralise..

I agree completely. I just don't think an address book is one of them...

Tony




Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Chris Ball

> "Tony" == Tony Bowden <[EMAIL PROTECTED]> writes:

Tony> As an example, think of a web-based corporate 'address book'
Tony> application.  Currently anyone looking up an address would
Tony> load a page off a central web server. But it's relatively
Tony> simple to have that move to each employee's PC (which may well
Tony> be more powerful than the server), at least for querying - but
Tony> still as a web-based application.

But, it's not.  That's a potentially huge amount of data; people aren't
going to want to transfer that data to every machine they might want to
use, they're going to want a nice, central, authoritative LDAP server to
talk to.

There's lots of talk of making decentralised search engines, too, which
I also don't see the point of.  The reason Google[1] works so well is
that it has _all_ of the data, and can run inference on pages based on
how many other pages mention them, and where they fit in to the grand
scheme of things for a particular search request.

I like the idea of p2p a lot, but there are so many scenarios in which
it cripples the service it's trying to decentralise..

- Chris.

[1]: Or, for example, http://www.alltheweb.com/ *cough*.  
-- 
$a="printf.net"; Chris Ball | chris@void.$a | www.$a | finger: chris@$a
 chris@lexis:~$ perl -le'@a=($^O eq 'darwin')?qw(100453 81289 9159):qw
 (23152 19246 2040);while(<>){chomp;push @b,$_ if grep {$.==$_}@a}push
 @b,$^X;print ucfirst join(" ",@b[2,0,3,1]).","'


Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Tony Bowden

On Fri, May 24, 2002 at 04:53:06PM +0100, Leon Brocard wrote:
> Tony Bowden sent the following bits through the ether:
> > He did a keynote at the O'Reilly Emerging Technology Conference last
> > week, based roughly on the book.
> 
> Did you go?

Yes.

> Was it an interesting conference? 

Yes.

> Is p2p the future?

No. It's the present :)

My personal take is that we'll start seeing more and more web-based
applications move, at least partially, to the desktop, co-ordinating
initially with centralised servers, but then gradually moving to a purer
'P2P' type set-up.

As an example, think of a web-based corporate 'address book' application.
Currently anyone looking up an address would load a page off a central web
server. But it's relatively simple to have that move to each employee's PC
(which may well be more powerful than the server), at least for querying
- but still as a web-based application. For updates, for now it would
probably still submit to a central database, which would then get sent
out to the individual PCs. But it wouldn't be too difficult to have
this percolate in a P2P manner, removing the need for the centralised
server altogether.
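
The percolation needn't be clever, either. A rough sketch of the sort
of thing I mean (peer names and both stubs invented):

  #!/usr/bin/perl
  # Sketch: gossip-style spread of an update. Each node applies an
  # update it hasn't seen and forwards it to a couple of random peers;
  # the duplicate check is what makes the flood die out.
  use strict;
  use warnings;
  use List::Util qw(shuffle);

  my %seen;                                   # update ids already handled
  my @peers = qw(pc-alice pc-bob pc-carol);   # hypothetical peer list

  sub apply_locally { }    # stub: write the change into the local copy
  sub send_to      { }     # stub: whatever wire protocol suits

  sub receive_update {
      my ($update) = @_;
      return if $seen{ $update->{id} }++;     # already seen it: stop here
      apply_locally($update);
      my @pick = (shuffle @peers)[0, 1];      # pass it on to two peers
      send_to($_, $update) for @pick;
  }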

There are obviously a lot of problems in making something like this really
work in a large organisation, but they're mostly not 'pure' technology
issues at this point.

I think that most of this stuff will follow the maxim that people
are massively over-estimating the short-term effects, but massively
under-estimating the long-term effects.

> I looked at some of the presentations and they seemed quite
> interesting:
> http://conferences.oreillynet.com/pub/w/18/presentations.html

There were a lot of interesting presentations. I've only just got back,
as I was in Boston afterwards, so I haven't gotten around to writing
them up yet, but as I do they'll appear on my weblog:
 http://www.tmtm.com/insanity

Tony






Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Leon Brocard

Tony Bowden sent the following bits through the ether:

> He did a keynote at the O'Reilly Emerging Technology Conference last
> week, based roughly on the book.

Did you go? Was it an interesting conference? Is p2p the future?
I looked at some of the presentations and they seemed quite
interesting:
http://conferences.oreillynet.com/pub/w/18/presentations.html

Leon
-- 
Leon Brocard.http://www.astray.com/
Nanoware...http://www.nanoware.org/

... Drink wet cement and get really stoned




Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Tony Bowden

On Fri, May 24, 2002 at 01:50:38PM +0100, Simon Wistow wrote:
> Customers who liked this book may also like "Emergence" by Steven
> Johnson. Not that I've read it yet but he's done lots of articles for
> Wired and Salon. Like this one ..

He did a keynote at the O'Reilly Emerging Technology Conference last
week, based roughly on the book. It almost piqued my interest enough to
buy the book, but not quite enough. If I'd noticed it in a Barnes and
Noble I might have sat there for an hour or two and read it through...

Tony





Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Greg McCarroll

* Simon Wistow ([EMAIL PROTECTED]) wrote:
> 
> > Red Dwarf 
> 
> But it's actually a biblical quote (specifically Mark chapter 5, vs 9)
> 
> "And he asked him, What is thy name? And he answered, saying, My name is
> Legion: for we are many."
> 

It's a concept that appears repeatedly in sci-fi/horror; unsurprisingly
it makes an appearance in one of the Exorcist films (unsurprisingly, as
the biblical story is about an exorcism - I'd look up a quote to
support this but using that site is a swine ;-)). Surprisingly I can't
think of any BtVS episode that makes use of it, though Being John
Malkovich is similar, although that's probably coincidence. Also I'm
sure I've read a 2000AD prog that used it. Needless to say, a lot of
the horror films that use it can't help but use the quote.

On the subject of the Exorcist movies, I thoroughly recommend the
book, The Exorcist; it's a very enjoyable, well-written book. As far as
the films go, the 1st and 3rd are good, but the 2nd is appalling and
looks so, so dated nowadays. Although neither makes my top ten all-time
films.

Greg

p.s. my current top ten film list looks like 

#1 It's a Wonderful Life, #2 Cool Hand Luke, #3 Withnail and I, 
#4 The Great Escape, #5 Butch Cassidy and the Sundance Kid,
#6 Star Wars, #7 The Sting, #8 One Flew Over the Cuckoo's Nest,
#9 Mallrats, #10 Spartacus

sorry (i already had it in a file after a recent IRC discussion)
 

-- 
Greg McCarroll http://www.mccarroll.org.uk/~gem/
   jabber:[EMAIL PROTECTED]
msn:[EMAIL PROTECTED]




Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Simon Wistow


> Red Dwarf 

I should point out that this is tongue in cheek; the quote was used in
Red Dwarf (Series 6, Ep 2 - Legion):

CAT: What the hell are you, buddy?
LEGION: (Replacing his mask.) Kryten knows.
KRYTEN: I do?
LEGION: You suspect the truth.
KRYTEN: You mean that you are a gestalt entity, not a single creature
but a combination of individuals melded together to form one?
LEGION: "My name is Legion, for we are many"

http://www.geocities.com/TelevisionCity/8889/text/legion.txt

 

But it's actually a biblical quote (specifically Mark chapter 5, vs 9)

"And he asked him, What is thy name? And he answered, saying, My name is
Legion: for we are many."

- http://bible.christiansunite.com/bible2.cgi?v=kjv&b=Mk&c=5

-- 
: it's not the heat, it's the humanity





Re: [REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Simon Wistow

[snip]

Customers who liked this book may also like "Emergence" by Steven
Johnson. Not that I've read it yet but he's done lots of articles for
Wired and Salon. Like this one ..

http://www.wired.com/wired/archive/10.03/aigames_pr.html

Review here ...
http://www.wired.com/wired/archive/9.09/streetcred.html?pg=4


-- 
: it's not the heat, it's the humanity





[REVIEW] Creation - Life and how to make it by Steve Grand

2002-05-24 Thread Simon Wistow


I originally wrote this a couple of days ago but some weird combination
of key presses in Nifty Telnet on the Mac, Screen, Mutt and Nano
conspired to wipe it when I was nearly finished and then switch to my
sent folder and delete 4 mails from that. Go figure. As a consequence
it's been rewritten but it's a little stilted. Sorry.



Creation - life and how to make it
Steve Grand
ISBN 0-75381-277-0

Steve Grand is a digital god. 

I'm not just saying that as some sort of hero worship or as a glowing
recommendation of this book (although you could probably accuse me of
both) but because he conceives and creates binary creatures and imbues
them with life.

Steve Grand begat the Norns - the impossibly cute, wide-eyed inhabitants
of the game Creatures (and all its sequels). Norns were not just some
sophisticated Tamagotchi, nor were they a clever hack designed to appear
lifelike whilst a Wizard-like figure behind the screen pulled all the
strings - Norns were designed to be alive.

Despite (or, possibly, because of) having Igor Aleksander as a lecturer
at college I've remained skeptical about the possibility of artificial
life and machine intelligence (I shy away from saying the more normal,
and loaded, Artificial Intelligence), but I find intelligent agents in
games a fascinating field, and so I bought this book expecting it
to be a HOWTO guide for building seemingly intelligent characters mixed
in with some Neural Net theory.

However I was pleasantly surprised to find a wide-ranging and incredibly
complete (especially considering that it's only 263 pages long including
diagrams) book which was vaguely reminiscent of classic Feynman (but
without the misogyny).

This book debunks the last 50 years of AI research but is cautiously
optimistic about the next 50, claiming that we've been looking in
the wrong place, that AI research has failed its own Turing Test, and
that the solutions to the problems lie within this book.

Like any good theorem it starts off with the basics, by defining the
axioms that the rest of the proof can be built upon. Only in this case
it's more like redefining the basics - there is no such thing, argues
Grand, as matter.

I blinked hard a couple of times too.
 
Essentially the argument goes like this - matter is no more than a
disturbance in the universe, just like a wave is nothing more than a
disturbance in water. Sitting typing this on a laptop that I've just
found out wasn't worth one and a half grand because, essentially, it's
not there, is a little disconcerting, but after a short think and a nice
hot cup of tea it becomes a more palatable idea. Think of it like this:
the difficulty one has in believing that a thing is just a disturbance
in the universe is the same difficulty someone watching an incoming
tidal wave has in believing that it's 'just' a disturbance in the sea
and not a coherent thing.

For more on this read "The Matter Myth" by Paul Davies and John Gribbin.

From there Grand goes on to explain clumping - subatomic particles clump
together to form atoms, which form molecules, which form chemicals. At
this point, I have to admit, I was wondering what, precisely, this has
to do with games and intelligence. The answer is positive and negative
feedback loops, which drive evolution and give rise to emergent
behaviour.

What Grand shows is that some seemingly intelligent behaviour, such as
ants storing all their dead in a mass grave, is little more than emergent
behaviour. Ditto their path-finding ability. By attempting to reproduce,
piecemeal, pure learning or intelligence or reasoning, classical AI
research has missed the point - what is needed is a combination of
emergent behaviour, learning and emotions, and then you get the
intelligence 'for free'.

It works in the same way as writing the dynamics in a game. Pacman
didn't implement a physics engine - there was a simple rule that said
"you can't move into the walls". However the same rule would come for
free if it had been done with a physics engine. It would have Just
Worked [tm].
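
In (entirely hypothetical) Perl, the hand-rolled rule is about this big:

  #!/usr/bin/perl
  # Sketch of the hard-coded Pacman rule: one special case, no physics.
  # A physics engine would produce the same refusal as a side effect of
  # collision response - which is Grand's point about getting things
  # 'for free'.
  use strict;
  use warnings;

  sub can_move {
      my ($map, $x, $y) = @_;
      return $map->[$y][$x] ne '#';   # '#' marks a wall; that's the rule
  }

  my @map = ( [qw(# # #)], [qw(# . #)], [qw(# # #)] );
  print can_move(\@map, 1, 1) ? "ok\n" : "blocked\n";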

Still with me? Good, we're only half way through. And I've missed out
lots. 

Armed with these concepts we plunge into "God's Lego Set" - a kind of
design patterns for the universe - which examines the tools at our
disposal, before swerving neatly into "The whole Iguana", where we start
plumbing everything together so that in "Igor, hand me that screwdriver"
we can start building our virtual creature. Which is named Ron - Grand
unconvincingly tries to claim that it's named Ron because that was the
name of King Arthur's spear, and also because of the infamous (mostly
due to Fight Club) series of Reader's Digest articles about body parts,
such as "I am Jane's spleen" and, more relevantly, "I am Ron's brain".

With the equivalent of some virtual neurons and a small hormone factory
Ron quickly starts to take shape - what I liked best is the fact that
every time Grand describes a problem (and he's not afraid to admit the
mistakes he made and the cor