On Mon, 28 Jun 2004, Brad Wyble wrote:
> [...]
> This is usually the case in new technological domains.
> The first innovators get wiped out by the next generation
> that learns from their success.
>
> Nothing wrong with this (apart from being unfair), just
> capitalism at work. Someone will ste
Hi,
Because we use a lot of evolutionary learning methods, it will work more
like:
A whole population of Novamentes (10 or so for starters, later perhaps many
more) repeatedly tries out new MindAgents (cognitive-control objects) on some
test cognitive problems, and we see how well each one does.
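The selection loop described above can be sketched roughly as follows. This is a minimal toy, not Novamente's actual machinery: candidate "MindAgents" are stand-in parameter vectors, `evaluate` is a made-up fitness function, and the population size of 10 matches the figure in the message.

```python
import random

def evaluate(agent, problems):
    """Score a candidate on a suite of test cognitive problems.
    Here an 'agent' is just a parameter vector and each 'problem' a
    target value -- placeholders for real MindAgents and tasks."""
    return -sum(abs(a - p) for a, p in zip(agent, problems))

def evolve(problems, pop_size=10, generations=100, mutation=0.1):
    # Start with a small population of random candidates (10 or so).
    population = [[random.uniform(-1, 1) for _ in problems]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by performance on the test problems.
        ranked = sorted(population, key=lambda a: evaluate(a, problems),
                        reverse=True)
        survivors = ranked[:pop_size // 2]   # keep the best half
        # Refill the population with mutated copies of survivors.
        children = [[g + random.gauss(0, mutation)
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=lambda a: evaluate(a, problems))
```

The point of the sketch is only the shape of the process: try many variants in parallel, measure them on shared test problems, and let the better performers seed the next round.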
Ben, I hope you are going to keep a human in the
loop.
Human in the loop scenario:
The alpha Novamente makes a suggestion about some
change to its software.
The human implements the change on the beta
Novamente running on a separate machine, and tests it.
If it seems to be an improvem
Hi John,
Initially Novamente will not know anything about its underlying hardware
architecture.
Rather, it will learn procedures that are represented in a fairly abstract
mathematical form (combinatory logic) and that manipulate Novamente nodes
and links as primitives alongside ints, floats and
Hi Ben,
If the AI "knows" the machine as its natural context (stacks, registers,
etc., i.e., its "world"), then the supercompiled code should be the only code
it can comprehend and self modify. The code produced by the C++
compiler would be orders of magnitude more complex. Imagine an article
in yo
The idea is to maintain two versions of each
Novamente-internal procedure:
-- a version that's amenable to learning
(and generally highly compact), but not necessarily rapid to
execute
-- a version that's rapid to execute
(produced by supercompiling the former version)
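The two-version scheme can be illustrated with a toy: a compact tree representation that a learner could manipulate, paired with a translated-to-native fast version. This is a sketch under loose assumptions: the tiny expression language and the `compile_expr` helper are invented here, and plain code generation stands in for genuine supercompilation of combinatory-logic procedures.

```python
import operator

# Compact, learnable representation: expression trees such as
# ('add', x, y), ('mul', x, y), ('var',) for the input, or a constant.
OPS = {'add': operator.add, 'mul': operator.mul}

def interpret(expr, x):
    """Slow path: walk the learnable tree directly."""
    if expr == ('var',):
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    return OPS[op](interpret(a, x), interpret(b, x))

def compile_expr(expr):
    """Fast path: translate the tree into native code once
    (a stand-in for supercompiling the learnable version)."""
    def to_src(e):
        if e == ('var',):
            return 'x'
        if isinstance(e, (int, float)):
            return repr(e)
        op, a, b = e
        sym = {'add': '+', 'mul': '*'}[op]
        return f'({to_src(a)} {sym} {to_src(b)})'
    return eval(f'lambda x: {to_src(expr)}')

# Keep both versions of the procedure side by side, as described above.
expr = ('add', ('mul', ('var',), ('var',)), 3)   # x*x + 3
procedure = {'learnable': expr, 'fast': compile_expr(expr)}
```

Learning operates on `procedure['learnable']`, which stays compact and easy to mutate; execution uses `procedure['fast']`, regenerated whenever the learnable version changes.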
As learning prod