On 3/20/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
There is one way you can form a coherent, working system from a congeries of random agents: put them in a marketplace. This has a fairly rigorous discipline of its own and most of them will not survive... and of course the system has
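Hall's marketplace discipline can be made concrete with a toy simulation. This is an illustration only, not Hall's design: the agent model (an agent is just a numeric guess), the RENT charge, the revenue rule, and the target value are all invented for the sketch. Random agents pay a fixed cost per round and earn revenue only when they perform; the unprofitable ones do not survive.

```python
import random

# Illustrative sketch of "market discipline" over random agents.
# All numbers here (RENT, revenue, TARGET) are invented for the example.
random.seed(1)

RENT = 1.0    # assumed fixed cost of existing per round
TARGET = 7    # the value agents must predict to earn revenue

# An "agent" is reduced to its guess; start with a congeries of random ones.
agents = [random.randint(0, 10) for _ in range(50)]

for _ in range(3):  # repeated rounds of market discipline
    survivors = []
    for guess in agents:
        revenue = 2.0 if abs(guess - TARGET) <= 1 else 0.0
        if revenue - RENT > 0:   # only profitable agents survive
            survivors.append(guess)
    agents = survivors

print(agents)  # only agents guessing 6, 7, or 8 remain
```

The point of the sketch is that no central designer picks the good agents; the rent/revenue balance does, which is the "most of them will not survive" part of the quote.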
This response will cover points raised by several previous posts in the emergence/agenda/structure of mind threads, by Goertzel, Hall, Wallace, etc.
What makes an intelligence general, to the extent that is possible, is that it does the right thing on new tasks or new situations, which it hadn't
Eric Baum wrote:
Hayek doesn't directly scale from a random start to an AGI architecture, inasmuch as the learning is too slow. But the same is true of any other means of EC or learning that doesn't start with some huge head start. It seems entirely reasonable to merge a Hayek-like architecture
On 3/20/07, Eric Baum [EMAIL PROTECTED] wrote:
This is the problem with Wallace's complaints. You actually want the machine [to do] something unpredicted, namely the right thing in unpredicted circumstances. It's true that it's hard and expensive to engineer/find an underlying compact
As has been pointed out in this thread (I believe by Goertzel and Hall), Minsky's approach in Society of Mind et seq. of adding large numbers of systems then begs the question: how will these things ever work together, and why should the system generalize?
How does adding auditory modules
On Tue, Mar 20, 2007 at 06:34:25PM +, Russell Wallace wrote:
wouldn't exist unless it generalized to new experiences. So while it's hard to engineer this, which might be called emergence,
It's not emergence, but rather failing gracefully and doing the right thing.
you will
I think that the concept that many of you are struggling to voice is:
Credit attribution is a really hard problem in AGI. Market economies solve that problem (with various difficulties, but . . . . :-)
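The market answer to credit attribution is the mechanism Baum used in his Hayek system; the following is a minimal sketch in that spirit, not Baum's actual code — the Agent class, the per-step `handles` field, and the bid and reward numbers are invented for illustration. Agents bid for control of each step of a task; each winner pays its bid to the previous winner, and the external reward goes to the last winner, so a profitable chain passes credit backward to the agents that set it up.

```python
# Hayek-style market credit assignment, toy version (numbers invented).
class Agent:
    def __init__(self, name, handles, bid):
        self.name = name
        self.handles = handles   # the step index this agent can act on
        self.bid = bid           # what it offers to pay for control
        self.wealth = 0.0

def run_episode(agents, steps, reward):
    prev = None
    for t in range(steps):
        bidders = [a for a in agents if a.handles == t]
        winner = max(bidders, key=lambda a: a.bid)  # auction for step t
        winner.wealth -= winner.bid                 # winner pays its bid...
        if prev is not None:
            prev.wealth += winner.bid               # ...to the previous winner
        prev = winner
    prev.wealth += reward                           # the world pays the last winner
    return prev

chain = [Agent("a", 0, 1.0), Agent("b", 1, 2.0),
         Agent("b2", 1, 0.5), Agent("c", 2, 3.0)]
run_episode(chain, steps=3, reward=4.0)
for ag in chain:
    print(ag.name, ag.wealth)
# a, b, and c (the winning chain) each end at +1.0: c receives the reward
# and has already passed value back through b to a; the losing bidder b2
# never wins an auction and earns nothing.
```

No global credit-assignment algorithm appears anywhere: each local payment is voluntary and self-interested, yet the chain of bids ends up rewarding exactly the agents whose early actions made the final reward possible.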
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe
On Monday 19 March 2007 17:30, Ben Goertzel wrote:
...
My own view these days is that a wild combination of agents is probably not the right approach, in terms of building AGI. Novamente consists of a set of agents that have been very carefully sculpted to work together in such a way as to