I wrote a book about the emergence of spontaneous creativity from
underlying complex dynamics.  It was published in 1997 with the title
"From Complexity to Creativity."  Some of the material is dated but I
still believe the basic ideas make sense.  Some of the main ideas were
reviewed in "The Hidden Pattern" (2006).  I don't have time to review
the ideas right now (I'm in an airport during a flight change doing a
quick email check) but suffice to say that I did put a lot of thought
and analysis into how spontaneous creativity emerges from complex
cognitive systems.  So have others.  It is not a total mystery, as
mysterious as the experience can seem subjectively.

-- Ben G

On Mon, Jun 30, 2008 at 1:32 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Ben,
>
> I agree, an evolved design has limits too, but the key difference between a 
> contrived design and one that is allowed to evolve is that the evolved 
> critter's intelligence is grounded in the context of its own 'experience', 
> whereas the contrived one's intelligence is grounded in the experience of its 
> creator, and subject to the limitations built into that conception of 
> intelligence. For example, we really have no idea how we arrive at 
> spontaneous insights (in the shower, say). A chess master suddenly 
> sees the game-winning move. We can be fairly certain that these insights 
> are often not the product of logical analysis. So if our conception of 
> intelligence fails to explain these important aspects, our designs based on 
> those conceptions will fail to exhibit them. An evolved intelligence, on the 
> other hand, is not limited in this way, and has the potential to exhibit 
> intelligence in ways we're not capable of comprehending.
>
> [btw, I'm using scare quotes around the word 'experience' as it applies to 
> AGI because it's a controversial word, and I hope to convey the basic idea 
> about experience without getting into technical details. I can get into 
> that if anyone thinks it necessary; I just didn't want to get bogged down.]
>
> Furthermore, there are deeper epistemological issues with the difference 
> between design and self-organization that get into the notion of autonomy as 
> well (i.e., designs lack autonomy to the degree they are specified), but I'll 
> save that for when I feel like putting everyone to sleep :-]
>
> Terren
>
> PS. As an aside, I believe spontaneous insight is likely to be an example of 
> self-organized criticality, which describes the dynamics of earthquakes, 
> avalanches, and the punctuated equilibrium model of evolution. Which is to 
> say, a sudden insight is like an avalanche of mental transformations, 
> triggered by some minor event but resulting from a build-up of dynamic 
> tension. Self-organized criticality is explained by the late Per Bak in 
> _How Nature Works_, a short, excellent read and a brilliant example of 
> scientific and mathematical progress in the realm of complexity.
>
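> [If anyone wants to see the mechanism in miniature, below is a rough,
> untested Python sketch of my own of Bak's sandpile model (not code from
> the book; the grid size, the toppling threshold of 4, and the number of
> dropped grains are just standard toy choices). Grains are added one at a
> time; usually little or nothing happens, but occasionally a single grain
> triggers a long cascade of topplings -- the same slow build-up and sudden
> release I'm suggesting underlies insight.]
>
> import random
>
> SIZE = 30          # grid is SIZE x SIZE (arbitrary toy size)
> THRESHOLD = 4      # a site topples once it holds 4 grains (standard rule)
>
> grid = [[0] * SIZE for _ in range(SIZE)]
>
> def drop_grain():
>     """Add one grain at a random site, relax the grid, return avalanche size."""
>     x, y = random.randrange(SIZE), random.randrange(SIZE)
>     grid[x][y] += 1
>     avalanche = 0
>     unstable = [(x, y)] if grid[x][y] >= THRESHOLD else []
>     while unstable:
>         i, j = unstable.pop()
>         if grid[i][j] < THRESHOLD:
>             continue
>         grid[i][j] -= THRESHOLD            # the site topples...
>         avalanche += 1
>         if grid[i][j] >= THRESHOLD:
>             unstable.append((i, j))        # ...and may topple again
>         for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
>             ni, nj = i + di, j + dj
>             if 0 <= ni < SIZE and 0 <= nj < SIZE:   # edge grains fall off
>                 grid[ni][nj] += 1          # ...and loads its neighbours
>                 if grid[ni][nj] >= THRESHOLD:
>                     unstable.append((ni, nj))
>     return avalanche
>
> sizes = [drop_grain() for _ in range(50000)]
> print("largest avalanche:", max(sizes), "topplings")
> print("avalanches bigger than 100 topplings:", sum(s > 100 for s in sizes))
>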
> --- On Mon, 6/30/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> I agree that all designed systems have limitations, but I also suggest
>> that all evolved systems have limitations.
>>
>> This is just the "no free lunch theorem" -- in order to perform better
>> than random search at certain optimization tasks, a system needs to
>> have some biases built in, and these biases will cause it to work
>> WORSE than random search on some other optimization tasks.
>>
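>> (To make that concrete, here is a rough, untested Python sketch of my
>> own, with arbitrary toy parameters: a bit-flip hill climber -- whose
>> built-in bias is that strings one flip apart have similar quality --
>> reliably finds the optimum of a smooth "count the ones" landscape,
>> where random search almost never does; on a deceptive "trap" landscape
>> the same bias leads it away from the optimum, and random search, with
>> its small but nonzero chance of stumbling onto it, now does better.)
>>
>> import random
>>
>> N = 12        # bits per string (kept small so the contrast is visible)
>> BUDGET = 300  # objective-function evaluations allowed per run
>> TRIALS = 200  # independent runs per (landscape, method) pair
>>
>> def onemax(bits):             # smooth landscape: count the 1s
>>     return sum(bits)
>>
>> def trap(bits):               # deceptive landscape: count the 0s,
>>     if all(bits):             # except that all-ones is the global optimum
>>         return N + 1
>>     return N - sum(bits)
>>
>> GLOBAL_OPT = {onemax: N, trap: N + 1}   # all-ones is best in both cases
>>
>> def random_search(f, budget):
>>     best = float("-inf")
>>     for _ in range(budget):
>>         bits = [random.randint(0, 1) for _ in range(N)]
>>         best = max(best, f(bits))
>>     return best
>>
>> def hill_climb(f, budget):
>>     # random-restart bit-flip hill climber: keep a flip only if it improves
>>     best = float("-inf")
>>     evals = 0
>>     while evals < budget:
>>         bits = [random.randint(0, 1) for _ in range(N)]
>>         value = f(bits); evals += 1
>>         best = max(best, value)
>>         stuck = 0
>>         while stuck < 2 * N and evals < budget:
>>             i = random.randrange(N)
>>             bits[i] ^= 1
>>             new = f(bits); evals += 1
>>             if new > value:
>>                 value, stuck = new, 0
>>             else:
>>                 bits[i] ^= 1      # undo the unhelpful flip
>>                 stuck += 1
>>             best = max(best, value)
>>     return best
>>
>> for f in (onemax, trap):
>>     for method in (hill_climb, random_search):
>>         hits = sum(method(f, BUDGET) == GLOBAL_OPT[f] for _ in range(TRIALS))
>>         print(f.__name__, method.__name__,
>>               "found the global optimum in", hits, "of", TRIALS, "runs")
>>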
>> No AGI based on finite resources will ever be **truly** general, be
>> it an engineered or an evolved system.
>>
>> Evolved systems are by no means immune to running into dead ends ...
>> their adaptability is far from infinite ... the evolutionary process
>> itself may be endlessly creative, but in that sense so may be the
>> self-modifying process of an engineered AGI ...
>>
>> -- Ben G
>>
>> On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam
>> <[EMAIL PROTECTED]> wrote:
>> >
>> > --- On Mon, 6/30/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>> >> but I don't agree that predicting **which** AGI designs can lead
>> >> to the emergent properties corresponding to general intelligence,
>> >> is pragmatically impossible to do in an analytical and rational way ...
>> >
>> > OK, I grant you that you may be able to do that. I believe that we
>> > can be extremely clever in this regard. An example of that is an
>> > implementation of a Turing Machine within the Game of Life:
>> >
>> > http://rendell-attic.org/gol/tm.htm
>> >
>> > What a beautiful construction. But it's completely contrived. What
>> > you're suggesting is equivalent, because your design is contrived by
>> > your own intelligence. [I understand that within the Novamente idea
>> > there is room for non-deterministic (for practical purposes) behavior,
>> > so it doesn't suffer from the usual complexity-inspired criticisms of
>> > purely logical systems.]
>> >
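>> > (For anyone who hasn't seen it: the Game of Life substrate is nothing
>> > but the single update rule in the rough, untested Python sketch below
>> > -- birth on exactly three live neighbours, survival on two or three.
>> > Everything in Rendell's Turing machine is painstakingly arranged on
>> > top of that one rule, which is exactly what I mean by contrived.)
>> >
>> > from collections import Counter
>> >
>> > def step(live):
>> >     # one generation: count each cell's live neighbours, then apply
>> >     # the birth/survival rule
>> >     counts = Counter(
>> >         (x + dx, y + dy)
>> >         for x, y in live
>> >         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
>> >         if (dx, dy) != (0, 0)
>> >     )
>> >     return {cell for cell, n in counts.items()
>> >             if n == 3 or (n == 2 and cell in live)}
>> >
>> > # a glider, the classic moving pattern
>> > cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
>> > for _ in range(4):
>> >     cells = step(cells)
>> > print(sorted(cells))   # the same glider, shifted one cell diagonally
>> >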
>> > But whatever achievement you make, it's just one particular design
>> > that may prove effective in some set of domains. And there's the rub -
>> > the fact that your design is at least partially static will limit its
>> > applicability in some other set of domains. I make this argument more
>> > completely here:
>> >
>> > http://www.machineslikeus.com/cms/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
>> > or http://tinyurl.com/3coavb
>> >
>> > If you design a robot, you limit its degrees of freedom, and there
>> > will be environments it cannot get around in. By contrast, if you
>> > have a design that is capable of changing itself (even if that means
>> > from generation to generation), then creative configurations can be
>> > discovered. The same basic idea works in the mental arena as well. If
>> > you specify the mental machinery, there will be environments it
>> > cannot get around in, so to speak. There will be important ways in
>> > which it is unable to adapt. You are limiting your design by your own
>> > intelligence, which, though considerable, is no match for the
>> > creativity manifest in a single biological cell.
>> >
>> > Terren
>> >
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> Director of Research, SIAI
>> [EMAIL PROTECTED]
>>
>> "Nothing will ever be attempted if all possible
>> objections must be
>> first overcome " - Dr Samuel Johnson



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be
first overcome " - Dr Samuel Johnson


