Well, as the system builds up and resolves deductions from the original
information, these deductions will themselves become fodder for additional
deductions. The general structure of a car can be represented relationally,
with the placement of the engine interdependent on the placement of the
front wheels, the passenger compartment, etc. The system would drift
through various configurations, shifting components around until a
reasonably consistent design emerged -- a local maximum in the space of
design permutations. Each configuration might violate particular
constraints observed in general car design to different degrees, and the
system would iteratively adjust the positions of other elements to minimize
these constraint violations. In this way, the relational representation of
the car's structure would evolve until it made sufficient sense according
to the observed constraints. The entire problem space need not be searched,
because the system only searches and evolves locally in directions that
reduce inconsistencies. This would not guarantee a globally optimal
solution, but would guarantee that a locally optimal solution could be
found, which I think is consistent with my observations of short-term human
thought processes.
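To make the relaxation idea concrete, here is a minimal sketch. Everything in it is invented for illustration: components are reduced to 1-D positions along the car's length, and the "observed constraints" are hand-written preferred distances between components. The search just nudges one component at a time and keeps any change that reduces total constraint violation, settling into a local minimum rather than searching the whole space:

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical components, each reduced to a position along the car's length.
components = {"front_wheels": 0.0, "engine": 3.0, "cabin": 1.0, "rear_wheels": 2.0}

def violation(cfg):
    """Total constraint violation: lower means a more consistent design."""
    cost = 0.0
    cost += abs(cfg["engine"] - cfg["front_wheels"] - 1.0)  # engine just behind front wheels
    cost += abs(cfg["cabin"] - cfg["engine"] - 2.0)         # cabin behind engine
    cost += abs(cfg["rear_wheels"] - cfg["cabin"] - 2.0)    # rear wheels under rear of cabin
    return cost

def relax(cfg, steps=10000, delta=0.05):
    """Greedy local search: nudge one component, keep only improvements."""
    cfg = dict(cfg)
    for _ in range(steps):
        name = random.choice(list(cfg))
        trial = dict(cfg)
        trial[name] += random.choice([-delta, delta])
        if violation(trial) < violation(cfg):
            cfg = trial
    return cfg

settled = relax(components)  # a locally consistent configuration
```

Since only improving moves are ever accepted, the final configuration is guaranteed to be no worse than the starting one, but nothing guarantees it is globally optimal, which is exactly the trade-off described above.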

You are familiar with Boolean networks
<http://en.wikipedia.org/wiki/Boolean_network>, correct? A Boolean network
can be represented as a directed graph, where vertices represent Boolean
functions, and edges indicate which vertices' states from the previous
iteration feed into each vertex's Boolean function. Such a network, when
initialized to a particular state, will evolve
until it stumbles into an attractor. By letting each vertex correspond to
an observed feature of (or proposition satisfied by) an input sample, and
then adjusting the topology to make the network state consistent only (or
primarily) for observed input samples, a Boolean network can be used as a
primitive autoassociative memory. (BTW, I see no reason why we cannot
generalize from Boolean values to the real interval [0, 1] here, allowing
for soft computing.) Once the network has been so trained, it should be
possible to fix the values of a few vertices, those corresponding to an
incomplete description, and allow the rest of the network to evolve
normally; once it has converged to an attractor state, it will have
retrieved a more complete description that is maximally consistent with
earlier observations. The attractor state that is reached by the system given a
counterfactual input may be one that is never reached from an actual
observed input, meaning the system has found its way into a new stable
configuration, something that has never actually been observed but makes
the most sense to the system according to the principles it has observed
from real inputs.
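Here is a toy sketch of the retrieval step. The topology and Boolean functions below are hand-picked for illustration, not learned from data, and the vertex names are invented; the point is only the mechanism of clamping a partial description and iterating to an attractor:

```python
# Toy Boolean network as a crude autoassociative memory. Each vertex's next
# state is a Boolean function of other vertices' current states.
functions = {
    "is_car":     lambda s: s["has_engine"] or s["has_wheels"],
    "has_engine": lambda s: s["is_car"],
    "has_wheels": lambda s: s["is_car"],
    "is_boat":    lambda s: not s["has_wheels"] and s["has_hull"],
    "has_hull":   lambda s: s["is_boat"],
}

def evolve(state, clamped, max_iters=50):
    """Synchronously update unclamped vertices until a fixed point (attractor)."""
    state = dict(state)
    state.update(clamped)
    for _ in range(max_iters):
        nxt = {v: (clamped[v] if v in clamped else f(state))
               for v, f in functions.items()}
        if nxt == state:
            return state  # converged to an attractor
        state = nxt
    return state  # cycle attractors would stop here without converging

# Clamp the partial description "is a car" and let the rest settle:
start = {v: False for v in functions}
completed = evolve(start, clamped={"is_car": True})
# completed now also asserts "has_engine" and "has_wheels", but not "is_boat"
```

Note this toy version only detects fixed-point attractors; a real implementation would also have to recognize cycle attractors, and the training step (adjusting topology to make observed samples attractors) is left out entirely.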

I think I may build one, as a proof of concept.



On Wed, Jan 1, 2014 at 8:54 AM, John Rose <[email protected]> wrote:

> A part here that is left unknown is “and then feeding them into A to see
> what would happen”. How is that described?
>
>
>
> I imagine some sort of reasoning engine, which accepts a partial
> description and then adds to that description based on what it can infer
> from the initial information, combined with knowledge derived from
> experience. So for example, say the system consistently observes that cars
> have engines. Then, when you feed it the initial description, "a car with
> five wheels," it recognizes the fact that the entity described has the
> property, "is a car," and that this is highly predictive of the entity also
> having the property, "has an engine." Combining the descriptive
> information, "is a car," with the experience-based probabilistic deductive
> rule, "is a car => has an engine", the system then chooses to add, "has an
> engine," with a probability proportional to the observed likelihood of this
> outcome. Other deductions could then be triggered by this new addition to
> the description, perhaps in combination with other elements already in the
> description.
>
>
>
> Would this be where a “self” might be involved?, put simplistically, for
> more complex creative attempts? Or, I can imagine many of these processes
> running in various levels and combinations systematically working on pieces
> of some creative challenge in unison. Or maybe a hierarchy of similar
> processes where the creative complexity would rely on the structure of the
> hierarchy graph.
>
>
>
> I'm not sure I follow you here. Could you explain?
>
>
>
>
>
>
>
> I just mean how would it generate a creative potential solution to a
> problem that had a solution space of a vast amount of possibilities. So in
> this case if a fifth wheel was added how does that change the positioning
> of all the wheels from the known positioning of the typical four?
> Creatively and realistically it cannot just randomly choose. There are many
> physical factors involved.
>
>
>
> John
>
>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
