Hi,
1) Would anyone currently putting energy into the foundations of
probability discussion be willing to say that this hypothetical
human mechanism could *still* be meaningfully described in terms of
a tractable probabilistic formalism (by, e.g., transforming or
approximating all the nasty nonlinearity I just introduced into a
simpler, more analytic form, without losing anything)?
[My intuition on this question: no way.]
My intuition on this issue is the complete opposite of yours.
I think that a system like you described above could likely be
described, in terms of its **action selections** and patterns
therein, using probability theory.
This is why, in the theoretical hypotheses I proposed, I was talking
only about probabilistic rules observed by an external agent M2 to
govern a given agent M1's action-selections. I was not making any
commitments that M1 has to explicitly use probability theory
internally. I think that explicitly using probability theory
internally is only one among many ways to wind up approximately using
probability theory on the level of patterns in one's action-selections.
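To make the distinction concrete, here is a minimal sketch (the logistic-map agent, the threshold rule, and the Markov fit are all my own illustrative choices, not anything from your hypothetical): M1's internals are pure nonlinear dynamics with no probability theory anywhere inside, yet M2, just by counting patterns in M1's action-selections, arrives at an ordinary probabilistic description of them.

```python
def m1_actions(x0=0.4, r=3.9, steps=10_000):
    """M1: deterministic chaotic dynamics (logistic map);
    the action is just a threshold on the internal state --
    no probabilities used internally."""
    x, actions = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)          # nonlinear update, no randomness
        actions.append(1 if x > 0.5 else 0)
    return actions

def m2_model(actions):
    """M2: an external observer fitting transition frequencies,
    i.e. a first-order Markov model P(next action | current action)."""
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(actions, actions[1:]):
        counts[(a, b)] += 1
    model = {}
    for a in (0, 1):
        total = counts[(a, 0)] + counts[(a, 1)]
        model[a] = {b: counts[(a, b)] / total for b in (0, 1)}
    return model

model = m2_model(m1_actions())
```

The point of the sketch is only that the probabilistic structure lives in the *patterns of action-selection* as seen by M2, not in M1's machinery.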
Also, I don't know why you contrast "analytic" with "nonlinear."
Nonlinear equations are analytic constructs, just as surely as
probabilistic equations. And probabilistic relationships can be
nonlinear.
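A trivial illustration of that last point (the particular functional form here is an arbitrary example of mine, nothing more): a perfectly legitimate probabilistic relationship that is manifestly nonlinear in its input.

```python
import math

def p_action(x):
    """P(action | stimulus x) as a logistic of a cubic:
    a valid probability (always in [0, 1]) that is
    nonlinear in x."""
    return 1.0 / (1.0 + math.exp(-(x ** 3)))
```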
2) Suppose that this really *is* the way the human cognitive
system works, and that the reason it works this way is that
evolution has figured out (pardon the teleology: you know what I
mean) that any attempt to build systems that manipulate more
tractable types of "concepts," using simpler types of reasoning
formalisms that actually do allow things to be interpreted in a
high-level way, simply does not work. In other words, such systems
just do not get to be intelligent (for whatever reason.... but
probably because they can never learn those horribly vague, messy-
looking concepts that don't fit very nicely into logical
formalisms, but which are vital to the development of the system)?
My actual question, then: Suppose it just happens not to be
possible to do it any other way than with all the messy, nonlinear
mechanisms described above: what, in that case, would be the use
in trying to keep as close as you can to a formal, tractable
approach to AGI, of the sort that would allow you to prove at least
something about the way the not-quite-probabilities are handled?
It **could** be that the only way a system can give rise to
probabilistically sensible patterns of action-selection, given
limited computational resources, is to do stuff internally that is
based on nonlinear dynamics rather than probability theory.
But, I doubt it...
The human brain may work that way, but it is not the only (nor the
ideal!) cognitive system...
Novamente actually combines probabilistic inference with nonlinear
dynamics internally, which I have reason to believe is the most
effective way to give rise to probabilistically sensible patterns of
action-selection.
-- Ben
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303