Russell Wallace wrote:

I don't think this is an accurate paraphrase of Mike's statement. "X
is secret sauce" implies X to be _both necessary and sufficient_ (or
at least that the other ingredients are trivial compared to X) - a
type of claim AI has certainly seen plenty of. But Mike's claim, if I
understand it correctly, is that visual/spatial capability is
necessary but not sufficient for AGI. (A position I also hold.)

There is no clear distinction between the perceptual and symbolic
paradigms. The more intelligent an AI gets, the more
perceptual-deductive methods it will probably contain.

Suppose you want to program a symbolic AI to move and rotate a robotic
arm. Doing this purely with discrete, atomic symbols would be difficult
and computationally intensive, because every possible angle would need
its own individual symbol.

The solution is to use an abstraction, such as a floating-point number,
to represent the angle of the arm. Note, however, that this represents
the data continuously and uniformly, like perception, rather than as
discrete symbols. The improvement is a step towards perceptual
processing.
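
As a rough sketch of the contrast (the symbol names and the 15-degree
granularity are purely illustrative):

    # Purely symbolic: every reachable angle needs its own atomic symbol,
    # and every move needs its own transition rule.
    SYMBOLIC_ANGLES = ["ANGLE_0", "ANGLE_15", "ANGLE_30", "ANGLE_45"]  # ...
    ROTATE_CW = {"ANGLE_0": "ANGLE_15", "ANGLE_15": "ANGLE_30"}        # ...

    # Continuous abstraction: one float covers every angle uniformly,
    # and rotation by any amount is a single arithmetic operation.
    angle = 0.0      # degrees
    angle += 15.0    # no new symbols or rules needed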

So there is no clear distinction between symbolic and perceptual
representations. The more perceptual-like a system gets (the more
numeric, fuzzy variables it has), the more efficiently it can represent
certain tasks (like angles).

Discrete symbolic representations are chosen to capture the "important"
characteristics of continuous and ambiguous objects. They are
*inductive* because they categorize only the "common" attributes of a
continuous representation; they are not exhaustive, representing only a
*subset* of the characteristics of continuous and ambiguous data.

Probabilistic symbolic reasoning is an improvement because it is more
perceptual-like: it represents data numerically, as perception does. As
in the robotic arm example, increasing the angle is easy when the angle
is represented numerically, rather than by convoluted symbolic
representations covering every case and combination. The angle can be
incremented or decremented as easily as humans move objects in their
mind's eye.
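
A small sketch of the probabilistic version (the Gaussian belief and the
noise figure are assumptions for illustration, not anyone's actual
design): the angle is a numeric belief, and "rotating" just shifts it.

    # Belief about the arm's angle: a mean and a variance, in degrees.
    mean, var = 30.0, 4.0

    def rotate(mean, var, delta, motion_noise=1.0):
        # Incrementing the angle is one addition; uncertainty grows a
        # little with each motion.
        return mean + delta, var + motion_noise

    mean, var = rotate(mean, var, 15.0)   # belief is now ~45 degrees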

Thus, if a symbolic/associational AI stores some temporary non-symbolic
variables that it can modify, those variables are analogous to
perceptual reasoning: they can be imagined and modified easily, just as
humans imagine and modify their visual perception.
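
A minimal sketch of that idea (the state variables are hypothetical):
the AI perturbs a temporary copy of its numeric state, i.e. "imagines",
and only commits the change if it likes the result.

    import copy

    state = {"angle": 30.0, "grip": 0.5}   # the AI's current numeric state

    def imagine(state, tweak):
        trial = copy.deepcopy(state)       # temporary, modifiable scratch copy
        trial.update(tweak)
        return trial

    trial = imagine(state, {"angle": 45.0})
    # ... evaluate trial; the real state is untouched unless we commit it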

If you make a symbolic AGI use mathematics to represent
three-dimensional data, that is *already* a perceptual-like
representation, because the mathematical variables or coordinates can
be incremented or decremented in a domain-specific way, just like a
human imagining objects.
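
For instance, a sketch of "mental rotation" over numeric coordinates
(the step size is arbitrary): each small increment is one arithmetic
update, not a new symbolic case.

    import math

    def rotate_z(p, deg):
        # Standard rotation of point p about the z-axis by `deg` degrees.
        r = math.radians(deg)
        x, y, z = p
        return (x * math.cos(r) - y * math.sin(r),
                x * math.sin(r) + y * math.cos(r),
                z)

    point = (1.0, 0.0, 0.0)
    for _ in range(6):
        point = rotate_z(point, 15.0)   # six imagined 15-degree steps = 90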
