In principle -- of course -- once we have an AGI, the AGI will be able to build
narrow AI systems better than we can, for those cases where narrow AI systems
are still appropriate...
Lacking the AGI, however, one has to design these hacks based on one's
knowledge of the application domain, as well as one's knowledge of the PTL
framework into which the hacks are being fit...
This
kind of hacking is standard narrow-AI practice. What was an interesting
realization for me was that the math of PTL applies more nicely and easily
when one has grounded relationships rather than ungrounded ones.
Of course this isn't sooooo shocking, since PTL was designed to serve as the
inference component of a general intelligence system, and we're just applying it
to narrow-AI projects to earn bucks along the way.
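To make the grounded/ungrounded distinction concrete, here is a minimal sketch (hypothetical illustration, not actual PTL code): a grounded relationship comes with a set of direct observations, so its probability can be estimated by simple frequency counting, whereas an ungrounded relationship pulled from a database arrives only with a stated strength and no observation set backing it up.

```python
# Hypothetical sketch of grounded vs. ungrounded truth-value estimation.
# (Illustrative only; the function name and data are assumptions, not PTL's API.)

def grounded_truth_value(observations):
    """Frequency estimate: fraction of observations in which the relation held."""
    positive = sum(1 for held in observations if held)
    return positive / len(observations)

# Grounded case: 100 direct observations of the relation, holding in 80 of them.
obs = [True] * 80 + [False] * 20
strength = grounded_truth_value(obs)   # 0.8, backed by a known sample size

# Ungrounded case: a database asserts a strength of 0.8, but there is no
# observation set behind it -- we must take the number on faith, with no
# count to calibrate our confidence in it.
ungrounded_strength = 0.8
```

The point the sketch makes is that in the grounded case the evidence count is available to the inference math, while in the ungrounded case it must be guessed or imposed from outside.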
Short-term practical app areas involving grounded relationships would be
robotics (which we're not working on, though it would be fun) and scientific
data analysis (which we are working on, but much of our work in this area
involves analyzing quantitative data together with ungrounded knowledge from
databases, so that the quantitative data provides partial grounding for the
database knowledge).
ben