Jim,
A crucial part of the mistake you’re making – which is standard here – is 
looking at conceptualisation (and indeed all AGI) from a position of 
“omniscience” – FULL AND (damn near) PERFECT KNOWLEDGE – from the point of 
view of someone who has already come to know everything there is to know 
about, say, patterns, or indeed any other category, and is now merely getting 
his thoughts in order to solve the odd further problem.
This is how narrow AI operates – someone has already worked out all the 
aspects of solving a rational problem (like chess), and the programmer merely 
has to order and organize them.
That’s not AGI – and that’s not real life for any real-world agent.
An AGI works from MINIMAL – EXTREMELY PARTIAL, LIMITED – KNOWLEDGE of any 
given subject. It is someone who is ***getting to know the world*** – who has 
to form concepts about the world from very limited information, and “find his 
way around the world” as he goes along.
You think an AGI is merely a more complex narrow AI – an agent who, à la the 
Travelling Salesman Problem, has a complete map of the town or area and merely 
has to work out the most efficient routes.
An AGI is instead someone like you, who has to get to know every town or 
territory as he goes along – and has to be able to find his way around without 
a map.
An AGI is someone who can do a CREATIVE task about which he knows very little 
to begin with – someone I can ask: “go into that room you’ve never been in 
before, and clear it up...” You don’t start with a full knowledge of all the 
factors involved.
You think AGI is no different from narrow AI – you give your robot a complete 
map of that room and every piece of furniture beforehand, and, like existing 
programs, he merely works out the most efficient ways of navigating and 
stacking the furniture.
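The map contrast can be made concrete in a few lines of code. Below is a minimal sketch (the toy town and all function names are hypothetical, invented purely for illustration): a classical planner is handed the complete map up front and searches it, while an exploring agent only learns the neighbours of a place at the moment it actually arrives there.

```python
from collections import deque

# A toy "town" as an adjacency map. The narrow-AI planner sees the
# whole map up front; the exploring agent never does.
TOWN = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def plan_with_full_map(town, start, goal):
    """Classical planning: breadth-first search over the complete, known map."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path           # shortest route, computable only because
        for nxt in town[path[-1]]:  # the whole map is known in advance
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def explore_without_map(sense, start, goal):
    """Online exploration: 'sense' reveals a place's neighbours only on arrival."""
    visited = []
    stack = [start]
    seen = {start}
    while stack:
        here = stack.pop()
        visited.append(here)
        if here == goal:
            return visited
        for nxt in sense(here):   # knowledge arrives incrementally, as it would
            if nxt not in seen:   # for an agent walking an unfamiliar town
                seen.add(nxt)
                stack.append(nxt)
    return visited

print(plan_with_full_map(TOWN, "A", "E"))
print(explore_without_map(lambda place: TOWN[place], "A", "E"))
```

The first function can guarantee an optimal route precisely because it has full knowledge; the second can only guarantee that it eventually gets there, taking whatever route its partial, growing knowledge allows.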
Hey, we’ve got narrow AIs that can do that.
AGI is about sending robots into uncharted territory and having them sort 
things out as they go along – without our having to program their every step 
beforehand. We want robots that, like living creatures, can *explore* the 
world from a very limited brief and a few instructions at most.
Only with FULL KNOWLEDGE of the complete set of factors involved in a problem 
can there be complexity. AGI always proceeds from MINIMAL KNOWLEDGE about a 
problem – with nothing like a definitive set of factors – and so there is no 
complexity.
As would-be AGI system-builders here, we are all proceeding from very partial 
knowledge of what an AGI machine will entail – zero complexity. Building a 
mechanical AGI is merely a more difficult form of walking around a new room or 
field or town, or handling a new object.
Rational/narrow AI problems: full knowledge (of the set of factors) & 
complexity. Creative/AGI problems: minimal knowledge (& no set of factors) & 
no complexity.
P.S. “A fundamental assumption made by classical AI planners is that there is 
no uncertainty in the world; the planner has full knowledge of the conditions 
under which the plan will be executed.”
http://arxiv.org/pdf/cs.AI/9605106.pdf 
https://www.google.co.uk/search?q=A.I.+%22full+knowledge%22+%22limited+knowledge%22&sugexp=chrome,mod=4&sourceid=chrome&ie=UTF-8
 
About 11,500 results 
 

