Comments inserted below (I always appreciate Matt's posts).

On 2/20/20 8:05 PM, Matt Mahoney wrote:
> The goal of AGI is to automate human labor. It requires solving hard
> problems like vision, language, robotics, art, and modeling human
> behavior.
We may have different ideas of the goal of AGI. Under the definition "automate human labor", we have been doing AGI for hundreds of years: the steam shovel replaces human labor; the telegraph replaces the horse and rider...

The "I" in AGI means intelligence, right?  Intelligence has the attribute of rightness.  That is, a positive connotation that intelligence does the right thing.  We may not like this attribute, because it is what gives us so much problem - I notice it wasn't included in the the "hard problems" listed above.  Actually, it IS the problem for AGI.  All the rest is engineering, mechanical and electrical - actuators under coordinated control, chefs working out the recipes.


> We don't need to define intelligence to solve these problems. If you
> want to define intelligence as survival, then that is not what we want
> to build. We want the type of intelligence that can produce
> antibiotics, not the type that learns to evade them. It is also not
> the Turing test, because that requires reproducing human weaknesses as
> well as skills. If you want to define intelligence as the goal, then
> the appropriate measure is dollars per hour.
The appropriate measure is (drum roll...) "better choices per moment."  We don't like that measure because it is hard to say what is better.  Better for whom?  I would be thrilled to converse with a "wise" intelligence that could explain why one way is better than another, or even to ask "how do you begin to know what is better?"  Of course the answer starts with "it depends".
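
To make the measure concrete, here is a minimal sketch of what a "better choices per moment" score could look like. Every name in it is hypothetical, and the contested part, the is_better judge, is simply assumed as an input rather than solved:

import random
import time
from typing import Any, Callable, Sequence

def better_choices_per_moment(
    agent_pick: Callable[[Sequence[Any]], Any],     # the agent under test
    baseline_pick: Callable[[Sequence[Any]], Any],  # e.g. a random chooser
    is_better: Callable[[Any, Any], bool],          # the hard part: is a better than b?
    situations: Sequence[Sequence[Any]],            # option sets to decide among
) -> float:
    """Rate of strictly better-than-baseline choices per second."""
    better = 0
    start = time.perf_counter()
    for options in situations:
        if is_better(agent_pick(options), baseline_pick(options)):
            better += 1
    elapsed = time.perf_counter() - start
    return better / elapsed if elapsed > 0 else float("inf")

# Toy usage: options are numbers and "better" just means larger.
situations = [[random.random() for _ in range(5)] for _ in range(10_000)]
score = better_choices_per_moment(
    agent_pick=max,                        # always picks the largest option
    baseline_pick=random.choice,
    is_better=lambda a, b: a > b,          # toy judge; the real one is the open problem
    situations=situations,
)
print(f"better choices per second: {score:.0f}")

Note that everything contentious (better for whom? over what horizon?) lives inside is_better; the metric itself is trivial once that judge exists, which is exactly the point.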

The solution to AGI will need to start with "this is how I determine what is better..."

Stan

