J. Andrew Rogers wrote:
On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote:
What could be "compelling" about a project? (Novamente or any other).
Artificial Intelligence is not a field that rests on a firm
theoretical basis, because there is no science that says "this design
should produce an intelligent machine because intelligence is KNOWN to
be x and y and z, and this design unambiguously will produce something
that satisfies x and y and z".
Every single AGI design in existence is a Suck It And See design. We
will only know whether a design is correct once it is built and it works.
Before that, the best that any outside investor can do is use their
gut instinct to decide whether they think that it will work.
Even if every single AGI design in existence is fundamentally broken
(and I would argue that a fair amount of AGI design is theoretically
correct and merely unavoidably intractable), this is a false
characterization. And at a minimum, it should be "no mathematics"
rather than "no science".
Mathematical proof of the validity of a new technology is largely
superfluous to whether or not a venture gets funded.
Investors are not mathematicians, at least not in the sense that
mathematical certainty of the correctness of the model would be
compelling. If they trust the person enough to invest in them, they
will generally trust that the esoteric mathematics behind the venture
are correct as well. No one actually tries to understand the
mathematics, even though they may give it a cursory glance -- that
is your job.
Having had to sell breakthroughs in theoretical computer science before
(unrelated to AGI), I would make the observation that investors in
speculative technology do not really put much weight on what you "know"
about the technology. After all, who are they going to ask if you are
the presumptive leading authority in that field? They will verify that
the current limitations you claim to be addressing exist and will want
concise qualitative answers as to how these are being addressed that
comport with their model of reality, but no one is going to dig through
the mathematics and derive the result for themselves. Or at least, I am
not familiar with cases that worked differently than this. The real
problem is that most AGI designers cannot answer these basic questions
in a satisfactory manner, which may or may not reflect what they "know".
You are addressing (interesting and valid) issues that lie well above
the level at which I was making my argument, so unfortunately they miss
the point.
I was arguing that whenever a project claims to be doing "engineering"
there is always a background reference that is some kind of science or
mathematics or prescription that justifies what the project is trying to
achieve:
1) Want to build a system to manage the baggage handling in a large
airport? Background prescription = a set of requirements that the flow
of baggage should satisfy.
2) Want to build an aircraft wing? Background science = first and
foremost the physics of air flow, along with specific criteria that must
be satisfied.
3) Want to send people on an optimal trip around a set of cities?
Background mathematics = a precise statement of the travelling salesman
problem (written out below).
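To make concrete what I mean by "a precise statement", here is the
standard formulation of the travelling salesman problem, written out in
full (this is the textbook version, included purely as an illustration):

  Given a finite set of cities $C = \{c_1, \ldots, c_n\}$ and a
  distance $d(c_i, c_j) > 0$ for every pair of cities, find a
  permutation $\pi$ of $\{1, \ldots, n\}$ that minimizes the total
  tour length

    $\sum_{i=1}^{n-1} d(c_{\pi(i)}, c_{\pi(i+1)}) + d(c_{\pi(n)}, c_{\pi(1)})$

Every term in that statement is defined, and any proposed tour can be
checked against the criterion. Nothing remotely like this exists for
"intelligence".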
No matter how many other cases you care to list, there is always some
credible science, mathematics, or common-sense prescription lying behind
the engineering project.
Here, for contrast, is an example of an engineering project behind which
there was NO credible science or mathematics or prescription:
4*) Find an alchemical process that will lead to the philosophers' stone.
Alchemists knew what they wanted - kind of - but there was no credible
science behind what they did. They were just hacking.
Artificial Intelligence research does not have a credible science behind
it. There is no clear definition of what intelligence is; there is only
the living example of the human mind, which tells us that some things
are "intelligent".
This is not about mathematical proof; it is about having a credible,
accepted framework that allows us to say that we have already agreed
that intelligence is X, so that, starting from that position, we can do
some engineering to build a system that satisfies the criteria inherent
in X -- and thereby build an intelligence.
Instead what we have are AI researchers who have gut instincts about
what intelligence is, and from that gut instinct they proceed to hack.
They are, in short, alchemists.
And in case you are tempted to do what (e.g.) Russell and Norvig do in
their textbook, and claim that the Rational Agent framework plus
logical reasoning is the scientific framework on which an idealized
intelligent system can be designed, I should point out that this concept
is completely rejected by most cognitive psychologists: they point out
that the "intelligence" to be found in the only example of an
intelligent system looks very much like it does not conform to this
theory of what intelligence is. Many of them take the position that
logical reasoning is actually a very high-level process that is
dependent on a vast number of lower level processes. That would mean
that the Rational Agent framework fails the test of being *accepted* as
valid for the one example of a functioning intelligence.
All your points about what investors do or do not want to hear are at a
level well above this one. Investors know (gut instinct) a bunch of
crazy alchemists ;-) when they see them, and that is why they are not
throwing money at AGI projects.
It is not that these investors understand the abstract ideas I just
described; it is that they have a gut feel for the rate of progress and
the signs of progress and the type of talk that they should be
encountering if AGI had a mature science behind it. Instead, what they
get is a feeling from AGI researchers that each one is doing the following:
1) Resorting to a bottom line that amounts to "I have a really good
personal feeling that my project really will get there", and
2) Examples of progress that look like an attempt to dress a doughnut
up as a wedding cake.
Hence the problem.
Richard Loosemore