Matt wrote,

> There seems to be a lot of effort to implement reasoning in knowledge
> representation systems, even though it has little to do with how we actually
> think.


Please note that not all of us in the AGI field are trying to closely
emulate human thought.  Human-level thought does not imply closely
human-like thought.



> We focus on problems like:
>
> All men are mortal. Socrates is a man. Therefore ___?
>
> The assumed solution is to convert it to a formal representation and apply
> the rules of logic:
>
> For all x: man(x) -> mortal(x)
> man(Socrates)
> => mortal(Socrates)
>
> which has 3 steps: convert English to a formal representation (hard AI),
> solve the problem (easy), and convert back to English (hard AI).


This is a silly example, because it is already solvable using existing AI
systems.  We solved problems like this using RelEx+PLN, in a prototype
system built on top of the NCE, a couple of years ago.  Soon OpenCog will
have the mechanisms to do that sort of thing too.
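For concreteness, here is a minimal sketch (Python, purely illustrative, and
not the actual RelEx+PLN machinery) of the "easy" middle step Matt mentions:
once the English has been turned into a formal representation, the inference
itself is a single application of modus ponens inside a trivial
forward-chainer.  The predicate names and knowledge-base layout are my own
assumptions for the example.

# Toy knowledge base, as it would stand after the (hard) English-to-logic step.
rules = [("man", "mortal")]          # for all x: man(x) -> mortal(x)
facts = {("man", "Socrates")}        # man(Socrates)

def forward_chain(rules, facts):
    """Repeatedly apply modus ponens until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for pred, arg in list(derived):
                if pred == antecedent and (consequent, arg) not in derived:
                    derived.add((consequent, arg))
                    changed = True
    return derived

print(forward_chain(rules, facts))
# -> {('man', 'Socrates'), ('mortal', 'Socrates')}, i.e. mortal(Socrates)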



>
>
> Sorry, that is not a solution. Consider how you learned to convert natural
> language to formal logic. You were given lots of examples and induced a
> pattern:
>
> Frogs are green = for all x: frog(x) -> green(x).
> Fish are animals = for all x: fish(x) -> animal(x).
> ...
> Y are Z: for all x: Y(x) -> Z(x).
>
> along with many other patterns. (Of course, this requires learning
> semantics first, so you don't confuse examples like "they are coming").
>
> But if you can learn these types of patterns then with no additional effort
> you can learn patterns that directly solve the problem...
>
> Frogs are green. Kermit is a frog. Therefore Kermit is green.
> Fish are animals. A minnow is a fish. Therefore a minnow is an animal.
> ...
> Y are Z. X is a Y. Therefore X is a Z.
> ...
> Men are mortal. Socrates is a man. Therefore Socrates is mortal.
>
> without ever going to a formal representation. People who haven't studied
> logic or its notation can certainly learn to do this type of reasoning.
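For concreteness, the surface-pattern route described in the quote can be
sketched directly over English strings, with no intermediate logical form.
This is only a toy illustration of the idea; the regular expressions, the
vocabulary, and the naive plural handling are my own assumptions, not a real
parser or learned pattern set.

import re

def syllogize(general, particular):
    m1 = re.match(r"(\w+) are (\w+)\.", general)        # "Y are Z."
    m2 = re.match(r"(\w+) is a (\w+)\.", particular)     # "X is a Y."
    if not (m1 and m2):
        return None
    y, z = (s.lower() for s in m1.groups())
    x, y_singular = m2.group(1), m2.group(2).lower()
    if y_singular + "s" == y:       # crude plural check; "man"/"men" would
        return "Therefore %s is %s." % (x, z)   # need real morphology
    return None

print(syllogize("Frogs are green.", "Kermit is a frog."))
# -> Therefore Kermit is green.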



One hypothesis is that the **unconscious** human mind is carrying out
operations that are roughly analogous to logical reasoning steps.  If this
is the case, then even humans who have never studied logic or its notation
would unconsciously and implicitly be doing "logic-like stuff".  See e.g. my
talk at

http://www.acceleratingfuture.com/people-blog/?p=2199

(which has a corresponding online paper as well).


>
>
> So perhaps someone can explain why we need formal knowledge representations
> to reason in AI.
>

I for one don't claim that we need formal knowledge representation for AGI,
only that it's one potentially very useful strategy.

IMO, formal logic is a cleaner and simpler way of doing part of what the
brain does via Hebbian-type modification of synaptic bundles between neural
clusters.
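To make the analogy explicit, here is a rough sketch (illustrative only; the
learning rate, the threshold, and the one-weight "cluster" encoding are my
own assumptions, not a model of any particular neural architecture) of how
the single rule man(x) -> mortal(x) corresponds to a Hebbian-style
association between two clusters: repeated co-activation strengthens the
connecting weight until activating MAN reliably activates MORTAL.

eta = 0.2        # learning rate
w = 0.0          # synaptic weight from the MAN cluster to the MORTAL cluster

# Hebbian learning: co-activation of pre- and post-synaptic clusters.
for _ in range(10):
    pre, post = 1.0, 1.0      # both clusters active together
    w += eta * pre * post     # delta_w = eta * pre * post

# Inference by activation spreading: "Socrates" activates MAN, and a strong
# enough weight carries the activation on to MORTAL.
man_activation = 1.0
mortal_activation = w * man_activation
print("MORTAL cluster active:", mortal_activation > 0.5)   # True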

Google does not need anything like formal logic (or formal-logic-like
Hebbian learning, etc.) because it is not trying to understand, reason,
generalize, etc.  It is just trying to find information in a large
knowledge-store, which is a much narrower and very different problem.

-- Ben G


