Stan,
I am sure that I can annotate simple ordinary text in such a way that
the computer can use it as if it understood it. At first I expect to
spend almost as much time thinking about how to annotate the text as I
spend writing the program. But I expect (and hope) that a much greater
percentage of my time will then shift to the actual programming.

So I believe I will be able to find a way to annotate simple input
text in order to convey interesting ideas to the computer. But just as
I know that the program will not 'understand' the text the way a
person could, I also know that it would not be able to understand it
without all the detailed annotations. That would indicate that it was
not capable of true learning the way human beings are. If I am
correct, it will be able to do a lot of learning, but it will tend to
be shallow learning. Let's suppose that I got my program to this
point. I would say that the program is a little different from other
AI programs, so it is innovative. Furthermore, I would say that the
annotations combined with the programming should constitute a good
basis for further research. And if I can get my program to this point,
then I should be able to get it to go further.
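To make the idea concrete, here is a toy sketch of what "relating new
input to previous text through annotations" might look like. Everything
in it — the tag names, the memory structure, the function — is an
invented stand-in, not Jim's actual scheme:

```python
# Hypothetical sketch (invented for illustration, not the actual design):
# hand-written annotation tags on each sentence point back at concepts
# introduced by earlier sentences, so the program can "relate" new input
# to previous text purely through shared tags.

memory = {}  # concept tag -> list of sentences annotated with it


def read_annotated(text, tags):
    """Store the sentence under each of its tags, and return the tags
    that already appeared in earlier input (the 'relations' found)."""
    links = [t for t in tags if t in memory]  # ties to prior sentences
    for t in tags:
        memory.setdefault(t, []).append(text)
    return links


read_annotated("A dog is an animal.", ["dog", "animal", "is-a"])
# A later sentence is "related" only through the shared tag 'dog':
links = read_annotated("The dog barked.", ["dog", "event:bark"])
print(links)  # -> ['dog']
```

Note that all of the relational work is done by the human annotator, not
the program — which is exactly the worry about "simulation" raised below.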

Why is it a simulation? Let's say that it can't figure a new problem
out. If I have to figure out how to annotate the new text in order to
get the program to interpret it adequately, then I would say that it
is a simulation. To show that the program was not simulating this
knowledge, I would have to show that the new annotations and
programming did not simply overrule other insightful interpretations
that had been used earlier, and that most of the annotations could be
programmatically deduced or derived for the input from knowledge that
the program could acquire on its own. It might only be possible to
write the program to derive a crude form of the annotations, but that
could still be sufficient to demonstrate that the program was capable
of actual thought.
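That last step — deriving a crude form of the annotations from the
input itself rather than writing them by hand — might be sketched as
follows. The heuristic here (lowercased content words minus a small
stopword list) is an invented placeholder for whatever knowledge the
program might actually acquire:

```python
# Toy sketch: derive crude annotations from raw text automatically.
# The word-level heuristic is an assumed stand-in, not a real proposal.
import re

STOPWORDS = {"a", "an", "the", "is", "was", "it"}


def derive_annotations(text):
    """Return a crude annotation set: the content words of a sentence."""
    words = re.findall(r"[a-z]+", text.lower())
    return {w for w in words if w not in STOPWORDS}


crude = derive_annotations("A dog is an animal.")
print(crude)  # -> {'dog', 'animal'} (in some order)
```

Even this crude output could then be compared against the hand-written
annotations to measure how much of the annotation burden the program
can take over for itself.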

Jim Bromer


On Thu, Nov 20, 2014 at 7:29 PM, Stanley Nilsen via AGI <[email protected]> 
wrote:
> On 11/20/2014 04:25 PM, Jim Bromer wrote:
>>
>> I was admitting that my plan to teach a computer to interpret simple
>> text by heavily annotating it to assist the program to relate new
>> input to previous text would be equivalent to saying that I would be
>> 'programming' it to respond appropriately to the text.
>
>
> Jim, in the text above you mention "annotating it."  Do you mean heavily
> annotating the text that you are supposed to understand?  If so, then, is
> the purpose to see how much needs to be added to text to make it
> "understandable"?
>
> Do you see your development time being spent on annotating text, or on
> making the mechanism that will relate this heavily annotated text to
> existing data?
>
> Below you mention that the programming would simulate "understanding."  In
> my thinking, understanding is the result of having pertinent rules and facts
> in a functioning logic system - like a Clips project.  Since I see
> understanding as this "stuff" that you acquired in your data, I'm not sure
> how it can be a simulation?  Do you mean that the annotated text and its
> processing will simulate an intelligent process (AI) that can break down
> this new text and turn it into rules and facts?
> Curious as to what you are seeing here...
> Stan
>
>> But the
>> question is: would this method work to give such a program more
>> traction than the typical AGI effort? Because if it did, then even
>> though my programming (using annotated text) would only simulate
>> 'understanding', the fact that it would work would mean that it might
>> be used as a model for developing the program further. My goal then is
>> to create a weak general AI (a weak-AGI) program that was capable of
>> initially doing a tiny bit of thinking for itself but which I could
>> gradually improve over time.
>>


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
