I watched the video, and your ideas are interesting, although I am not quite
sure what you are getting at.

I think that AI needs learning rules. But some people might say that all
modern AI paradigms have learning rules. So some disambiguation is needed
right away. You seem to be saying that, rather than looking at a great many
representations of objects (in one venue or within a narrow range of
variations), an AI program needs to be able to creatively decide that some
map, for instance, can refer to some particular place or kind of thing,
using learning rules and a collection of seemingly simple pieces of
knowledge about the place or object. Your example is that a car has wheels
and a compartment, so a picture that shows a box with two circles on either
side along the undercarriage can be interpreted by a human being as a
representation of a car. But then again, it can also be interpreted as a
table with two chairs. I agree with that, and I think that conceptual
projection and creative problem solving are absolutely necessary and
should be easy to implement. But, part
of the problem, in my opinion, is that conceptual integration has to be more
intricate than (for example) just projecting concepts onto other
conceptualizations. So, to make this quick, I am working on an AI program
that will allow me to make simulations of what I want the program to be
able to do, so that I can better understand the types of events that will happen
or need to happen to implement conceptual integration. I also think that
learning through communication is important, and my AI program is going to
be designed to learn both through communication and through trial and
experience. I am not certain what you intended regarding learning via
communication, but you mentioned that this kind of learning could be
evolutionarily fast.

My idea is that ideas or concepts can play different kinds of roles when
used with other ideas or concepts. These relationships can be used to
narrow and/or shape the trial and error processes of learning (including
creative problem solving). In human experience there is no clear
distinction between experiencing an event and learning about it, and this
points to a problem with most contemporary AI methodologies. I am
pretty sure you agree with this. But this also points to a methodology that
should be implemented, and it suggests that learning rules are employed
not only in training mode but also in the everyday situations where an
intelligent response is required.
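To illustrate that last point (this is my own toy sketch, not Jim's or Danko's actual system): an agent whose learning rule stays active during ordinary use has no separate training phase at all — every interaction is both a response and a potential weight update. The perceptron-style update rule and all of the names below are illustrative assumptions:

```python
# Toy illustration: an agent whose learning rule stays active during
# ordinary use, rather than only in a separate training phase.
# The perceptron-style update here is purely illustrative.

class OnlineLearner:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # knowledge lives in adjustable weights
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s >= 0 else -1

    def respond(self, x, feedback=None):
        """Act, and if the environment supplies feedback, learn from it
        immediately -- there is no distinct 'training mode'."""
        y = self.predict(x)
        if feedback is not None and y != feedback:
            # perceptron update: nudge the weights toward the correct response
            self.w = [wi + self.lr * feedback * xi
                      for wi, xi in zip(self.w, x)]
        return y

# Every interaction is simultaneously an experience and a learning event.
agent = OnlineLearner(n_features=2)
stream = [([1.0, 0.0], 1), ([0.0, 1.0], -1), ([1.0, 0.0], 1)]
for x, truth in stream:
    agent.respond(x, feedback=truth)
```

The point of the sketch is only the structure: `respond` is the everyday behavior, and learning happens inside it whenever feedback arrives, which collapses the usual train/deploy distinction.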

Jim Bromer

On Sun, Jan 10, 2016 at 5:59 AM, Danko Nikolic <
[email protected]> wrote:

> Dear Jim,
>
>   I agree with your point that deep learning machines cannot think outside
> the box.
>
>   However, there may already be some conceptual progress in this respect.
> I have recently made a proposal for how to make machines think in a more
> biology-like manner: The suggestion is that the machines do not store
> their knowledge in synapses or by similar means but instead, in a set of
> specialized learning rules. That way, when machines think, they literally
> must think outside the box because they have to think by applying (very
> fast) learning. That is, thinking does not occur "through internal
> computations" but through interaction with the environment. The argument is
> that this will enable machines to achieve the understanding that J. Searle
> was asking for.
>
> This proposal is described in this recent writing at IEET:
>
>    http://ieet.org/index.php/IEET/more/nikolic20160108
>
> And a more condensed version is in this TEDx talk:
>
>   https://www.youtube.com/watch?v=zZMlzMTR6l8
>
>
> Would you think this effort is helpful for the problem that you are
> pointing out?
>
> Thank you.
>
> Danko
>
> On 09/01/16 21:31, [email protected] wrote:
>
> This is a digest of messages to AGI.
>
> If Deep Learning is It then Why Are Search Engines Incapable of Thinking
> (Outside the Box or Otherwise)?
> <https://www.listbox.com/member/archive/303/2016/01/20160109152915:A46324DA-B70F-11E5-AEF6-CFF8EF10038B>
>
> *Sent by Jim Bromer <[email protected]>* at Sat,
> 9 Jan 2016 15:29:08 -0500
> If industry has AI pretty well figured out then why are search engines so
> incapable of thinking outside the box? The conclusion looks inescapable to
> me. Yes there will be a day when someone makes a significant achievement
> while the rest of us might miss it completely but the idea that
> contemporary deep search (or some other AI method) has achieved the hype or
> the implied conquest that winning at chess and Jeopardy seems to imply just
> does not jibe with the computing power Google, Bing, or IBM have. There is a
> substantial disconnect between low-level, almost-human reasoning and deep
> learning. Jim Bromer
>
> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/27154149-3c484689> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com>
>
>
>


