So the program has to achieve some creativity, and by using various
methods to determine similarity, a sense of the 'range' of that
creativity can be found. There are many situations where variations of
similarity are necessary. If the program is supposed to solve some
problem, 'the problem' should remain similar across different attempts
to solve it, but the 'possible solution methods' should vary, in order
to search for better 'solutions' or for 'solutions' that can work in
accord with 'solutions' to other co-existing problems.
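
To make this concrete, here is a minimal sketch in Python, assuming
Jaccard similarity over feature sets as the similarity method (the
function names and the 0.6 threshold are hypothetical illustrations,
not a definitive design):

def jaccard(a: set, b: set) -> float:
    """Similarity of two feature sets, in [0, 1]."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def select_varied_methods(problem_features: set, candidates: list[set],
                          max_overlap: float = 0.6) -> list[set]:
    """Keep candidate solution methods that differ enough from those
    already accepted, while all stay aimed at the same fixed problem."""
    accepted = []
    for method in candidates:
        relevant = jaccard(method, problem_features) > 0.0
        novel = all(jaccard(method, prior) < max_overlap
                    for prior in accepted)
        if relevant and novel:
            accepted.append(method)
    return accepted

The spread among the accepted methods, measured with the same
similarity function, is one crude gauge of the 'range' of creativity.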

Although I haven't expressed this well, and we have of course seen
similar points made in other contexts, it is, for anyone who cares to
see it, interesting. This parsing of problems (and problem parts) and
possible solutions (and solution parts) might be a fundamental part of
the basis for Stronger AI.

Jim Bromer

On Sat, May 2, 2015 at 10:09 AM, Jim Bromer <jimbro...@gmail.com> wrote:

> I think of adaptive process control as a method that combines
> numerical evaluations with a strong list of predetermined constraints.
> (Those constraints may be stated explicitly, or left unstated simply
> by leaving them outside the predetermined range of functionality of
> the controller.) The potential relation to stronger AI becomes murky.
> At the least we could see them as tools for some Stronger AI system.
> But when ideas like numerical controllers are used in AGI theories,
> they are defined as if they might be called by some other system under
> some conditions. I don't feel that this hypothetical application truly
> rises to the level of stronger AI without defining the implementation
> of the more creative aspects of such an application. For example, we
> might see an adaptive controller as being implemented at different
> levels where one can call on the other. While animals have some
> constraints on their behaviors, the typical application of an adaptive
> controller deals with situations where the processes being controlled
> have many strong constraints. You don't want your plane to start to
> wonder about dipping its wing into a body of water that it sees on the
> ground. I can only imagine adaptive process controllers in Stronger AI
> as working on different levels, where one (controller) might create
> the boundaries and framework that another would operate in, and then
> arrange for the other to be called in appropriate situations.
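>
> A minimal sketch of that two-level arrangement, in Python (all names
> are hypothetical, and the adaptation rule is only illustrative):
>
> class InnerController:
>     def __init__(self, lo: float, hi: float):
>         self.lo, self.hi = lo, hi        # boundaries imposed from above
>         self.gain = (lo + hi) / 2.0
>
>     def step(self, error: float) -> float:
>         # Crude adaptation: nudge the gain, but never leave the bounds.
>         self.gain = min(self.hi, max(self.lo, self.gain + 0.1 * error))
>         return self.gain * error         # control output
>
> class OuterController:
>     def frame(self, situation: str) -> InnerController:
>         # The outer level creates the boundaries and framework that the
>         # inner level operates in, and decides when it is called at all.
>         bounds = {"cruise": (0.1, 0.5), "landing": (0.5, 2.0)}
>         lo, hi = bounds.get(situation, (0.1, 0.5))
>         return InnerController(lo, hi)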
>
> Does throwing some range of adaptation onto a numerical process turn a
> Narrow AI method into Stronger AI? I don't think so. I pointed out that
> variation in the size of the fields being used to represent knowledge
> objects has to be part of Stronger AI to keep the application from
> falling into interpolation insipidness. Even though the ranges of
> numerical methods may extend far past any ability to enumerate them,
> interpolation insipidness occurs because the fluctuations in behavior
> may seem like trivial variations of behaviors that have been produced
> before. But here is something different for me to think about. The
> potential variations of the fields of knowledge objects have to be
> accompanied by some method of determining that the range of variation
> is not being used merely to produce insipid variants. That is a little
> difficult to define as a simple pure abstraction.
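>
> One way such a check could look, as a minimal Python sketch (the
> names, the Euclidean distance, and the epsilon value are all
> hypothetical choices for illustration):
>
> import math
>
> def is_insipid(variant: list[float], history: list[list[float]],
>                epsilon: float = 0.05) -> bool:
>     """True if the variant lands within epsilon of a behavior that was
>     already produced, however large the nominal numerical range is."""
>     def dist(a, b):
>         return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
>     return any(dist(variant, past) < epsilon for past in history)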
>
> Jim Bromer
>
> On Sat, May 2, 2015 at 2:50 AM, Steve Richfield <steve.richfi...@gmail.com>
> wrote:
>
>> Jim,
>>
>> Again, I think I see the POV to solve this. All animals, from single
>> cells to us, are fundamentally adaptive process control systems. We use our
>> intelligence to live better and more reliably, procreate, etc., much as
>> single-celled animals, only with MUCH richer functionality. Everything fits
>> this hierarchy of function leading to intelligence.
>>
>> Then, people like those on this forum start by ignoring this and trying
>> to create intelligence from whole cloth. This may be possible, but there is
>> NO existence proof for this, no data to guide the effort, etc. In short,
>> there is NO reason to expect a whole-cloth approach to work anytime during
>> the next century (or two).
>>
>> However, some of the mathematics of adaptive process control is known,
>> and I suspect the rest wouldn't be all that tough - if only SOMEONE were
>> working on it.
>>
>> I suspect that when the answers are known, it will be a bit like spread
>> spectrum communications, where there is a payoff for complexity, but
>> where ultimately there is a substitute for designed-in complexity,
>> e.g. the pseudo-random operation of spread spectrum systems. Genetics
>> seems to prefer designed-in complexity (like our brains), but there is
>> NO need for computers to have such limitations.
>>
>> Whatever path you take, you must "see a path" to have ANY chance of
>> succeeding. You must have a POV that helps you to "cut the crap" in pursuit
>> of your goal. Others here are working on whole-cloth approaches, yet
>> bristle when challenged for lacking a guiding POV. I see some hope in
>> adaptive control math. Perhaps you see something else, but it MUST have an
>> associated guiding POV for you to have any hope of succeeding - more than a
>> simple list of what it does NOT have.
>>
>> Steve
>>
>> On Fri, May 1, 2015 at 5:20 PM, Jim Bromer <jimbro...@gmail.com> wrote:
>>
>>> The other view that I forgot to add is that knowledge objects have
>>> to have components that can be combined in various ways, and that
>>> there are no absolute elementary knowledge objects. Every kind of
>>> knowledge object, including the particles that more coherent objects
>>> are made of, has the potential to be opened, explored, and related
>>> to the greater world of concepts (knowledge objects).
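>>>
>>> A minimal sketch of the data structure this suggests, in Python
>>> (hypothetical names): no object is elementary, since every
>>> component is itself a knowledge object that can be opened.
>>>
>>> from dataclasses import dataclass, field
>>>
>>> @dataclass
>>> class KnowledgeObject:
>>>     name: str
>>>     components: list["KnowledgeObject"] = field(default_factory=list)
>>>     relations: dict[str, "KnowledgeObject"] = field(default_factory=dict)
>>>
>>>     def open(self) -> list["KnowledgeObject"]:
>>>         # Even a 'particle' can be explored further.
>>>         return self.components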
>>>
>>> Jim Bromer
>>>
>>>
>>> On Fri, May 1, 2015 at 7:01 AM, Jim Bromer <jimbro...@gmail.com> wrote:
>>> > How can I describe the features and behaviors of a group of
>>> > hypothetical algorithms that would contain the potential to achieve
>>> > advances in AI so that I have some basis for actually designing them?
>>> > The first step is to describe some algorithms from narrow AI and then
>>> > show that my algorithms should, hypothetically, be stronger than them.
>>> >
>>> > You might just say that stronger AI is going to need all kinds of
>>> > algorithms but that does not give you enough to start thinking about
>>> > how you might design an advanced AI program.
>>> >
>>> > First of all, stronger AI has to be more than learning to associate
>>> > particular responses to particular inputs. It also has to be more than
>>> > mere numerical extrapolation or interpolation based on the use of a
>>> > numerical method that represents some particular problem.
>>> >
>>> > So then simplistic reinforcement, for example, is not - in itself -
>>> > enough. Numerical methods - in themselves - are not going to be
>>> > enough.
>>> >
>>> > One thing that I realized while trying to talk to Mike Tintner was
>>> > that true AI needs varying field sizes in order to hold enough
>>> > variation to avoid degenerating into simplistic extrapolations and
>>> > interpolations. (The data types do not have to be variable as long
>>> > as they can be used in strings and fields which are.)
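>>> >
>>> > A minimal sketch of the point about field sizes (hypothetical
>>> > names): the element type stays fixed, here float, but each field
>>> > can grow or shrink, so a knowledge object is not pinned to one
>>> > fixed-width vector that only supports interpolation.
>>> >
>>> > class VariableField:
>>> >     def __init__(self, values: list[float]):
>>> >         self.values = values        # fixed type, variable length
>>> >
>>> >     def extend(self, more: list[float]) -> None:
>>> >         self.values.extend(more)    # the field size varies over time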
>>> >
>>> > You need some kind of trial and error in stronger AI. You also need to
>>> > recognize that knowledge objects are not typically commensurate. So
>>> > your program needs to be able to fit the pieces of knowledge together
>>> > to see what makes sense and what does not. It needs to discover what
>>> > might be relevant to some situation and what needs to be 'translated'
>>> > from one knowledge object to another. It needs to recognize that even
>>> > though two or more knowledge objects may be relevant, the features may
>>> > not fit against the relevant features of the others. You might use
>>> > simple association or numerical correlation to designate these poorly
>>> > fitting parts but it is often going to take greater insight to
>>> > effectively integrate the different kinds of relevant knowledge
>>> > objects.
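>>> >
>>> > A minimal sketch of that trial-and-error fitting, in Python (the
>>> > names and the tolerance are hypothetical; real knowledge objects
>>> > would need far richer comparisons than a numerical difference):
>>> >
>>> > def fit_objects(a: dict[str, float], b: dict[str, float],
>>> >                 tolerance: float = 0.2):
>>> >     """Try two knowledge objects against each other feature by
>>> >     feature; return what fits and what needs 'translation'."""
>>> >     fits, misfits = {}, []
>>> >     for feature in a.keys() & b.keys():   # only shared features
>>> >         if abs(a[feature] - b[feature]) <= tolerance:
>>> >             fits[feature] = (a[feature] + b[feature]) / 2.0
>>> >         else:
>>> >             misfits.append(feature)       # poorly fitting part
>>> >     return fits, misfits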
>>> >
>>> > So finally I think that Stronger AI is going to need
>>> > reason-based-reasoning. (I still cannot understand how people in these
>>> > AI discussion groups have actually denied that.) In order to learn how
>>> > to use reasons effectively they will need to be integrated with the
>>> > knowledge objects that they are being used with.
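>>> >
>>> > A minimal sketch of what that integration could look like in
>>> > Python (hypothetical names): every conclusion carries the reasons
>>> > it rests on, and each reason points back into the knowledge
>>> > object it was learned with.
>>> >
>>> > from dataclasses import dataclass
>>> >
>>> > @dataclass
>>> > class Reason:
>>> >     statement: str
>>> >     source_object: str       # the knowledge object it came from
>>> >
>>> > @dataclass
>>> > class Conclusion:
>>> >     claim: str
>>> >     reasons: list[Reason]    # no claim without its reasons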
>>> >
>>> > Jim Bromer
>>>
>>>
>>
>>
>>
>>
>
>


