I don't find myself doing much conceptual prototyping in my head, but
I do think about things, and I make adjustments to my 'theories' about
them; those adjustments are integrated into the larger structures of
my thoughts on the subjects. The structure is not based only on
sequential processes and general processes (as many of my simple
'theories' seem to be at first); there are also extensive and
meaningful connections to other 'theories' and knowledge (as can be
seen in one of these messages). So what I am saying is that the
conceptual relations that might be used in a thought cannot all be
prototyped by the programmer.
Jim Bromer


On Sun, Jan 4, 2015 at 9:30 PM, Jim Bromer <[email protected]> wrote:
> I have to talk about some of the mechanisms. I can't help myself. I
> would expect the program, if I got it to some level of fundamental
> feasibility, to handle numerous kinds of situations as long as the
> knowledge it had built up was usable for those situations. The
> question is how it could support the kind of reasoning that I think
> should be possible. If I were able to teach the program something
> about a simple world model, it should subsequently be able to answer
> questions about that model. I should also be able to use
> generalizations and figures of speech that apply to that simple
> model but which could potentially be applied to situations of
> greater complexity as well. But the problem is (of course) that as
> it learns more, the number of possibilities will increase enough to
> eventually slow it down and befuddle it.
>
> I am hoping to get back to working on a text-based program. However,
> if I were able to get it to work, I think it would be simple to one
> day expand it to include some kind of visual processing as well. The
> combination of imagery and text would be interesting.
>
> Although I will program it initially to look for superficial
> relations in the text and to recombine them in different ways, I
> want it to be able to derive concepts through trial and error. From
> there it has to build further knowledge partly based on the way the
> user (me) reacts to the program. So it would have a slight tendency
> to draw conclusions about the basic relations between words (and
> other parts of the text) from the way the user responds to its
> expression of how it combines them. (The use of fundamental kinds of
> linguistic behavior to indicate how words might be related may need
> to be learned.)
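A minimal sketch of that trial-and-error loop, assuming a simple
co-occurrence scheme (the class and method names here are hypothetical
illustrations, not the program's actual design): superficial relations
are first scored from co-occurrence in the text, and the user's
reactions then reinforce or weaken them.

```python
import itertools
from collections import defaultdict

class RelationLearner:
    """Learns scored relations between words by trial and error.

    Superficial relations come from co-occurrence in the input text;
    each relation's score is then adjusted by how the user reacts when
    the program expresses a recombination that uses it.
    """

    def __init__(self):
        # (word_a, word_b), sorted -> relation strength
        self.scores = defaultdict(float)

    def observe(self, sentence):
        """Record superficial (co-occurrence) relations from raw text."""
        words = set(sentence.lower().split())
        for a, b in itertools.combinations(words, 2):
            self.scores[tuple(sorted((a, b)))] += 0.1

    def feedback(self, word_a, word_b, approved):
        """Reinforce or weaken a relation based on the user's reaction."""
        key = tuple(sorted((word_a, word_b)))
        self.scores[key] += 1.0 if approved else -1.0

    def strength(self, word_a, word_b):
        return self.scores[tuple(sorted((word_a, word_b)))]
```

The point of the sketch is only the shape of the loop: observation
gives the program weak initial relations, and the user's responses are
what actually sort out which of them are meaningful.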
>
> I believe that a simple piece of information, like a simple concept,
> has to be associated with hundreds or thousands of other simple
> pieces of information. I also believe that the analysis of some
> input has to be matched against an imaginative projection (including
> the projection of previously learned knowledge) in order to build a
> better foundation for what the input means and how it should be
> responded to. This is a complexity problem, so I also believe that
> extensive indexing has to be developed for the acquired knowledge.
> The indexing might, for example, be based on generalizations derived
> from the knowledge the program had acquired.
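As a sketch of what generalization-based indexing could look like
(assuming a simple inverted-index form; the names are hypothetical,
not a design commitment): concepts are filed under the
generalizations derived from them, so matching an input against prior
knowledge does not require scanning every stored concept.

```python
from collections import defaultdict

class GeneralizationIndex:
    """Indexes acquired knowledge under generalizations derived from it."""

    def __init__(self):
        # generalization -> set of concept names filed under it
        self.index = defaultdict(set)

    def add(self, concept, generalizations):
        for g in generalizations:
            self.index[g].add(concept)

    def candidates(self, generalizations):
        """Concepts filed under any of the given generalizations,
        ranked by how many of them each concept matches."""
        hits = defaultdict(int)
        for g in generalizations:
            for concept in self.index[g]:
                hits[concept] += 1
        return sorted(hits, key=hits.get, reverse=True)
```

So an input that suggests the generalizations 'pet' and 'four-legged'
would retrieve only the concepts filed under those generalizations,
best matches first, rather than everything the program knows.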
>
> Ben's example of a child learning about a pet is a good one. Of
> course a text-only AI/AGI program is not going to have the
> experiences a child can have with a pet. However, the program can be
> exposed to a lot of information about pets. I think this extensive
> knowledge, combined with trial-and-error interactions with a
> user-teacher, should make the program capable of good concept
> formation, even though it will be different from a child's.
>
> Human beings often seem to deal with opposing and contradictory
> theories about the world with little bother. It is only when a
> contradictory theory leads directly to some obstacle, or the study
> of a situation starts to highlight the conflict between theories,
> that it becomes a problem. So I think this is a situation that is
> best described by conceptual relativism. Even when we discover a
> contradiction, we usually first explain it away as a variation that
> can occur. It takes some hard-headedness to assume that an
> unexpected variation might represent a contradiction between
> theories.
>
> I believe that reason-based reasoning is also important. So a
> pet-like object might be visually noticed in a room based on its
> features and actions. If the animal or object is seen frequently and
> stands out against the background, a concept about it will be
> developed using concepts about the features and actions of other
> pets.
>
> Finally, let me add one more thing. Concepts may represent or refer
> to objects, but they can also play functional roles. So while a
> conceptual function prototype might be sufficient to potentially
> represent any kind of conceptual relation, I believe it is more to
> the point to say that the program must be capable of deriving
> conceptual function prototypes in response to the events it observes
> in the IO data environment. Let me draw a parallel. The argument can
> be made that any program is a system of yes-no questions and
> responses. But that doesn't mean that programmers could effectively
> use a programming language designed solely on that principle.
> Similarly, I believe that an AGI program has to be designed to
> implement the eventual formation of conceptual function prototypes
> and to be prepared to handle their application and development. Even
> if I am unable to figure out how the program could soundly derive
> functional prototypes (dynamically), I can use the idea in
> imaginative projections. The reason dynamic functional prototypes
> are so important is that if concepts become structurally (or
> abstractly) specialized, which is part of my theory, then there will
> probably be a need for new kinds of conceptual relations to
> generalize across them. I think this makes sense, and this kind of
> reasoning comes almost directly from speculation about the
> consequences of conceptual relativism as I see it.
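To make the distinction between referring and functional concepts
concrete, here is a toy sketch (everything in it, including the
'companion-of' relation and the derivation step, is a hypothetical
illustration; the real program would have to derive such prototypes
dynamically, which is exactly the open problem):

```python
class ConceptualFunctionPrototype:
    """A concept that plays a functional role: it maps observed
    concepts to a derived relation, rather than merely referring
    to an object."""

    def __init__(self, name, relation):
        self.name = name
        self.relation = relation  # a callable over two concepts

    def applies(self, a, b):
        return self.relation(a, b)

# A prototype that could, in principle, be derived from observing
# that a pet and a person keep co-occurring in household events
# (hand-written here, since the derivation is the unsolved part):
companion_of = ConceptualFunctionPrototype(
    "companion-of",
    lambda pet, person: person in pet.get("household", ()),
)

# A referring concept: it merely stands for an object.
pet = {"name": "Rex", "household": ("Jim",)}
```

The functional prototype is not a fact about any one object; it is a
relation the program can apply across whole families of specialized
concepts, which is why new kinds of them would be needed as concepts
specialize.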
>
>
>
> Jim Bromer
>
>
> On Sun, Jan 4, 2015 at 2:25 PM, Peter Voss <[email protected]> wrote:
>> I would find it useful if you could provide one or two specific examples of 
>> concepts being derived using existing concepts -- not the mechanics, but 
>> situations.
>>
>> Best,
>>
>> Peter
>>
>> -----Original Message-----
>> From: Jim Bromer via AGI [mailto:[email protected]]
>> Sent: Sunday, January 04, 2015 10:52 AM
>> ...
>> I was asked if the differences of my theories from the mainstream theories 
>> and the theories behind the AI / AGI Frameworks that are being devised are 
>> just a matter of semantics. I don't think they are....
>>
>> A true AGI program will need to derive concepts about its interactions with 
>> the IO data environment that it is exposed to.
>> It is going to take other concepts to interpret a concept....
>>


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424