When I design an object-oriented system, I first try to identify the kinds
of objects that must be represented, and then I try to identify the kinds
of interactions those objects can have. (There are of course multiple
iterations, with the kinds of interactions further informing and reshaping
my original decisions regarding the kinds of objects to be represented.)
Through this process, I gradually move from my abstract conception of the
problem space towards a more concrete representation, which helps me not
only to clarify my own understanding but also to implement the system in
workable code.

I think we could benefit from applying this process to your ideas of
conceptual relativism, conceptual structure, reason-based reasoning, etc.
Some of your ideas and observations resonate with my own, but you are
operating at such a level of abstraction that it is difficult to be sure we
are talking about the same things. What are the primitive components of
your theory? What are the primitive interactions they are capable of? In
order to convert such abstract ideas into code, we have to move towards
concreteness. Flying high into conceptual space helps us to identify common
patterns, but ultimately we must connect those concepts back to
well-grounded ones for them to be usefully engaged.

In my own system, I have some very specific components that work together
to form concepts. I keep iteratively refining these components and their
interactions by writing and rewriting code for them, identifying
shortcomings, and starting over with a better foundation from the knowledge
I have gained. Here is my current conceptual structure for conceptual
structures:


   - *Entities*: These are objects and events, whether abstract or
   concrete. They are defined only implicitly by the glomming together of
   manifestations, to be described below.
   - *Manifestations*: These are individual snapshots of entities at a
   given time and place. In natural language, they correspond to specific
   mentions of objects and events. In vision, they correspond to specific
   perceptions of objects and events. In the thinking process, they may also
   be generated through reasoning based on expectations of
   existence/occurrence. They can also be hypothetically generated during
   speculation. Manifestations clump together to form entities based on
   similarity, locality, and other topological factors.
   - *Attributes*: These are like tags that can be attached to
   manifestations/entities to represent their unique features or states. They
   are typically represented in natural language with adjectives and adverbs.
   - *Kinds*: These roughly correspond to classes in the object-oriented
   paradigm, serving to abstract common features among entities into a shared
   template that helps to shape expectations for the entities that are
   associated with them.
   - *Relationships*: These are like directed edges in a (multi)graph,
   linking two (or more) entities together, tagged with some label to identify
   the type of relationship between the entities.
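
The components above might be sketched as plain data structures. The
following is only a minimal illustration of the idea; every class name and
field is a hypothetical encoding I am inventing here for concreteness, not
the actual implementation:

```python
# Illustrative sketch of the five components; all names/fields hypothetical.
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    """A snapshot of an entity at a given time and place."""
    time: float
    place: str
    features: dict  # observed feature -> value

@dataclass
class Entity:
    """Defined implicitly by the manifestations clumped together under it."""
    manifestations: list = field(default_factory=list)
    attributes: set = field(default_factory=set)   # tag-like features/states
    kinds: set = field(default_factory=set)        # associated templates

@dataclass
class Kind:
    """Class-like template abstracting common features among entities."""
    name: str
    expected_attributes: set = field(default_factory=set)

@dataclass
class Relationship:
    """A labeled, directed (hyper)edge linking two or more entities."""
    label: str
    entities: tuple  # ordered participant entities

# Manifestations clump into an entity based on similarity/locality:
dog_sighting = Manifestation(time=1.0, place="yard", features={"color": "brown"})
dog = Entity(manifestations=[dog_sighting], attributes={"furry"})
```

The point of the sketch is only that each component is a small, inspectable
record, and that an entity holds no intrinsic definition beyond the
manifestations and tags attached to it.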


The conceptual elements above can be snapped together to form more complex
structures, building a model of a real or imagined situation or story,
which can then be used to simulate that scenario and make predictions about
its past/present/future behavior using expectations generated from supposed
or observed probabilistic rules of interaction between the entities in
question. Like the object-oriented paradigm on which it was initially
modeled (though it has since diverged), this system is capable of building
models of arbitrary systems at any level of detail or abstraction. Unlike
the object-oriented paradigm, this system is designed to avoid the implicit
assumptions OO carries of absolute knowledge regarding the truth of
predicates and the identities/states/relationships of objects.
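
One way to make the contrast with OO's absolute-truth assumption concrete:
every predicate carries a degree of belief rather than a boolean. The
following toy sketch is my own invention for illustration (the class name
and blending rule are assumptions, not part of the system described above):

```python
# Toy contrast with OO's absolute-truth assumption: predicates carry
# degrees of belief, never hard booleans. All names/rules hypothetical.
class BeliefModel:
    def __init__(self):
        self.beliefs = {}  # (subject, predicate) -> degree of belief in [0, 1]

    def observe(self, subject, predicate, prob, weight=0.5):
        """Blend a new observation into the current degree of belief."""
        old = self.beliefs.get((subject, predicate), 0.5)  # 0.5 = unknown
        self.beliefs[(subject, predicate)] = (1 - weight) * old + weight * prob

    def credence(self, subject, predicate):
        return self.beliefs.get((subject, predicate), 0.5)

m = BeliefModel()
m.observe("rex", "is_a_dog", 0.9)  # strong evidence
m.observe("rex", "is_a_dog", 0.9)  # reinforcement
confidence = m.credence("rex", "is_a_dog")  # rises toward 0.9, never True
```

Nothing in such a model ever asserts a predicate outright; identities,
states, and relationships stay revisable as new manifestations arrive.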

Your concept of conceptual relativism, as I understand it from the highly
abstract statements you have made about it, sounds to me like the notion
that the meanings of the various elements of conceptual structure should be
determined by their interconnections with each other, as is the case with
my system. In the case of my preceding attempt to make my abstract
conceptions of conceptual structure more concrete, this is tantamount to
saying that the meaning of each component is defined by how the components
are snapped together to form a cognitive model, and how the particular
component fits into this larger integrated whole. Reason-based reasoning
would take the form of a heuristic: unexpected attributes and relationships
must be further analyzed until they become expected, with
changes made to the surrounding model to make them more reasonable
according to the system's learned rules for model consistency. (In other
words, the system takes an inconsistent local configuration to be a cue for
local refinement of the model.)
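
That refinement cue could be rendered as a simple loop: any attribute or
relationship whose expectedness falls below a surprise threshold triggers
local adjustment until it becomes expected. The threshold, the expectedness
measure, and the refinement step below are all placeholders I am inventing
for the sake of concreteness:

```python
# Toy rendering of "unexpected configuration -> local refinement".
# Threshold, expectedness measure, and refinement step are invented.
SURPRISE_THRESHOLD = 0.2

def expectedness(model, fact):
    """How strongly the current model expects this attribute/relationship."""
    return model.get(fact, 0.0)

def refine(model, fact):
    """Local refinement: adjust the surrounding model so the fact becomes
    more expected under the system's consistency rules."""
    model[fact] = min(1.0, model.get(fact, 0.0) + 0.3)

def reason_about(model, observed_facts):
    # Unexpected facts are analyzed (here: refined) until they are expected.
    for fact in observed_facts:
        while expectedness(model, fact) < SURPRISE_THRESHOLD:
            refine(model, fact)
    return model

model = {"dog-barks": 0.9}
reason_about(model, ["dog-barks", "dog-meows"])
# "dog-barks" is already expected; "dog-meows" is refined until it clears
# the threshold
```

In a real system the refinement step would of course propagate changes
through neighboring parts of the model rather than nudging a single number,
but the control flow is the heuristic I have in mind.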

Does this accurately capture the insights you have been attempting to
convey? If not, can you make your expressions of them more concrete? How
would you modify my above characterization of conceptual structures to
better suit your own theory?


On Mon, Jan 5, 2015 at 10:46 AM, Jim Bromer via AGI <[email protected]> wrote:

> I don't find myself doing much conceptual prototyping in my head but I
> do think about things and I make adjustments to my 'theories' about
> things and these adjustments are integrated into the greater
> structures of the thoughts about these subjects. The structure is not
> only based on sequential processes and general processes (as many of
> my simple 'theories' seem to be at first) but there are extensive and
> meaningful connections to other 'theories' and knowledge (as can be
> seen in one of these messages.) So what I am saying is that the
> conceptual relations that might be used in a thought cannot be all
> prototyped by the programmer.
> Jim Bromer
>
>
> On Sun, Jan 4, 2015 at 9:30 PM, Jim Bromer <[email protected]> wrote:
> > I have to talk about some of the mechanisms.  I can't help myself. I
> > would expect the program, if I got it to some level of fundamental
> > feasibility, to handle numerous kinds of situations as long as the
> > knowledge it had built up was useable for those situations. The
> > question is how could it be able to support the kind of reasoning that
> > I think should be possible? If I was able to teach the program
> > something about a simple world model it should be able to subsequently
> > answer something about that model. And I also should be able to use
> > generalizations and figures of speech that could be applied to that
> > simple model but which could potentially be applied to situations of
> > greater complexity as well.  But the problem is (of course) that as it
> > learns more the number of possibilities should increase sufficiently
> > to eventually slow it down and befuddle it.
> >
> > I am hoping to get back to working on a text-based program. However,
> > if I was able to get it to work I think it would be simple to one day
> > expand it to include some kind of visual processing as well. The
> > combination of imagery and text would be interesting.
> >
> > Although I will program it to initially look for superficial relations
> > in the text and to recombine them in different ways, I want it to be
> > able to derive concepts through trial and error. From there it has to
> > build further knowledge partly based on the way the user (me) reacts
> > to the program. So it would have a slight tendency to draw conclusions
> > about the basic relations between words (and other parts of text) by
> > the way the user responds to its expression of how it combines them.
> > (The use of fundamental kinds of linguistic behavior to indicate how
> > words might be related may need to be learned.)
> >
> > I believe that a simple piece of information, like a simple concept,
> > has to be associated with hundreds or thousands of other simple pieces
> > of information. I also believe that the analysis of some input has to
> > be matched against an imaginative projection (including the projection
> > of previously learned knowledge) in order to build a better foundation
> > of what the meaning of the input is and how it should be responded to.
> > This is a complexity problem so I also believe that extensive indexing
> > also has to be developed for the acquired knowledge. The indexing
> > might, for example, be based on generalizations derived from the
> > knowledge that it had acquired.
> >
> > Ben's example of a child learning about a pet is a good one. Of course
> > a text only AI/AGI program is not going to have the experiences a
> > child can have with a pet. However, the program can be exposed to a
> > lot of information about pets. I think this extensive knowledge,
> > combined with trial and error interactions with a user-teacher should
> > make the program capable of good concept formation even though it will
> > be different from a child's.
> >
> > Human beings often seem to deal with opposing and contradictory
> > theories about the world with little bother. It is only when a
> > contradictory theory leads directly to some obstacle or the study of a
> situation starts to highlight the conflict in theories that it becomes
> a problem. So I think this is a situation that can be described best
> > with conceptual relativism. Even when we discover a contradiction we
> > usually first explain it away as a variation that can occur. It takes
> > some hard headedness to assume that an unexpected variation might
> > represent a contradiction in theories.
> >
> > I believe that reason-based reasoning is also important. So a pet-like
> > object might be visually noticed in a room based on its features and
> > actions.  If the animal or object is seen frequently and it stands out
> > against the background, a concept about it will be developed using
> > concepts about the features and actions of other pets.
> >
> > Finally, let me add one more thing. Concepts may represent or refer to
> > objects but they can also play functional roles. So while a conceptual
> > function prototype might be sufficient to potentially represent any
> > kind of conceptual relation, I believe it is more to the point to say
> that the program must be capable of deriving conceptual function
> > prototypes in response to the events it observes in the IO data
> > environment. Let me draw a parallel. The argument can be made that any
> > program is a system of yes-no questions and responses. But that
> > doesn't mean that programmers could effectively use a programming
> > language that was designed solely on that principle. Similarly, I
> > believe that an AGI program has to be designed to implement the
> > eventual formation of conceptual function prototypes and to be
> > prepared to handle their application and development. Even if I am
> > unable to figure out how the program could soundly derive functional
> > prototypes (dynamically) I can use the idea in imaginative
> projections. The reason dynamic functional prototypes are so important
> is that if concepts become structurally (or abstractly)
> > specialized, which is part of my theory, then there will probably be a
> need for new kinds of conceptual relations to generalize across them. I
> > think this makes sense and this kind of reasoning comes almost
> > directly from speculation about the consequences of conceptual
> > relativism as I see it.
> >
> >
> >
> > Jim Bromer
> >
> >
> > On Sun, Jan 4, 2015 at 2:25 PM, Peter Voss <[email protected]> wrote:
> >> I would find it useful if you could provide one or two specific
> examples of concepts being derived using existing concepts -- not the
> mechanics, but situations.
> >>
> >> Best,
> >>
> >> Peter
> >>
> >> -----Original Message-----
> >> From: Jim Bromer via AGI [mailto:[email protected]]
> >> Sent: Sunday, January 04, 2015 10:52 AM
> >> ...
> >> I was asked if the differences of my theories from the mainstream
> theories and the theories behind the AI / AGI Frameworks that are being
> devised are just a matter of semantics. I don't think they are....
> >>
> >> A true AGI program will need to derive concepts about its interactions
> with the IO data environment that it is exposed to.
> >> It is going to take other concepts to interpret a concept....
> >>
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/23050605-2da819ff
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>


