Jim,

I'm sharing this with the rest of the group. I must have replied to only
you in the first place, and so everything else has just been between us.

On Mon, Jan 12, 2015 at 4:52 PM, Jim Bromer <[email protected]> wrote:

> When I said that a concept could be derived from other concepts I was
> really thinking of the referents of the conceptual data structures. I
> don't have a plan to use derived classes for derived referents. I was
> just pointing that out because that is the basis for a non-
>
> I was just pointing out that a concept may be derived from another
> concept which may itself be derived from the first concept, because
> that is the basis for a coherentist model of knowledge (which is to
> say it is not a strictly logical model of knowledge). However, a
> coherentist model does not have to be radical; it can be built with
> logic, probability, and other logical-mathematical systems. I am
> saying that this is the reality, so we should be thinking about it.
> Jim Bromer
>
>
> On Mon, Jan 12, 2015 at 5:46 PM, Jim Bromer <[email protected]> wrote:
> > It took me a few minutes to understand what you meant by saying that
> > what I was talking about sounded like a kind of reflection. A computer
> > program is an inductive-deductive system. As long as the effects of a
> > statement are constrained in some ways the program will behave as
> > expected. So an input to a word processor does not have the same
> > computational potential as the input to a programming language. I
> > don't mean that programs are actually defined this way but that
> > programs which are tested and improved on will tend toward something
> > that is roughly equivalent to this.
> >
> > The question of whether the program is running within a well-defined
> > set of logical constraints would (for example) be relative to the
> > vantage point of an engineer (like a reflection engineer). But this
> > too is relative, of course. If an analyst sets the bounds in one way
> > then even a well-tested subprogram might easily go beyond those
> > bounds. (I meant that as an obvious statement.)
> >
> > When I said that a concept could be derived from other concepts I was
> > really thinking of the referents of the conceptual data structures. I
> > don't have a plan to use derived classes for derived referents. I was
> > just pointing that out because that is the basis for a non-
> >
> > I do think we can write working programs which represent relativist
> > concepts. Actually, I think most programs with a lot of IO work that
> > way regardless of how tightly they are engineered. I just don't think
> > people recognize that the concepts they have their programs represent
> > are relativistic, and so they miss the clues on how this can be
> > better used for AGI.
> >
> > For example, the concepts used in a statement have meaning based on
> > previous uses. We can use words to direct other people to interpret
> > these statements. So a single statement does not contain all the
> > information that is needed to interpret it. (That is obviously true.)
> > So a statement may have different interpretations based on previous
> > knowledge and based on different directions on how it should be used.
> > People can even set up systems of interpretations similar to the
> > results of a probabilistic ordering on how a word or phrase is used,
> > but they can also set up systems on how to interpret something
> > through linguistic direction in conversation, or by both parties
> > referring to the same information.
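A minimal sketch of such a probabilistic ordering of interpretations, in Python (the class and names here are illustrative assumptions, not from any actual system): new statements default to the most frequently seen prior sense of a word, while an explicit linguistic direction in conversation can override that default.

```python
from collections import Counter, defaultdict

class Interpretations:
    """Orders the possible senses of a word by how often each has been
    used before; an explicit direction in conversation can override it."""
    def __init__(self):
        self.usage = defaultdict(Counter)  # word -> Counter of senses

    def observe(self, word, sense):
        self.usage[word][sense] += 1

    def interpret(self, word, directive=None):
        if directive is not None:          # linguistic direction wins
            return directive
        if not self.usage[word]:
            return None  # the statement alone carries too little information
        return self.usage[word].most_common(1)[0][0]

ws = Interpretations()
ws.observe("bank", "river-edge")
ws.observe("bank", "finance")
ws.observe("bank", "finance")
assert ws.interpret("bank") == "finance"
assert ws.interpret("bank", directive="river-edge") == "river-edge"
```

Note how a single statement ("bank") does not contain all the information needed to interpret it; the prior usage and the directive supply the rest.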
> >
> > I just looked up the term "computable function"
> > (http://en.wikipedia.org/wiki/Computable_function) and found that
> > Enderton gave a list of characteristics of a computable function; the
> > meaning of one of them was,
> > "There must be exact instructions (i.e. a program), finite in length,
> > for the procedure."
> > This was interpreted as meaning,
> > "Thus every computable function must have a finite program that
> > completely describes how the function is to be computed. It is
> > possible to compute the function by just following the instructions;
> > no guessing or special insight is required."
> >
> > I am saying that some guessing or special insight is expected to be
> > necessary. Although this may just be a referent discrepancy (how the
> > words are used to refer to something), I think the discrepancy is so
> > important that it should be emphasized. So the program would have to
> > have something that effectively acted like a computable function to
> > interpret a statement well, but it wouldn't always interpret
> > statements well (it can learn at an appropriate level) and there is an
> > element of selection based on guessing (it has to figure some things
> > out by making educated guesses.)
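The contrast with Enderton's characterization can be put in code as a toy illustration (the function and names are made up for this sketch): the program follows exact, finite instructions where a known pattern applies, and falls back to an educated guess where the instructions underdetermine the interpretation.

```python
def interpret(statement, known_patterns, guesser=None):
    """Follow exact, finite instructions where a known pattern applies;
    otherwise fall back to an educated guess among candidate readings."""
    for pattern, reading in known_patterns:
        if pattern in statement:
            return reading, "computed"        # no guessing or insight required
    if guesser is not None:
        return guesser(statement), "guessed"  # selection by educated guess
    return None, "unresolved"

patterns = [("hello", "greeting"), ("bye", "farewell")]
best_guess = lambda s: "question" if s.endswith("?") else "remark"

assert interpret("hello there", patterns) == ("greeting", "computed")
assert interpret("what now?", patterns, best_guess) == ("question", "guessed")
```

The program as a whole is still a computable function; the "guessing" is just a selection heuristic inside it, which is the referent discrepancy being pointed at.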
> >
> > So the referents and the emphasis of meaning (encoding) are the things
> > that I am getting at.
> >
> > I didn't realize I sent my previous message to you rather than the
> > agi list. The reply menu in Gmail seems to send messages to
> > individuals rather than groups, and I don't always catch it, since it
> > doesn't always act that way and I expect it to act like most other
> > email programs.
> >
> >
> > Jim Bromer
> >
> >
> > On Mon, Jan 12, 2015 at 10:51 AM, Aaron Hosford <[email protected]> wrote:
> >>> One of the issues that this opens up is that a concept may be
> >>> derived from a concept which is derived from it.
> >>
> >>
> >> Derived in what sense? As in class inheritance? Or as in logical
> >> derivations (proofs)?
> >>
> >>
> >>> So something that might seem like it is a reference to a fundamental
> >>> element of reality (or of the imagination) might at other times be
> >>> recognized to be a complex object composed of other parts.
> >>
> >>
> >> It sounds like what you are talking about is a semantic version of
> >> reflection
> >> (http://en.wikipedia.org/wiki/Reflection_%28computer_programming%29).
> >> At some point, the code has to have a bottom -- a fundamental set of
> >> elements that it operates on -- to its representation, or it won't
> >> be computable. This does not imply that these elements should be in
> >> some way fundamental to reality, only that the software knows how to
> >> work with these primitive constructs to build more complex ones. If
> >> we are able to construct complex representations that represent
> >> these primitive constructs, essentially using them to reason about
> >> themselves, and then apply the results of that meta-level reasoning
> >> back to the original constructs, then we will have what amounts to
> >> virtualization of conceptual relativism; we can convert any element
> >> that at face value is fundamental into something that is not, and
> >> back again. This leaves us in the position of having a computable
> >> system that is nonetheless capable of completely reparsing its
> >> understanding of any arbitrary concept, however fundamental, to
> >> express it in terms of other concepts.
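This "virtualization of conceptual relativism" could be sketched roughly as follows (hypothetical names, purely an illustration of converting a face-value-fundamental element into a composite and back):

```python
class Concept:
    """A concept is either primitive (an opaque symbol the system operates
    on directly) or composite (expressed in terms of other concepts)."""
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts  # None means "currently treated as fundamental"

    def is_primitive(self):
        return self.parts is None

def reify(concept, parts):
    """Re-express a face-value-fundamental concept as a composite."""
    return Concept(concept.name, parts=parts)

def collapse(concept):
    """Treat a composite as fundamental again for ordinary reasoning."""
    return Concept(concept.name)

rock = Concept("rock-particle")
assert rock.is_primitive()
detailed = reify(rock, [Concept("events"), Concept("reactions")])
assert not detailed.is_primitive()
back = collapse(detailed)
assert back.is_primitive() and back.name == "rock-particle"
```

The point of the sketch is only that "fundamental" is a stance the system takes toward a concept, not a fixed property of it.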
> >>
> >>
> >> On Sat, Jan 10, 2015 at 10:09 AM, Jim Bromer <[email protected]> wrote:
> >>>
> >>> Aaron,
> >>> Much of your description sounds reasonable to me. The major difference
> >>> is that I see conceptual relativism as a description of reality - as
> >>> best I see it. There is no such thing as a conceptual element even
> >>> though we need to use data objects as if they were elemental at times.
> >>> So something that might seem like it is a reference to a fundamental
> >>> element of reality (or of the imagination) might at other times be
> >>> recognized to be a complex object composed of other parts. For
> >>> example, it might only be understood using different knowledge which
> >>> means that the nature of the thing is really dependent on other kinds
> >>> of referents. Or take a particle of a rock. It might seem like a
> >>> fundamental element but it is not really. It is a complex system of
> >>> events and reactions. So while in most thoughts the particle might be
> >>> best treated as a simple referent, in other thoughts it can be treated
> >>> as a key to better understanding much of the universe.
> >>>
> >>> This idea of conceptual relativism shows that concepts can be
> >>> introduced with all sorts of potential complexities. Therefore, a
> >>> computational system that is capable of handling them must be capable
> >>> of dealing with complex referents even if they are treated as
> >>> elemental. Your note that Entities can be either abstract or
> >>> concrete is the sort of thing that is needed for conceptual
> >>> relativism - that is, if thoughts are composed of relativistic
> >>> concepts, as I believe they are.
> >>>
> >>> Of course our computational system has to deal with data objects as if
> >>> they were elemental. But at the same time it has to be able to deal
> >>> with the potential for greater complexity. One of the issues that this
> >>> opens up is that a concept may be derived from a concept which is
> >>> derived from it. Although this may make a dedicated logician or
> >>> programmer a little uneasy, it is obvious that we deal with situations
> >>> like that frequently. Often this kind of problem can be resolved by
> >>> recognizing that there are common parts related to the two objects or
> >>> that one or both of the concepts is illogical or that it is just
> >>> paradoxical. But at other times it might be a perfectly reasonable
> >>> situation that does not need to be resolved even if it seems like it
> >>> might be further resolvable on further thought.
> >>> Jim Bromer
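The circular-derivation situation described above can at least be detected mechanically. A toy sketch (the names are illustrative), treating derivations as "derived-from" links and walking them until a concept is found to be transitively derived from itself:

```python
def find_cycle(derivations, start):
    """Walk 'derived-from' links and report whether a concept is
    (transitively) derived from itself -- the situation a coherentist
    model tolerates but a strict logician would flag."""
    seen = set()
    node = start
    while node in derivations:
        if node in seen:
            return True
        seen.add(node)
        node = derivations[node]
    return False

# 'heat' is derived from 'motion', which is derived from 'energy',
# which is in turn derived from 'heat' -- mutually derived concepts.
derived_from = {"heat": "motion", "motion": "energy", "energy": "heat"}
assert find_cycle(derived_from, "heat")
assert not find_cycle({"dog": "animal"}, "dog")
```

Detecting such a cycle does not force a resolution: as argued above, the system can flag it, factor out common parts, or simply leave it in place.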
> >>>
> >>>
> >>> On Fri, Jan 9, 2015 at 4:47 PM, Aaron Hosford <[email protected]> wrote:
> >>> > When I design an object-oriented system, I first try to identify
> >>> > the kinds of objects that must be represented, and then I try to
> >>> > identify the kinds of interactions those objects can have. (There
> >>> > are of course multiple iterations, with the kinds of interactions
> >>> > further informing and reshaping my original decisions regarding
> >>> > the kinds of objects to be represented.) Through this process, I
> >>> > gradually move from my abstract conception of the problem space
> >>> > towards a more concrete representation, which helps me not only to
> >>> > better clarify my own understanding, but also to implement the
> >>> > system in workable code.
> >>> >
> >>> > I think we could benefit from applying this process to your ideas
> >>> > of conceptual relativism, conceptual structure, reason-based
> >>> > reasoning, etc. Some of your ideas and observations resonate with
> >>> > my own, but you are operating at such a level of abstraction that
> >>> > it is difficult to be sure we are talking about the same things.
> >>> > What are the primitive components of your theory? What are the
> >>> > primitive interactions they are capable of? In order to convert
> >>> > such abstract ideas into code, we have to move towards
> >>> > concreteness. Flying high into conceptual space helps us to
> >>> > identify common patterns, but ultimately we must connect those
> >>> > concepts back to well-grounded ones for them to be usefully
> >>> > engaged.
> >>> >
> >>> > In my own system, I have some very specific components that work
> >>> > together to form concepts. I keep iteratively refining these
> >>> > components and their interactions by writing and rewriting code
> >>> > for them, identifying shortcomings, and starting over with a
> >>> > better foundation from the knowledge I have gained. Here is my
> >>> > current conceptual structure for conceptual structures:
> >>> >
> >>> > Entities: These are objects and events, whether abstract or
> >>> > concrete. They are defined only implicitly by the glomming
> >>> > together of manifestations, to be described below.
> >>> > Manifestations: These are individual snapshots of entities at a
> >>> > given time and place. In natural language, they correspond to
> >>> > specific mentions of objects and events. In vision, they
> >>> > correspond to specific perceptions of objects and events. In the
> >>> > thinking process, they may also be generated through reasoning
> >>> > based on expectations of existence/occurrence. They can also be
> >>> > hypothetically generated during speculation. Manifestations clump
> >>> > together to form entities based on similarity, locality, and other
> >>> > topological factors.
> >>> > Attributes: These are like tags that can be attached to
> >>> > manifestations/entities to represent their unique features or
> >>> > states. They are typically represented in natural language with
> >>> > adjectives and adverbs.
> >>> > Kinds: These roughly correspond to classes in the object-oriented
> >>> > paradigm, serving to abstract common features among entities into
> >>> > a shared template that helps to shape expectations for the
> >>> > entities that are associated with them.
> >>> > Relationships: These are like directed edges in a (multi)graph,
> >>> > linking two (or more) entities together, tagged with some label to
> >>> > identify the type of relationship between the objects.
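One possible rendering of these components as data structures, purely as an illustrative sketch (the field choices are assumptions, not the actual code of the system being described):

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    """One snapshot of an entity at a time and place (a mention, a
    percept, or a hypothetical generated during speculation)."""
    time: float
    place: str
    attributes: set = field(default_factory=set)  # tag-like features/states

@dataclass
class Entity:
    """Defined only implicitly by the glomming-together of manifestations."""
    manifestations: list = field(default_factory=list)
    kinds: set = field(default_factory=set)       # shared class-like templates

@dataclass
class Relationship:
    """A labeled directed edge in a multigraph over entities."""
    label: str
    source: Entity
    target: Entity

# A pet cat, glimpsed twice, chasing a toy:
cat = Entity(kinds={"cat", "pet"})
cat.manifestations.append(Manifestation(0.0, "kitchen", {"sleepy"}))
cat.manifestations.append(Manifestation(1.0, "hall", {"alert"}))
toy = Entity(kinds={"toy"})
chase = Relationship("chases", cat, toy)
assert len(cat.manifestations) == 2 and chase.label == "chases"
```

Note that the Entity here has no identity of its own beyond its manifestations and kinds, matching the "defined only implicitly" description.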
> >>> >
> >>> >
> >>> > The conceptual elements above can be snapped together to form
> >>> > more complex structures, building a model of a real or imagined
> >>> > situation or story, which can then be used to simulate that
> >>> > scenario and make predictions about its past/present/future
> >>> > behavior using expectations generated from supposed or observed
> >>> > probabilistic rules of interaction between the entities in
> >>> > question. Like the object-oriented paradigm after which it was
> >>> > initially modeled (but has since diverged from), this system is
> >>> > capable of building models of arbitrary systems to any level of
> >>> > detail or abstraction. Unlike the object-oriented paradigm, this
> >>> > system is designed to avoid the implicit assumptions OO carries of
> >>> > absolute knowledge regarding the truth of predicates and the
> >>> > identities/states/relationships of objects.
> >>> >
> >>> > Your concept of conceptual relativism, as I understand it from the
> >>> > highly abstract statements you have made about it, sounds to me
> >>> > like the notion that the meanings of the various elements of
> >>> > conceptual structure should be determined by their
> >>> > interconnections with each other, as is the case with my system.
> >>> > In the case of my preceding attempt to make my abstract
> >>> > conceptions of conceptual structure more concrete, this is
> >>> > tantamount to saying that the meaning of each component is defined
> >>> > by how the components are snapped together to form a cognitive
> >>> > model, and how the particular component fits into this larger
> >>> > integrated whole. Reason-based reasoning would take the form of a
> >>> > heuristic whereby attributes and relationships that are unexpected
> >>> > must be further analyzed until they become expected, with changes
> >>> > being made to the surrounding model to make them more reasonable
> >>> > according to the system's learned rules for model consistency. (In
> >>> > other words, the system takes an inconsistent local configuration
> >>> > to be a cue for local refinement of the model.)
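That parenthetical heuristic -- an inconsistent local configuration as a cue for local refinement -- might be sketched like this (the rule representation is an assumption made for the illustration):

```python
def refine(model, rules, max_passes=10):
    """Reason-based reasoning as a heuristic loop: any rule violation is
    treated as a cue for a local change to the surrounding model, repeated
    until everything is 'expected' (or the pass budget runs out)."""
    for _ in range(max_passes):
        violated = [(check, repair) for check, repair in rules if not check(model)]
        if not violated:
            return model  # every attribute/relationship is now expected
        for _, repair in violated:
            model = repair(model)  # local refinement, not global rebuild
    return model

# Toy model: a dict of predicates. Rule: anything tagged 'flies' is
# expected to be tagged 'has_wings'; the repair adds the missing tag.
rule = (lambda m: not m.get("flies") or m.get("has_wings"),
        lambda m: {**m, "has_wings": True})
model = refine({"flies": True}, [rule])
assert model["has_wings"]
```

The consistency rules here stand in for the system's learned rules for model consistency; in the described system they would themselves be learned rather than hand-written.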
> >>> >
> >>> > Does this accurately capture the insights you have been
> >>> > attempting to convey? If not, can you make your expressions of
> >>> > them more concrete? How would you modify my above characterization
> >>> > of conceptual structures to better suit your own theory?
> >>> >
> >>> >
> >>> > On Mon, Jan 5, 2015 at 10:46 AM, Jim Bromer via AGI <[email protected]> wrote:
> >>> >>
> >>> >> I don't find myself doing much conceptual prototyping in my head
> >>> >> but I do think about things and I make adjustments to my
> >>> >> 'theories' about things, and these adjustments are integrated
> >>> >> into the greater structures of the thoughts about these subjects.
> >>> >> The structure is not only based on sequential processes and
> >>> >> general processes (as many of my simple 'theories' seem to be at
> >>> >> first) but there are extensive and meaningful connections to
> >>> >> other 'theories' and knowledge (as can be seen in one of these
> >>> >> messages.) So what I am saying is that the conceptual relations
> >>> >> that might be used in a thought cannot all be prototyped by the
> >>> >> programmer.
> >>> >> Jim Bromer
> >>> >>
> >>> >>
> >>> >> On Sun, Jan 4, 2015 at 9:30 PM, Jim Bromer <[email protected]> wrote:
> >>> >> > I have to talk about some of the mechanisms. I can't help
> >>> >> > myself. I would expect the program, if I got it to some level
> >>> >> > of fundamental feasibility, to handle numerous kinds of
> >>> >> > situations as long as the knowledge it had built up was useable
> >>> >> > for those situations. The question is how could it be able to
> >>> >> > support the kind of reasoning that I think should be possible?
> >>> >> > If I was able to teach the program something about a simple
> >>> >> > world model it should be able to subsequently answer something
> >>> >> > about that model. And I also should be able to use
> >>> >> > generalizations and figures of speech that could be applied to
> >>> >> > that simple model but which could potentially be applied to
> >>> >> > situations of greater complexity as well. But the problem is
> >>> >> > (of course) that as it learns more, the number of possibilities
> >>> >> > should increase sufficiently to eventually slow it down and
> >>> >> > befuddle it.
> >>> >> >
> >>> >> > I am hoping to get back to working on a text-based program.
> >>> >> > However, if I was able to get it to work I think it would be
> >>> >> > simple to one day expand it to include some kind of visual
> >>> >> > processing as well. The combination of imagery and text would
> >>> >> > be interesting.
> >>> >> >
> >>> >> > Although I will program it to initially look for superficial
> >>> >> > relations in the text and to recombine them in different ways,
> >>> >> > I want it to be able to derive concepts through trial and
> >>> >> > error. From there it has to build further knowledge partly
> >>> >> > based on the way the user (me) reacts to the program. So it
> >>> >> > would have a slight tendency to draw conclusions about the
> >>> >> > basic relations between words (and other parts of text) by the
> >>> >> > way the user responds to its expression of how it combines
> >>> >> > them. (The use of fundamental kinds of linguistic behavior to
> >>> >> > indicate how words might be related may need to be learned.)
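A toy sketch of that slight tendency to draw conclusions from user reactions (the class and the numbers are illustrative assumptions): approvals and disapprovals nudge a pairwise association strength, rather than setting hard rules.

```python
from collections import defaultdict

class RelationLearner:
    """Draws tentative conclusions about relations between words from
    how the user reacts to the program's attempts at combining them."""
    def __init__(self):
        self.strength = defaultdict(float)  # (word, word) -> association

    def propose(self, word_a, word_b):
        return (word_a, word_b)

    def feedback(self, pair, user_approved):
        # A slight tendency only: small nudges, not hard rules.
        self.strength[pair] += 0.1 if user_approved else -0.1

    def related(self, pair, threshold=0.2):
        return self.strength[pair] >= threshold

rl = RelationLearner()
pair = rl.propose("cat", "pet")
for _ in range(3):
    rl.feedback(pair, user_approved=True)
assert rl.related(pair)
```

A threshold keeps one or two approvals from committing the program to a relation; repeated reinforcement is what promotes it.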
> >>> >> >
> >>> >> > I believe that a simple piece of information, like a simple
> >>> >> > concept, has to be associated with hundreds or thousands of
> >>> >> > other simple pieces of information. I also believe that the
> >>> >> > analysis of some input has to be matched against an imaginative
> >>> >> > projection (including the projection of previously learned
> >>> >> > knowledge) in order to build a better foundation of what the
> >>> >> > meaning of the input is and how it should be responded to. This
> >>> >> > is a complexity problem, so I also believe that extensive
> >>> >> > indexing has to be developed for the acquired knowledge. The
> >>> >> > indexing might, for example, be based on generalizations
> >>> >> > derived from the knowledge that it had acquired.
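A sketch of indexing acquired knowledge by derived generalizations (the generalizer function here is a hypothetical stand-in): each fact is filed under keys produced from it, so matching an input against projections need not scan all acquired knowledge.

```python
from collections import defaultdict

class GeneralizationIndex:
    """Extensive indexing over acquired knowledge: each simple piece of
    information is filed under generalizations derived from it, so a
    lookup touches only the relevant fraction of what has been learned."""
    def __init__(self, generalize):
        self.generalize = generalize          # fact -> iterable of keys
        self.index = defaultdict(set)

    def add(self, fact):
        for key in self.generalize(fact):
            self.index[key].add(fact)

    def lookup(self, key):
        return self.index[key]

# Hypothetical generalizer: index a (subject, verb, object) fact under
# its verb and under its subject's first word.
gen = lambda fact: {fact[1], fact[0].split()[0]}
idx = GeneralizationIndex(gen)
idx.add(("the cat", "chases", "the toy"))
idx.add(("the dog", "chases", "the cat"))
assert len(idx.lookup("chases")) == 2
```

In the described system the generalizations would themselves be learned from the acquired knowledge, rather than supplied as a fixed function.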
> >>> >> >
> >>> >> > Ben's example of a child learning about a pet is a good one.
> >>> >> > Of course a text-only AI/AGI program is not going to have the
> >>> >> > experiences a child can have with a pet. However, the program
> >>> >> > can be exposed to a lot of information about pets. I think this
> >>> >> > extensive knowledge, combined with trial-and-error interactions
> >>> >> > with a user-teacher, should make the program capable of good
> >>> >> > concept formation even though it will be different from a
> >>> >> > child's.
> >>> >> >
> >>> >> > Human beings often seem to deal with opposing and
> >>> >> > contradictory theories about the world with little bother. It
> >>> >> > is only when a contradictory theory leads directly to some
> >>> >> > obstacle, or the study of a situation starts to highlight the
> >>> >> > conflict in theories, that it becomes a problem. So I think
> >>> >> > this is a situation that can be described best with conceptual
> >>> >> > relativism. Even when we discover a contradiction we usually
> >>> >> > first explain it away as a variation that can occur. It takes
> >>> >> > some hard-headedness to assume that an unexpected variation
> >>> >> > might represent a contradiction in theories.
> >>> >> >
> >>> >> > I believe that reason-based reasoning is also important. So a
> >>> >> > pet-like object might be visually noticed in a room based on
> >>> >> > its features and actions. If the animal or object is seen
> >>> >> > frequently and it stands out against the background, a concept
> >>> >> > about it will be developed using concepts about the features
> >>> >> > and actions of other pets.
> >>> >> >
> >>> >> > Finally, let me add one more thing. Concepts may represent or
> >>> >> > refer to objects, but they can also play functional roles. So
> >>> >> > while a conceptual function prototype might be sufficient to
> >>> >> > potentially represent any kind of conceptual relation, I
> >>> >> > believe it is more to the point to say that the program must be
> >>> >> > capable of deriving conceptual function prototypes in response
> >>> >> > to the events it observes in the IO data environment. Let me
> >>> >> > draw a parallel. The argument can be made that any program is a
> >>> >> > system of yes-no questions and responses. But that doesn't mean
> >>> >> > that programmers could effectively use a programming language
> >>> >> > that was designed solely on that principle. Similarly, I
> >>> >> > believe that an AGI program has to be designed to implement the
> >>> >> > eventual formation of conceptual function prototypes and to be
> >>> >> > prepared to handle their application and development. Even if I
> >>> >> > am unable to figure out how the program could soundly derive
> >>> >> > functional prototypes (dynamically) I can use the idea in
> >>> >> > imaginative projections. The reason dynamic functional
> >>> >> > prototypes are so important is that if concepts become
> >>> >> > structurally (or abstractly) specialized, which is part of my
> >>> >> > theory, then there will probably be a need for new kinds of
> >>> >> > conceptual relations to generalize across them. I think this
> >>> >> > makes sense, and this kind of reasoning comes almost directly
> >>> >> > from speculation about the consequences of conceptual
> >>> >> > relativism as I see it.
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> > Jim Bromer
> >>> >> >
> >>> >> >
> >>> >> > On Sun, Jan 4, 2015 at 2:25 PM, Peter Voss <[email protected]> wrote:
> >>> >> >> I would find it useful if you could provide one or two
> >>> >> >> specific examples of concepts being derived using existing
> >>> >> >> concepts -- not the mechanics, but situations.
> >>> >> >>
> >>> >> >> Best,
> >>> >> >>
> >>> >> >> Peter
> >>> >> >>
> >>> >> >> -----Original Message-----
> >>> >> >> From: Jim Bromer via AGI [mailto:[email protected]]
> >>> >> >> Sent: Sunday, January 04, 2015 10:52 AM
> >>> >> >> ...
> >>> >> >> I was asked if the differences of my theories from the
> >>> >> >> mainstream theories and the theories behind the AI/AGI
> >>> >> >> frameworks that are being devised are just a matter of
> >>> >> >> semantics. I don't think they are....
> >>> >> >>
> >>> >> >> A true AGI program will need to derive concepts about its
> >>> >> >> interactions with the IO data environment that it is exposed
> >>> >> >> to. It is going to take other concepts to interpret a
> >>> >> >> concept....
> >>> >> >>
> >>> >>
> >>> >>
> >>> >> -------------------------------------------
> >>> >> AGI
> >>> >> Archives: https://www.listbox.com/member/archive/303/=now
> >>> >> RSS Feed:
> >>> >> https://www.listbox.com/member/archive/rss/303/23050605-2da819ff
> >>> >> Modify Your Subscription:
> >>> >>
> >>> >>
> https://www.listbox.com/member/?&;
> >>> >> Powered by Listbox: http://www.listbox.com
> >>> >
> >>> >
> >>
> >>
>


