I am not a member of ResearchGate.

Jim Bromer

On Tue, Jun 26, 2018 at 2:15 AM, Nanograte Knowledge Technologies via AGI <
agi@agi.topicbox.com> wrote:

> https://www.researchgate.net/publication/4363016_Last-mile_knowledge_engineering_Quest_for_the_holy_grail
>
> Last-mile knowledge engineering: Quest for the holy grail?
> "The problem of reliably structuring unseen knowledge, at scale, persists
> within systems engineering. An emergence-based method was developed to
> test the theory of applying de-abstraction reasoning to tacit-knowledge
> engineering."
>
> ------------------------------
> *From:* Jim Bromer via AGI <agi@agi.topicbox.com>
> *Sent:* Monday, 25 June 2018 10:32 PM
>
> *To:* AGI
> *Subject:* Re: [agi] Discrete Methods are Not the Same as Logic
>
> Please provide a link to the method you are talking about.
> Jim Bromer
>
>
> On Sun, Jun 24, 2018 at 6:59 AM, Nanograte Knowledge Technologies via
> AGI <agi@agi.topicbox.com> wrote:
> > Convention is a language. The program has to find a way to understand
> > what people say, and not say. It has to be able to learn the deeper
> > meaning within the human conversation, and systemically get to the heart
> > of every matter. It has to do this in the most-true manner, utilizing
> > evidence-based objectivity where possible. That method exists. It's the
> > one I shared here. That's exactly what it does. What it needs to become
> > automated is the development of its own GUI, as translator.
> > ________________________________
> > From: Jim Bromer via AGI <agi@agi.topicbox.com>
> > Sent: Saturday, 23 June 2018 3:55 PM
> >
> > To: AGI
> > Subject: Re: [agi] Discrete Methods are Not the Same as Logic
> >
> > I am thinking of a program that would learn by communicating with
> > people using language. It would learn from interacting with people.
> > The problem with that strategy is that it would tend to acquire
> > superficial knowledge. It would, however, be required to do some true
> > learning. One reason is that a person cannot think of all the
> > relations and implicit categories that an intelligent entity would
> > have to rely on. Secondly, we cannot, at this time, understand all the
> > sorts of knowledge items that it would need to gain greater
> > understanding.
> > It would not be given predetermined categories other than some default
> > second-level abstract categories. These second-level abstractions
> > might be concerned with abstractions of relations found in discrete
> > relationships that would be expected to be found in networks of related
> > information. It would have to work around the complexities that might
> > develop. I am not talking about pure logic but discrete learning, so
> > the NP problem is not a problem. The "discrete networks" would also
> > include weighted reasoning, of course. I am just saying that weighted
> > reasoning isn't necessary, but that discrete learning, learning by
> > using ideas and developing principles of thought, is important.
> > But I have to be able to develop this as an extremely simple
> > programming project that will quickly show some simple results (like
> > feasibility tests) or else I am not going to have anything to start
> > with.
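> >
> > Something like this toy Python sketch is the kind of feasibility test I
> > mean (the class, the relation triples, and the two default second-level
> > categories are only illustrative placeholders, not a worked-out design):
> >
> > # Hypothetical sketch: a network of discrete relations. The program
> > # starts with a few second-level abstract categories and must discover
> > # concrete categories from its interactions.
> > from collections import defaultdict
> >
> > class DiscreteNet:
> >     def __init__(self):
> >         # default second-level abstractions; everything else is learned
> >         self.categories = {"relation", "entity"}
> >         # (subject, relation, object) -> weight; weights are optional
> >         self.edges = defaultdict(float)
> >
> >     def learn(self, subj, rel, obj, weight=1.0):
> >         # a discrete fact learned from interaction; the weight is
> >         # supplementary, not essential, to the discrete structure
> >         self.edges[(subj, rel, obj)] += weight
> >         self.categories.add(rel)  # the relation becomes a category
> >
> >     def related(self, subj):
> >         # everything the network currently relates to a subject
> >         return [(r, o) for (s, r, o) in self.edges if s == subj]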
> >
> > Jim Bromer
> >
> >
> > On Sat, Jun 23, 2018 at 8:51 AM, Nanograte Knowledge Technologies via
> > AGI <agi@agi.topicbox.com> wrote:
> >> Jim
> >>
> >> I agree with making things simple, but one should not make it more
> >> simple than necessary. Any algorithm relying on deabstraction to
> >> provide proof of true learning would be highly complex. There's no
> >> simple solution to that problem. However, I'm enjoying your sentiment
> >> that, within deabstraction, even complexity should become relative over
> >> time. Maybe one day, the machine would've learned how to invent
> >> deabstraction algorithms until it became a simple matter of instinct.
> >>
> >> Since when do human beings discover all their learning by themselves?
> >> That's a fallacy. An AGI platform also does not have to discover all of
> >> its learning by itself. It can be taught until such time as it can
> >> learn how to organize resources in order to teach itself and learn via
> >> reflection.
> >>
> >> Rob
> >> ________________________________
> >> From: Jim Bromer via AGI <agi@agi.topicbox.com>
> >> Sent: Saturday, 23 June 2018 2:41 AM
> >>
> >> To: AGI
> >> Subject: Re: [agi] Discrete Methods are Not the Same as Logic
> >>
> >> Maybe I should use a name different from judgement. Reflection?
> >> Insightful reflection. The depth of the insight would be relative to
> >> how much knowledge, related to the questions being examined, was
> >> available. So in the primitive model this insight would not be very
> >> good and the program would have to be dependent on what the teacher
> >> could convey to it. But insight would have to be based on putting
> >> different kinds of information together. Novel insight might be
> >> reinforced simply by being in the ballpark; it would not have to be
> >> perfect as long as it was tagging along somewhere within the subject
> >> matter being discussed or described, or within the boundaries of
> >> understanding something about a situation that was occurring. I think
> >> different AGIs would have to be different if they were thinking for
> >> themselves - to some extent.
> >> Jim Bromer
> >>
> >>
> >> On Fri, Jun 22, 2018 at 3:10 PM, Mike Archbold via AGI
> >> <agi@agi.topicbox.com> wrote:
> >>> Judgments are fascinating. It seems like most approaches are some
> >>> variation of reinforcement learning. What have you got in mind? One
> >>> thought from Hegel which always sticks in my mind is that a "judgment
> >>> could be other than what it is." So just think about that last
> >>> sentence. How on earth could anyone automate that? But, more so, two
> >>> distinct AGIs would always be different on that account.
> >>>
> >>> On 6/22/18, Jim Bromer via AGI <agi@agi.topicbox.com> wrote:
> >>>> I need to start with something that is extremely simple and which will
> >>>> produce some kind of result pretty quickly. I have had various ideas
> >>>> about it for some time but what I see now is that a necessary
> >>>> advancement for AI would have to exhibit some kind of judgment about
> >>>> what it learns about. I realized the importance of making a program
> >>>> that could learn new ways of thinking. Since I believe that
> >>>> categorical reasoning is important, that means it would not
> >>>> only have to use abstractions but would also have to be able to
> >>>> discover abstractions of its own. This does not seem too difficult
> >>>> because I am not unreasonably requiring it to be a
> >>>> historical singularity inflexion point. I need to start with
> >>>> something simple that demonstrates an ability for true learning. What
> >>>> I see now is that it also has to exhibit some kind of simple
> >>>> judgement. I need to come up with simple judgement algorithms. I
> >>>> cannot get started unless I can come up with simple feasible models
> >>>> that I can test.
> >>>> I respectfully disagree with you about one thing. The elaboration of
> >>>> an extensive framework and management system is, in my opinion, a
> >>>> waste of time. It is like planning an AI program that will create AGI
> >>>> for you completely on its own. It might be OK to think about such a
> >>>> thing, but it is no place to start for an actual programming
> >>>> project. I have to start with something that is very simple and which
> >>>> can show some immediate results. For me, simplification is a necessity
> >>>> but it is also necessary to avoid the wrong kinds of simplification.
> >>>> Jim Bromer
> >>>>
> >>>>
> >>>> On Fri, Jun 22, 2018 at 12:13 AM, Nanograte Knowledge Technologies via
> >>>> AGI <agi@agi.topicbox.com> wrote:
> >>>>> Jim, I think for this kind of reasoning to evolve, one would always
> >>>>> have to return to an ontological platform. For example, for
> >>>>> reasoning, one would require a meta-methodology to reason
> >>>>> effectively with. For selectively forgetting and learning, an
> >>>>> evolution-based methodology is required. For managing Logic, one
> >>>>> would need a suitable framework and management system, and so on.
> >>>>> These are all critical components, or nodes, that would have to
> >>>>> exist for self-optimized reasoning functionality to become
> >>>>> spontaneous. The real IP lies not only in the methods, in the sense
> >>>>> of AI apps.
> >>>>>
> >>>>> You stated: "...DL story is compelling, it is not paying out to
> >>>>> stronger AI (Near AGI)..."
> >>>>>>>> Is it possible that AGI is an outcome, an act of becoming, and
> >>>>>>>> not a discrete objective at all?
> >>>>>
> >>>>> Rob
> >>>>> ________________________________
> >>>>> From: Jim Bromer via AGI <agi@agi.topicbox.com>
> >>>>> Sent: Thursday, 21 June 2018 5:20 PM
> >>>>> To: AGI
> >>>>> Subject: Re: [agi] Discrete Methods are Not the Same as Logic
> >>>>>
> >>>>> Symbol Based Reasoning is discrete, but a computer can use discrete
> >>>>> data that would not make sense to us, so the term symbolic might be
> >>>>> misleading. I am not opposed to weighted reasoning (like neural
> >>>>> networks or Bayesian Networks), and I think reasoning has to use
> >>>>> networks of relations. If weighted networks can be thought of as
> >>>>> symbolic networks, then that suggests that symbols may not be
> >>>>> discrete (as distinct from Neural Networks). I just think that there is
> >>>>> something missing with DL, and while the Hinton...DL story is
> >>>>> compelling, it is not paying out to stronger AI (Near AGI). For
> >>>>> example, I think that symbolic reasoning which is able to change its
> >>>>> categorical bases of reasoning is something that is badly lacking in
> >>>>> Discrete Learning. You don't want your program to forget everything
> >>>>> it has learned just because some doofus tells it to, and you do not
> >>>>> want it to write over the most effective methods it uses to learn
> >>>>> just to deal with some new method of learning. So that, in my opinion, is
> >>>>> where the secret may have been hiding. A program that is capable of
> >>>>> learning something new must be capable of losing its more primitive
> >>>>> learning techniques without wiping out the good stuff that it had
> >>>>> previously acquired. This requires some working wisdom.
> >>>>> I have been thinking about these ideas for a long time, but now I
> >>>>> feel that I have a better understanding of how this insight might be
> >>>>> used to point to a simple jumping-off point.
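> >>>>>
> >>>>> A crude sketch of that kind of guard, assuming each learning method
> >>>>> carries a running effectiveness score (the names and the threshold
> >>>>> are only illustrative):
> >>>>>
> >>>>> # Hypothetical guard: a learning method is only overwritten when a
> >>>>> # newcomer has demonstrably outperformed it, so the effective
> >>>>> # methods already acquired are never wiped out on demand.
> >>>>> def maybe_replace(methods, name, candidate, cand_score, min_gain=0.1):
> >>>>>     old_score = methods.get(name, (float("-inf"), None))[0]
> >>>>>     if cand_score > old_score + min_gain:
> >>>>>         methods[name] = (cand_score, candidate)  # retire the old one
> >>>>>     # otherwise keep the previously proven method untouched
> >>>>>
> >>>>> methods = {}
> >>>>> maybe_replace(methods, "categorize", lambda x: x, 0.6)  # adopted
> >>>>> maybe_replace(methods, "categorize", lambda x: x, 0.5)  # rejected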
> >>>>> Jim Bromer
> >>>>>
> >>>>>
> >>>>> On Thu, Jun 21, 2018 at 2:48 AM, Mike Archbold via AGI
> >>>>> <agi@agi.topicbox.com> wrote:
> >>>>>> So, by "discrete reasoning" I think you kind of mean more or less
> "not
> >>>>>> neural networks" or I think some people say, or used to say NOT
> "soft
> >>>>>> computing" to mean, oh hell!, we aren't really sure how it works, or
> >>>>>> we can't create what looks like a clear, more or less deterministic
> >>>>>> program like in the old days etc....  Really, the challenge a lot of
> >>>>>> people, myself included, have taken up is how to fuse discrete (I
> >>>>>> simply call it "symbolic", although nn have symbols, typically you
> >>>>>> don't see them except as input and output) and DL which is such a
> good
> >>>>>> way to approach combinatorial explosion.
> >>>>>>
> >>>>>> To me, reasoning is mostly conscious, and kind of like the way an
> >>>>>> expert system chains, logically. The understanding is something
> >>>>>> else, riding kind of below it and less conscious, but it has all
> >>>>>> the common-sense rules of reality which constrain the upper-level
> >>>>>> reasoning, which I think is logical. "If the car won't start, the
> >>>>>> battery is dead" would be the conscious part, but the understanding
> >>>>>> would include such mundane details as "a car has one battery" and
> >>>>>> "you can see the car but it is in space which is not the same thing
> >>>>>> as you" and "if you turn around to look at the battery the car is
> >>>>>> still there," and all such details which lead to an understanding.
> >>>>>> But understanding is an incredibly tough thing to make a science
> >>>>>> out of, although I see papers and conference topics on it lately.
> >>>>>>
> >>>>>> On 6/20/18, Jim Bromer via AGI <agi@agi.topicbox.com> wrote:
> >>>>>>> I was just reading something about the strong disconnect between
> >>>>>>> our actions and our thoughts about the principles and reasons we
> >>>>>>> use to describe why we react the way we do. This may be so, but it
> >>>>>>> does not show how we come to understand basic ideas about the
> >>>>>>> world. This attempt to make a nearly total disconnect between
> >>>>>>> reasons and our actual reactions misses something when it comes to
> >>>>>>> explaining how we know anything, including how we learn to make
> >>>>>>> decisions about something. One way to get around this problem is
> >>>>>>> to say that it all takes place in neural networks which are not
> >>>>>>> open to insight about the details. But there is another
> >>>>>>> explanation which credits discrete reasoning with the ability to
> >>>>>>> provide insight and direction: we are not able to consciously
> >>>>>>> analyze all the different events that are occurring at a moment,
> >>>>>>> so we are probably reacting to many different events which we
> >>>>>>> could discuss as discrete events if we had the luxury of having
> >>>>>>> them all brought to our conscious attention. So logic and personal
> >>>>>>> principles are ideals which we can use to examine our reactions -
> >>>>>>> and our insights - about what is going on around us, but it is
> >>>>>>> unlikely that we can catalogue all the events that surround us and
> >>>>>>> (partly) cause us to react the way we do.
> >>>>>>>
> >>>>>>> Jim Bromer
> >>>>>>>
> >>>>>>> On Wed, Jun 20, 2018 at 6:06 AM, Nanograte Knowledge Technologies
> >>>>>>> via AGI <agi@agi.topicbox.com> wrote:
> >>>>>>>
> >>>>>>>> "As Julian Jaynes put it in his iconic book *The Origin of
> >>>>>>>> Consciousness
> >>>>>>>> in the Breakdown of the Bicameral Mind*
> >>>>>>>>
> >>>>>>>> Reasoning and logic are to each other as health is to medicine,
> or —
> >>>>>>>> better — as conduct is to morality. Reasoning refers to a gamut of
> >>>>>>>> natural
> >>>>>>>> thought processes in the everyday world. Logic is how we ought to
> >>>>>>>> think
> >>>>>>>> if
> >>>>>>>> objective truth is our goal — and the everyday world is very
> little
> >>>>>>>> concerned with objective truth. Logic is the science of the
> >>>>>>>> justification
> >>>>>>>> of conclusions we have reached by natural reasoning. My point here
> >>>>>>>> is
> >>>>>>>> that,
> >>>>>>>> for such natural reasoning to occur, consciousness is not
> necessary.
> >>>>>>>> The
> >>>>>>>> very reason we need logic at all is because most reasoning is not
> >>>>>>>> conscious
> >>>>>>>> at all."
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> https://cameroncounts.wordpress.com/2010/01/03/mathematics-and-logic/
> >>>>>>>> Mathematics and logic | Peter Cameron's Blog
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> ------------------------------
> >>>>>>>> *From:* Jim Bromer via AGI <agi@agi.topicbox.com>
> >>>>>>>> *Sent:* Wednesday, 20 June 2018 12:01 PM
> >>>>>>>> *To:* AGI
> >>>>>>>> *Subject:* Re: [agi] Discrete Methods are Not the Same as Logic
> >>>>>>>>
> >>>>>>>> Discrete statements are used in programming languages. So a
> >>>>>>>> symbol (a symbol phrase or sentence) can be used to represent
> >>>>>>>> both data and programming actions. Discrete Reasoning might be
> >>>>>>>> compared to something that has the potential to be more like an
> >>>>>>>> algorithm. (Of course, operational statements may be retained as
> >>>>>>>> data which can be run when needed.)
> >>>>>>>> For an example of the value of Discrete Methods, let's suppose
> >>>>>>>> someone wanted more control over a neural network. Trying to look
> >>>>>>>> for logic in a neural network does not really make all that much
> >>>>>>>> sense if you want to find relationships between actions on the
> >>>>>>>> net and output. Using Discrete Methods makes a lot of sense. You
> >>>>>>>> might want to try fiddling with the weights of some of the nodes
> >>>>>>>> as the NN is running. If certain effects can be described (or
> >>>>>>>> sensed by some algorithm), then describing what was done and what
> >>>>>>>> effects were observed would be the next step in the research.
> >>>>>>>> Researchers are not usually able to start with detailed knowledge
> >>>>>>>> of exactly what is going on, so they need to start with
> >>>>>>>> descriptions of some actions they took and of what effects were
> >>>>>>>> observed. If these actions and effects can be categorized in some
> >>>>>>>> way, then the chance that more effective observations will be
> >>>>>>>> obtained will increase.
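> >>>>>>>>
> >>>>>>>> A toy version of that experiment in Python with numpy (the tiny
> >>>>>>>> network and the perturbation size are placeholders, not a real
> >>>>>>>> research setup):
> >>>>>>>>
> >>>>>>>> # Hypothetical probe: nudge one weight while the net is running,
> >>>>>>>> # record the observed effect on the output, then restore it.
> >>>>>>>> import numpy as np
> >>>>>>>>
> >>>>>>>> rng = np.random.default_rng(0)
> >>>>>>>> W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
> >>>>>>>> x = rng.normal(size=4)
> >>>>>>>>
> >>>>>>>> def run(W1, W2, x):
> >>>>>>>>     return np.tanh(x @ W1) @ W2
> >>>>>>>>
> >>>>>>>> baseline = run(W1, W2, x)
> >>>>>>>> log = []  # descriptions of actions taken and effects observed
> >>>>>>>> for i in range(W1.shape[0]):
> >>>>>>>>     for j in range(W1.shape[1]):
> >>>>>>>>         W1[i, j] += 0.1                     # action taken
> >>>>>>>>         delta = run(W1, W2, x) - baseline   # effect observed
> >>>>>>>>         W1[i, j] -= 0.1                     # restore the weight
> >>>>>>>>         log.append(((i, j), delta))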
> >>>>>>>> Jim Bromer
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Tue, Jun 19, 2018 at 11:12 PM, Mike Archbold via AGI
> >>>>>>>> <agi@agi.topicbox.com> wrote:
> >>>>>>>> > It sounds like you need both for AI, certainly there is always a
> >>>>>>>> > place
> >>>>>>>> > for logic. What's "discrete reasoning"?
> >>>>>>>> >
> >>>>>>>> > On 6/18/18, Jim Bromer via AGI <agi@agi.topicbox.com> wrote:
> >>>>>>>> >> I am wondering about how Discrete Reasoning is different from
> >>>>>>>> >> Logic. I assume that Discrete Reasoning could be described,
> >>>>>>>> >> modelled or represented by Logic, but as a more practical
> >>>>>>>> >> method, logic would be a tool to use with Discrete Reasoning
> >>>>>>>> >> rather than a representational substrate.
> >>>>>>>> >>
> >>>>>>>> >> Discrete Reasons and Discrete Reasoning can have meaning over
> >>>>>>>> >> and above the True/False values of Logic (and the True/False
> >>>>>>>> >> relationships between combinations of Propositions).
> >>>>>>>> >>
> >>>>>>>> >> Discrete Reasoning can have combinations that do not have a
> >>>>>>>> >> meaning or which do not have a clear meaning. This is one of
> >>>>>>>> >> the most important distinctions. (A toy illustration follows
> >>>>>>>> >> at the end of this message.)
> >>>>>>>> >>
> >>>>>>>> >> It can be used in various combinations of hierarchies and/or in
> >>>>>>>> >> non-hierarchies.
> >>>>>>>> >>
> >>>>>>>> >> It can, for the most part, be used more freely with other
> >>>>>>>> >> modelling
> >>>>>>>> >> methods.
> >>>>>>>> >>
> >>>>>>>> >> Discrete Reasoning may be Context Sensitive in ways that
> >>>>>>>> >> produce ambiguities, both useful and confusing.
> >>>>>>>> >>
> >>>>>>>> >> Discrete Reasoning can be Active. So a statement about some
> >>>>>>>> >> subject
> >>>>>>>> >> might, for one example, suggest that you should change your
> >>>>>>>> >> thinking
> >>>>>>>> >> about (or representation of) the subject in a way that goes
> >>>>>>>> >> beyond
> >>>>>>>> >> some explicit propositional description about some object.
> >>>>>>>> >>
> >>>>>>>> >> You may be able to show that Logic can be used in a way that
> >>>>>>>> >> allows for all these effects, but I believe that there is a
> >>>>>>>> >> strong argument for focusing on Discrete Reasoning, as opposed
> >>>>>>>> >> to Logic, when you are working directly on AI.
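> >>>>>>>> >>
> >>>>>>>> >> A toy Python illustration of the "combinations without a clear
> >>>>>>>> >> meaning" point (the table entries are made up): in Logic every
> >>>>>>>> >> well-formed combination evaluates to True or False, but a
> >>>>>>>> >> discrete table can simply leave a combination undefined.
> >>>>>>>> >>
> >>>>>>>> >> # Hypothetical: some symbol combinations carry a meaning,
> >>>>>>>> >> # others none at all; None means "no meaning", not False.
> >>>>>>>> >> MEANING = {
> >>>>>>>> >>     ("battery", "dead"): "car will not start",
> >>>>>>>> >>     ("battery", "charged"): "car may start",
> >>>>>>>> >> }
> >>>>>>>> >>
> >>>>>>>> >> def interpret(pair):
> >>>>>>>> >>     return MEANING.get(pair)  # undefined combos return None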
> >>>>>>>> >>
> >>>>>>>> >> Jim Bromer

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tcc2adcdd20e1add4-Ma68c03e3db0b5ed46981a86b
Delivery options: https://agi.topicbox.com/groups
