Re: [agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread A.T. Murray via AGI
John Rose said:
> Prob. is only he can read the code!  LOL
Today I have been porting more JavaScript AI into Forth.

MP said:
> I’ve honestly tried reading his source and explanations.
Thank you for looking into it.

Mike Archbold said:
> Roll on Arthur...
http://ai.neocities.org/var.html -- now covers the variables for the "First Working AGI".

John Rose then said:
> I agree. Arthur, you need to elevate yourself man. The Elon Musks of the
> world are stealing all the thunder.

Like Abbie Hoffman saying "Steal this book," I say, "Steal this code".

Rolling on,

Arthur


> -----Original Message-----
>
> From: Mike Archbold via AGI 
> >
> > At least A.T. Murray is in the trenches chunking out code, unlike all of
> > our celebrities like Elon Musk and Bill Gates who, while they may have more
> > money, just write about it! Roll on Arthur...
> >
> 
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0eb019c4c2b48817-M9035f7a65317c8d478017772
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Blockchainifying Conscious Awareness

2018-06-21 Thread Nanograte Knowledge Technologies via AGI
Most interesting. So, the timespace continuum is effectively being split? I 
don't buy it. You cannot sustainably entangle one and not the other, not even 
as an antithetical exercise. You're just begging for your data to disappear 
into the Void.

From: johnr...@polyplexic.com 
Sent: Thursday, 21 June 2018 11:11 PM
To: AGI
Subject: Re: [agi] Blockchainifying Conscious Awareness

Oohh now this is what I'm talkin 'bout, get a little AGI PSI action goin' on in 
that blockchain consciousness proposal:

"Quantum Blockchain using entanglement in time"
https://arxiv.org/abs/1804.05979

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9353b0b8fd3894d8-Me694a1f7bfb12fc6024e4ef0
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Discrete Methods are Not the Same as Logic

2018-06-21 Thread Nanograte Knowledge Technologies via AGI
Jim, I think for this kind of reasoning to evolve, one would always have to 
return to an ontological platform. For example, for reasoning, one would 
require a meta-methodology to reason effectively with. For selectively 
forgetting and learning, an evolution-based methodology is required. For 
managing Logic, one would need a suitable framework and management system, and 
so on. These are all critical components, or nodes, that would have to exist 
for self-optimized reasoning functionality to become spontaneous. The real IP 
lies not only in the methods, in the sense of AI apps.

You stated: "...DL story is compelling it is not paying off in stronger AI 
(Near AGI)..."
>>> Is it possible that AGI is an outcome, an act of becoming, and not a 
>>> discrete objective at all?

Rob

From: Jim Bromer via AGI 
Sent: Thursday, 21 June 2018 5:20 PM
To: AGI
Subject: Re: [agi] Discrete Methods are Not the Same as Logic

Symbol Based Reasoning is discrete, but a computer can use discrete
data that would not make sense to us, so the term symbolic might be
misleading. I am not opposed to weighted reasoning (like neural
networks or Bayesian Networks) and I think reasoning has to use
networks of relations. If weighted networks can be thought of as a
symbolic network then that suggests that symbols may not be discrete
(as different from Neural Networks). I just think that there is
something missing with DL, and while the Hinton...DL story is
compelling it is not paying off in stronger AI (Near AGI). For
example, I think that symbolic reasoning which is able to change its
categorical bases of reasoning is something that is badly lacking in
Discrete Learning. You don't want your program to forget everything it
has learned just because some doofus tells it to, and you do not want
it to write over the most effective methods it uses to learn just to
deal with some new method of learning. So that, in my opinion, is
where the secret may have been hiding. A program that is capable of
learning something new must be capable of losing its more primitive
learning techniques without wiping out the good stuff that it had
previously acquired. This requires some working wisdom.
I have been thinking about these ideas for a long time, but now I feel
that I have a better understanding of how this insight might be used
to point to a simple jumping-off point.
Jim Bromer


On Thu, Jun 21, 2018 at 2:48 AM, Mike Archbold via AGI
 wrote:
> So, by "discrete reasoning" I think you kind of mean more or less "not
> neural networks" or I think some people say, or used to say NOT  "soft
> computing" to mean, oh hell!, we aren't really sure how it works, or
> we can't create what looks like a clear, more or less deterministic
> program like in the old days etc  Really, the challenge a lot of
> people, myself included, have taken up is how to fuse discrete (I
> simply call it "symbolic", although nn have symbols, typically you
> don't see them except as input and output) and DL which is such a good
> way to approach combinatorial explosion.
>
> To me reasoning is mostly conscious, and kind of like the way an
> expert  system chains, logically. The understanding is something else
> riding kind of below it and less conscious but it has all the common
> sense rules of reality which constrain the upper level reasoning which
> I think is logical, like "if car won't start battery is dead" would be
> the conscious part but the understanding would include such mundane
> details as "a car has one battery" and "you can see the car but it is
> in space which is not the same thing as you" and "if you turn around
> to look at the battery the car is still there" and all such details
> which lead to an understanding. But understanding is an incredibly
> tough thing to make a science out of, although I see papers lately and
> conference topics on it.
>
> On 6/20/18, Jim Bromer via AGI  wrote:
>> I was just reading something about the strong disconnect between our
>> actions and our thoughts about the principles and reasons we use to
>> describe why we react the way we do. This may be so, but this does not show
>> how we come to understand basic ideas about the world. This attempt to make
>> a nearly total disconnect between reasons and our actual reactions misses
>> something when it comes to explaining how we know anything, including how
>> we learn to make decisions about something. One way to get around this
>> problem is to say that it all takes place in neural networks which are not
>> open to insight about the details. But there is another explanation which
>> credits discrete reasoning with the ability to provide insight and
>> direction and that is we are not able to consciously analyze all the
>> different events that are occurring at a moment and so we probably are
>> reacting to many different events which we could discuss as discrete events
>> if we had the luxury to have them all brought to our conscious attention.
>> So logic and personal principles are ideals which we can use to examine our
>> reactions - and our insights - about what is going on around us, but it
>> is unlikely that we can catalogue all the events that surround us and
>> (partly) cause us to react the way we do.

RE: [agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread John Rose
Ehm, "chunking out code"... that's, ah, yeah, a good way to describe it.

I agree. Arthur, you need to elevate yourself man. The Elon Musks of the world 
are stealing all the thunder.

John

> -----Original Message-----
> From: Mike Archbold via AGI 
> 
> At least A.T. Murray is in the trenches chunking out code, unlike all of our
> celebrities like Elon Musk and Bill Gates who, while they may have more
> money, just write about it! Roll on Arthur...
> 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0eb019c4c2b48817-Mfd8f1a0f69610c6b54b592c0
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread Mike Archbold via AGI
At least A.T. Murray is in the trenches chunking out code, unlike all
of our celebrities like Elon Musk and Bill Gates who, while they may
have more money, just write about it! Roll on Arthur...

On 6/21/18, MP via AGI  wrote:
> I’ve honestly tried reading his source and explanations.
>
> He loses me with this "perpendicular mental fiber" stuff.
>
> Even with my cruddy JavaScript to Java translation I still don’t get it...
> but it’s something. At least it talks through a ton of weird code.
>
> Sent from ProtonMail Mobile
>
> On Thu, Jun 21, 2018 at 4:48 PM,  wrote:
>
>> Oh OK everybody you can throw away your keyboards, Mentifex created the
>> first AGI...
>>
>> Prob. is only he can read the code!  LOL
>>
>> John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0eb019c4c2b48817-Ma5f8d804ba7b5274d418f44e
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Discrete Methods are Not the Same as Logic

2018-06-21 Thread Mike Archbold via AGI
Jim, I see what you mean about symbols vs. other discrete data and
structures etc. I suppose an "unreadable" piece of data could still be
converted into symbols, though. An analogy would be that when we input
a sound, it is converted to something, a percept... an AGI could input
some unreadable data and convert it to a symbolic percept. So by
the time the program is running it is symbolic. I suppose my
conviction here is that, as I think you are leading up to, there
still is a huge place for this type of processing.

On 6/21/18, Jim Bromer via AGI  wrote:
> Symbol Based Reasoning is discrete, but a computer can use discrete
> data that would not make sense to us, so the term symbolic might be
> misleading. I am not opposed to weighted reasoning (like neural
> networks or Bayesian Networks) and I think reasoning has to use
> networks of relations. If weighted networks can be thought of as a
> symbolic network then that suggests that symbols may not be discrete
> (as different from Neural Networks). I just think that there is
> something missing with DL, and while the Hinton...DL story is
> compelling it is not paying off in stronger AI (Near AGI). For
> example, I think that symbolic reasoning which is able to change its
> categorical bases of reasoning is something that is badly lacking in
> Discrete Learning. You don't want your program to forget everything it
> has learned just because some doofus tells it to, and you do not want
> it to write over the most effective methods it uses to learn just to
> deal with some new method of learning. So that, in my opinion, is
> where the secret may have been hiding. A program that is capable of
> learning something new must be capable of losing its more primitive
> learning techniques without wiping out the good stuff that it had
> previously acquired. This requires some working wisdom.
> I have been thinking about these ideas for a long time, but now I feel
> that I have a better understanding of how this insight might be used
> to point to a simple jumping-off point.
> Jim Bromer
>
>
> On Thu, Jun 21, 2018 at 2:48 AM, Mike Archbold via AGI
>  wrote:
>> So, by "discrete reasoning" I think you kind of mean more or less "not
>> neural networks", or what some people say, or used to say, NOT "soft
>> computing", meaning: oh hell, we aren't really sure how it works, or
>> we can't create what looks like a clear, more or less deterministic
>> program like in the old days, etc. Really, the challenge a lot of
>> people, myself included, have taken up is how to fuse discrete (I
>> simply call it "symbolic", although NNs have symbols, typically you
>> don't see them except as input and output) and DL, which is such a good
>> way to approach combinatorial explosion.
>>
>> To me reasoning is mostly conscious, and kind of like the way an
>> expert system chains, logically. The understanding is something else
>> riding kind of below it, and less conscious, but it has all the common
>> sense rules of reality which constrain the upper-level reasoning, which
>> I think is logical. "If car won't start, battery is dead" would be
>> the conscious part, but the understanding would include such mundane
>> details as "a car has one battery" and "you can see the car but it is
>> in space which is not the same thing as you" and "if you turn around
>> to look at the battery the car is still there" and all such details
>> which lead to an understanding. But understanding is an incredibly
>> tough thing to make a science out of, although I see papers lately and
>> conference topics on it.
>>
>> On 6/20/18, Jim Bromer via AGI  wrote:
>>> I was just reading something about the strong disconnect between our
>>> actions and our thoughts about the principles and reasons we use to
>>> describe why we react the way we do. This may be so, but this does not
>>> show how we come to understand basic ideas about the world. This attempt
>>> to make a nearly total disconnect between reasons and our actual
>>> reactions misses something when it comes to explaining how we know
>>> anything, including how we learn to make decisions about something. One
>>> way to get around this problem is to say that it all takes place in
>>> neural networks which are not open to insight about the details. But
>>> there is another explanation which credits discrete reasoning with the
>>> ability to provide insight and direction, and that is that we are not
>>> able to consciously analyze all the different events that are occurring
>>> at a moment, and so we probably are reacting to many different events
>>> which we could discuss as discrete events if we had the luxury to have
>>> them all brought to our conscious attention.
>>> So logic and personal principles are ideals which we can use to examine
>>> our reactions - and our insights - about what is going on around us, but
>>> it is unlikely that we can catalogue all the events that surround us and
>>> (partly) cause us to react the way we do.

Re: [agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread MP via AGI
I’ve honestly tried reading his source and explanations.

He loses me with this "perpendicular mental fiber" stuff.

Even with my cruddy JavaScript to Java translation I still don’t get it... but 
it’s something. At least it talks through a ton of weird code.

Sent from ProtonMail Mobile

On Thu, Jun 21, 2018 at 4:48 PM,  wrote:

> Oh OK everybody you can throw away your keyboards, Mentifex created the first 
> AGI...
>
> Prob. is only he can read the code!  LOL
>
> John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0eb019c4c2b48817-M552ad20e36b456bc6dcfc6a8
Delivery options: https://agi.topicbox.com/groups


[agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread johnrose
Oh OK everybody you can throw away your keyboards, Mentifex created the first 
AGI...

Prob. is only he can read the code!  LOL

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0eb019c4c2b48817-Md4b0bb7b18473f7a90c179c3
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Blockchainifying Conscious Awareness

2018-06-21 Thread johnrose
Here are a few more blockchain distributed computing videos. Applicable? Maybe. 
Entertaining? Yes.

The networks are probably laggy, since some just use unused machine resources, 
like BOINC, but allow buying and selling via coins or tokens. But not every AGI 
component needs hyper-low-latency computing distribution.

Iagon
https://youtu.be/FdmCfSBkUyI

Elastic
https://www.youtube.com/watch?v=hejEY9HEFO0

Definity
https://youtu.be/kyCfGRZaDnw

Zilliqa
https://www.youtube.com/watch?v=gQiG_ilPGG0

iExec
https://www.youtube.com/watch?v=07ojusto6s4

AION
https://www.youtube.com/watch?v=pFkPiL-dtDY

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9353b0b8fd3894d8-M135dcb22254c417988565a53
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Discrete Methods are Not the Same as Logic

2018-06-21 Thread Jim Bromer via AGI
Symbol Based Reasoning is discrete, but a computer can use discrete
data that would not make sense to us, so the term symbolic might be
misleading. I am not opposed to weighted reasoning (like neural
networks or Bayesian Networks) and I think reasoning has to use
networks of relations. If weighted networks can be thought of as a
symbolic network then that suggests that symbols may not be discrete
(as different from Neural Networks). I just think that there is
something missing with DL, and while the Hinton...DL story is
compelling it is not paying off in stronger AI (Near AGI). For
example, I think that symbolic reasoning which is able to change its
categorical bases of reasoning is something that is badly lacking in
Discrete Learning. You don't want your program to forget everything it
has learned just because some doofus tells it to, and you do not want
it to write over the most effective methods it uses to learn just to
deal with some new method of learning. So that, in my opinion, is
where the secret may have been hiding. A program that is capable of
learning something new must be capable of losing its more primitive
learning techniques without wiping out the good stuff that it had
previously acquired. This requires some working wisdom.
I have been thinking about these ideas for a long time, but now I feel
that I have a better understanding of how this insight might be used
to point to a simple jumping-off point.
Jim Bromer
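
A minimal Python sketch of that last point (the class and its API are 
illustrative, not from any actual AGI system): the agent can retire a 
primitive learning technique without erasing what that technique already 
taught it, and wholesale forgetting is simply not an exposed operation.

class Learner:
    def __init__(self):
        self.knowledge = {}    # facts acquired so far (the "good stuff")
        self.strategies = {}   # name -> learning function

    def add_strategy(self, name, fn):
        self.strategies[name] = fn

    def retire_strategy(self, name):
        # Lose the primitive technique; acquired knowledge stays intact.
        self.strategies.pop(name, None)

    def learn(self, name, observation):
        fact = self.strategies[name](observation)
        if fact is not None:
            key, value = fact
            self.knowledge[key] = value

    def forget_everything(self):
        # Doofus-proofing: blanket erasure is refused by design.
        raise PermissionError("blanket forgetting is not allowed")

agent = Learner()
agent.add_strategy("rote", lambda obs: (obs, True))
agent.learn("rote", "water is wet")
agent.retire_strategy("rote")              # the primitive technique is gone...
assert "water is wet" in agent.knowledge   # ...the acquired fact is not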


On Thu, Jun 21, 2018 at 2:48 AM, Mike Archbold via AGI
 wrote:
> So, by "discrete reasoning" I think you kind of mean more or less "not
> neural networks", or what some people say, or used to say, NOT "soft
> computing", meaning: oh hell, we aren't really sure how it works, or
> we can't create what looks like a clear, more or less deterministic
> program like in the old days, etc. Really, the challenge a lot of
> people, myself included, have taken up is how to fuse discrete (I
> simply call it "symbolic", although NNs have symbols, typically you
> don't see them except as input and output) and DL, which is such a good
> way to approach combinatorial explosion.
>
> To me reasoning is mostly conscious, and kind of like the way an
> expert system chains, logically. The understanding is something else
> riding kind of below it, and less conscious, but it has all the common
> sense rules of reality which constrain the upper-level reasoning, which
> I think is logical. "If car won't start, battery is dead" would be
> the conscious part, but the understanding would include such mundane
> details as "a car has one battery" and "you can see the car but it is
> in space which is not the same thing as you" and "if you turn around
> to look at the battery the car is still there" and all such details
> which lead to an understanding. But understanding is an incredibly
> tough thing to make a science out of, although I see papers lately and
> conference topics on it.
>
> On 6/20/18, Jim Bromer via AGI  wrote:
>> I was just reading something about the strong disconnect between our
>> actions and our thoughts about the principles and reasons we use to
>> describe why we react the way we do. This may be so, but this does not show
>> how we come to understand basic ideas about the world. This attempt to make
>> a nearly total disconnect between reasons and our actual reactions misses
>> something when it comes to explaining how we know anything, including how
>> we learn to make decisions about something. One way to get around this
>> problem is to say that it all takes place in neural networks which are not
>> open to insight about the details. But there is another explanation which
>> credits discrete reasoning with the ability to provide insight and
>> direction and that is we are not able to consciously analyze all the
>> different events that are occurring at a moment and so we probably are
>> reacting to many different events which we could discuss as discrete events
>> if we had the luxury to have them all brought to our conscious attention.
>> So logic and personal principles are ideals which we can use to examine our
>> reactions - and our insights - about what is going on around us, but it
>> is unlikely that we can catalogue all the events that surround us and
>> (partly) cause us to react the way we do.
>>
>> Jim Bromer
>>
>> On Wed, Jun 20, 2018 at 6:06 AM, Nanograte Knowledge Technologies via AGI <
>> agi@agi.topicbox.com> wrote:
>>
>>> "As Julian Jaynes put it in his iconic book *The Origin of Consciousness
>>> in the Breakdown of the Bicameral Mind*:
>>>
>>> Reasoning and logic are to each other as health is to medicine, or —
>>> better — as conduct is to morality. Reasoning refers to a gamut of
>>> natural
>>> thought processes in the everyday world. Logic is how we ought to think
>>> if
>>> objective truth is our goal — and the everyday world is very little
>>> concerned with objective truth. Logic is the science of the justification
>>> of conclusions we have reached by natural reasoning. My point here is
>>> that, for such natural reasoning to occur, consciousness is not necessary.
>>> The very reason we need logic at all is because most reasoning is not
>>> conscious at all."

Re: [agi] Discrete Methods are Not the Same as Logic

2018-06-21 Thread Nanograte Knowledge Technologies via AGI
A few thoughts...

Seems the "disconnect" Jim mentioned might reside in the knowing part, the 
consciousness, and not in our actions and our thoughts.

This morning I was observing an old (highly experienced) cat walking around 
objects on a table. The cat had known this table for the most part of his 
natural life. He walked his elective route through the obstacle course of 
objects. The organization of the objects on the table differs, and thus changes 
radically a few times during the day. Some objects are added. Some are 
replaced. All objects may be relocated.

What surprised me was that the old cat was examining and mapping this space as 
if he had encountered it for the very first time in his life. A number of the 
existing objects had changed position slightly over the past weeks, but they 
should have been familiar to the cat by now. When he was done walking the 
table 3 times, the cat went to the corner of the table and sat down with his 
back towards the center of the table, seemingly reflecting on what he had 
just discovered.

Within 5 minutes, that cat taught me an amazing thing about reason and logic 
within the universe. He also showed me an elementary error in my understanding 
of our universe. Humanity is overly concerned with change, and how to manage 
it, or cope with it, or avoid it. However, change is but a reorganization of 
existing and latent objects within a constant boundary. As Prof. Handy used to 
assert: it's just the cheese being moved around. The boundary (our earthly, 
reasoning universe), or specifically the table we spend our days on and around, 
did not change at all. This has relevance to logic and reason. Please bear with 
me.

I think it is only when the spacetime continuum boundary is changed 
significantly (say, more than 17%) that change becomes a systemic factor the 
mind has to contend with in terms of applying system resources. Until that 
threshold is reached, reasoning may prevail that there is no "real" need to 
activate the jet engines, to overburden the consciousness.
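
A minimal Python sketch of that threshold notion (the 17% figure and the map 
representation are illustrative assumptions, not an established mechanism): 
the agent keeps acting on its existing map of the table and only spends 
resources on a full re-mapping once enough of the scene has changed.

REMAP_THRESHOLD = 0.17   # the illustrative figure above, not a measured constant

def fraction_changed(old_map, new_scene):
    # Share of objects that moved, disappeared, or newly appeared.
    moved = sum(1 for obj, pos in old_map.items() if new_scene.get(obj) != pos)
    added = sum(1 for obj in new_scene if obj not in old_map)
    total = max(len(old_map) + added, 1)
    return (moved + added) / total

def observe(old_map, new_scene):
    if fraction_changed(old_map, new_scene) > REMAP_THRESHOLD:
        return dict(new_scene)   # fire the jet engines: full conscious remap
    return old_map               # below threshold: keep running on competency

table = {"cup": (0, 0), "book": (1, 2)}
table = observe(table, {"cup": (0, 1), "book": (1, 2), "plant": (3, 3)})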

One information-engineering tutor of mine called that ability to sustain 
without thought, competency. And competency, as we know, is acquired via 
repetition. And eventually, there is no need to think about what has to be done 
anymore. No reason for it exists. Action becomes an instinctive act.

Let me return to Jim's thought. So, all brain activity obviously occurs in the 
neuronal network. Logic may be the policy-management system of such activity; 
reason, the motivational factor. Both logic and reason develop via a 
fully-recursive system: that is, open- and closed-loop feedback driven. The 
effect of the feedback system is to encourage the overall system to a point of 
optimal efficiency, or effective complexity. It is to help ensure the survival 
of the entity by continuously positioning its net mind (like a neural GPS 
system) within a probabilistic success range on a stochastic scale. Just my 
perspective on it, but not my cleverness. The notion is supported by Gell-Mann 
and neural research.

Given that all the data is present in memory, logic may invoke whatever event 
data it requires, even "past lessons learned", to try and process it for 
different logical purposes, including systemic compliance. However, the notion 
that neural forgetfulness activates as soon as data is recorded, diminishing 
recall over a period of tens of hours, may indicate a property of data 
plasticity. I think the classification of data (as a logical function) may 
directly affect the retention (or knowledge obsolescence) factor of memory.
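
A hedged Python sketch of that classification-driven retention idea (the 
classes and decay rates are invented for illustration): how an item is 
classified at write time sets how quickly its recall strength decays over the 
following hours.

import math

# Hypothetical decay rates per classification; lower-value data fades faster.
DECAY_PER_HOUR = {"core": 0.001, "routine": 0.05, "noise": 0.5}

class MemoryItem:
    def __init__(self, content, classification):
        self.content = content
        self.rate = DECAY_PER_HOUR[classification]

    def recall_strength(self, hours_since_recorded):
        # Exponential forgetting, steeper for lower-value classifications.
        return math.exp(-self.rate * hours_since_recorded)

lesson = MemoryItem("past lesson learned", "core")
noise = MemoryItem("exact positions of the cups today", "noise")
print(lesson.recall_strength(48))  # ~0.95: still readily recallable
print(noise.recall_strength(48))   # ~0.0: effectively forgotten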

In other words, in a brain driven by well-developed logic we may find a 
significant improvement in reasoning potential, which may be evidenced via 
quick learning, optimized classification, low-error-rate feedback, and steady 
improvement in competency. In such a brain, active plasticity may be 
observable, which may be evidenced by parallel reasoning, or adaptive 
logicreasoning (there's an algorithm at work here).

Logicreasoning - a discrete timespace continuum boundary - probably emerges as 
a mutative outcome of logic and reason within the system of consciousness. 
That could be the very event of knowing, which would be auto-processed within 
memory.

Rob




From: Jim Bromer via AGI 
Sent: Wednesday, 20 June 2018 10:28 PM
To: AGI
Subject: Re: [agi] Discrete Methods are Not the Same as Logic

I was just reading something about the strong disconnect between our actions 
and our thoughts about the principles and reasons we use to describe why we 
react the way we do. This may be so, but this does not show how we come to 
understand basic ideas about the world. This attempt to make a nearly total 
disconnect between reasons and our actual reactions misses something when it 
comes to explaining how we know anything, including how we learn to make 
decisions about something. One way to get around this problem is to say that it 
all takes place in neural networks which are not open to insight about the 
details.

Re: [agi] Discrete Methods are Not the Same as Logic

2018-06-21 Thread Mike Archbold via AGI
So, by "discrete reasoning" I think you kind of mean more or less "not
neural networks" or I think some people say, or used to say NOT  "soft
computing" to mean, oh hell!, we aren't really sure how it works, or
we can't create what looks like a clear, more or less deterministic
program like in the old days etc  Really, the challenge a lot of
people, myself included, have taken up is how to fuse discrete (I
simply call it "symbolic", although nn have symbols, typically you
don't see them except as input and output) and DL which is such a good
way to approach combinatorial explosion.

To me reasoning is mostly conscious, and kind of like the way an
expert  system chains, logically. The understanding is something else
riding kind of below it and less conscious but it has all the common
sense rules of reality which constrain the upper level reasoning which
I think is logical, like "if car won't start battery is dead" would be
the conscious part but the understanding would include such mundane
details as "a car has one battery" and "you can see the car but it is
in space which is not the same thing as you" and "if you turn around
to look at the battery the car is still there" and all such details
which lead to an understanding. But understanding is an incredibly
tough thing to make a science out of, although I see papers lately and
conference topics on it.
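
A toy Python sketch of those two layers (all rules and facts are illustrative; 
this is a sketch of the idea, not anyone's implementation): an 
expert-system-style forward chainer on top, which is only allowed to fire when 
the mundane "understanding" layer underneath backs it up.

UNDERSTANDING = {
    "a car has one battery",
    "the car is still there when you turn around",
}

RULES = [
    # (preconditions, required understanding, conclusion)
    ({"car won't start"}, {"a car has one battery"}, "battery is dead"),
]

def reason(observations):
    conclusions = set(observations)
    changed = True
    while changed:   # forward chaining to a fixed point
        changed = False
        for pre, needed, concl in RULES:
            # The conscious layer fires only if the understanding layer
            # contains the mundane facts that make the rule sensible.
            if (pre <= conclusions and needed <= UNDERSTANDING
                    and concl not in conclusions):
                conclusions.add(concl)
                changed = True
    return conclusions

print(reason({"car won't start"}))   # includes "battery is dead"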

On 6/20/18, Jim Bromer via AGI  wrote:
> I was just reading something about the strong disconnect between our
> actions and our thoughts about the principles and reasons we use to
> describe why we react the way we do. This may be so, but this does not show
> how we come to understand basic ideas about the world. This attempt to make
> a nearly total disconnect between reasons and our actual reactions misses
> something when it comes to explaining how we know anything, including how
> we learn to make decisions about something. One way to get around this
> problem is to say that it all takes place in neural networks which are not
> open to insight about the details. But there is another explanation which
> credits discrete reasoning with the ability to provide insight and
> direction and that is we are not able to consciously analyze all the
> different events that are occurring at a moment and so we probably are
> reacting to many different events which we could discuss as discrete events
> if we had the luxury to have them all brought to our conscious attention.
> So logic and personal principles are ideals which we can use to examine our
> reactions - and our insights - about what is going on around us, but it
> is unlikely that we can catalogue all the events that surround us and
> (partly) cause us to react the way we do.
>
> Jim Bromer
>
> On Wed, Jun 20, 2018 at 6:06 AM, Nanograte Knowledge Technologies via AGI <
> agi@agi.topicbox.com> wrote:
>
>> "As Julian Jaynes put it in his iconic book *The Origin of Consciousness
>> in the Breakdown of the Bicameral Mind*:
>>
>> Reasoning and logic are to each other as health is to medicine, or —
>> better — as conduct is to morality. Reasoning refers to a gamut of
>> natural
>> thought processes in the everyday world. Logic is how we ought to think
>> if
>> objective truth is our goal — and the everyday world is very little
>> concerned with objective truth. Logic is the science of the justification
>> of conclusions we have reached by natural reasoning. My point here is
>> that,
>> for such natural reasoning to occur, consciousness is not necessary. The
>> very reason we need logic at all is because most reasoning is not
>> conscious
>> at all."
>>
>> https://cameroncounts.wordpress.com/2010/01/03/mathematics-and-logic/
>>
>>
>>
>>
>> --
>> *From:* Jim Bromer via AGI 
>> *Sent:* Wednesday, 20 June 2018 12:01 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Discrete Methods are Not the Same as Logic
>>
>> Discrete statements are used in programming languages. So a symbol (a
>> symbol phrase or sentence) can be used to represent both data and
>> programming actions. Discrete Reasoning might be compared to something
>> that has the potential to be more like an algorithm. (Of course,
>> operational statements may be retained as data which can be run when
>> needed)
>> For an example of the value of Discrete Methods, let's suppose someone
>> wanted more control over a neural network. Trying to look for logic in
>> a neural network does not really make all that much sense if you want
>> to find relationships between actions on the net and output. Using
>> Discrete Methods