Also, while I'm warming up, you guys aren't the only people who read
"Stranger in a Strange Land" in grad school, mkay. I'm more of a Spider
Robinson guy, actually; speaking of somebody who could use our financial
support. Anyway, I liked "Friday" and "The Moon Is a Harsh Mistress" way
better, but I think Commander Tom Cool really nailed it with "Infectress",
speaking of a guy we should really invite to join this listserv.

I like what I'm hearing on Colin's approach but I'm going to have to take
my time and re-read through everything thus far. So, any Charles Sheffield
fans? I am Rustum Battacharyia.

Greg "Megachirops" Staskowski, *Lord of the puzzle network* aka Lorton Nikon

On Wed, May 6, 2015 at 9:02 AM, Greg Staskowski <[email protected]> wrote:

> Steve,
>
> Tell you what, *we are dangerously off topic*. Let me help you out. I'm
> not getting into a pissing match with you on a public listserv. You want to
> actually learn something about collecting physiological data and then
> drawing conclusions, hey e-mail me at [email protected] and we can
> spend the next two weeks going back and forth over what I see as the flaws
> in your approach. You really want to go into this with an ultramarathon
> runner, sure hey, no problem, but assuming a skeptic and a scientist will
> never believe you based on your somewhat limited data? *Hey, bad form
> guvnor.* I call shenanigans. Just sayin.
>
> -GJS
>
> On Wed, May 6, 2015 at 2:07 AM, Nanograte Knowledge Technologies <
> [email protected]> wrote:
>
>> Hi Colin
>>
>> You seem to be following a similar process with AI to the one used to
>> develop the first nuclear bomb - various approaches were tried, coupled
>> with great experimentation.
>>
>> Semantically, your inclusion of the term "emergent" in your last message
>> underscores this approach for me. I'd like to dwell on its relevance for a
>> few seconds. Emergence is regarded as the basis for complex-systems
>> engineering (Checkland). Further, Checkland asserted that the debate between
>> complex and simple systems would probably give rise to what is now regarded
>> as systems thinking. This is ancient stuff, which I'm repeating only to
>> stress its credibility. Thus, on the theoretical basis alone, your
>> experimental approach could be deemed sound.
>>
>> Narrow AI, broad AI, AGI? All peas in the same pod of complex-systems
>> thinking. The fundamentals still have no significant incentive to change.
>>
>> Personally, I would value such an experimental approach as a way of
>> rethinking the whole idea of developing AI. How else was the sound barrier
>> broken? In addition, if one followed the emerging trend in recent
>> adaptively-autonomous technologies, one would be hard pressed to write off
>> your approach.
>>
>> Just one theoretically moot point, if I may, albeit a semantic one? Any
>> institutionalised process is effectively a program of code. By extension,
>> any reduced process - as a procedural implementation on a computer - would
>> become a computerised program. Hence, I suppose, your search for a generic
>> algorithmic platform.
>>
>> Systemically speaking, as soon as you link the "stochastic" environment
>> to a computer chip in any way, it should emerge as a form of computer
>> program. Whilst one understands the need for research to be highly
>> focussed on its objectives, one must still have a design framework that
>> does not unduly restrict any design in a short-sighted
>> Heisenberg-Einstein debate.
>>
>> I would assume, then, that you do have a quantum-based design framework
>> you're working from. If not, this particular organic approach would
>> sooner or later come up against the eco-systemic realities of
>> highly-abstracted implementation, mainly due to the lack of navigational
>> competency in the R&D framework to consistently and reliably perform
>> adaptive integration. If it cannot be measured somehow, it cannot be
>> reliably tested - and I'm by no means suggesting this to be the case with
>> your experiment. Mine are just thoughts on the interesting topic at hand.
>> One day, when bootstrapping does occur, you'd want to debug, though.
>> If only purely mathematical, then purely computational? Maybe that is how
>> computer science emerged.
>>
>> Good luck with the experiment.
>>
>> Rob
>>
>> ------------------------------
>> To: [email protected]
>> From: [email protected]
>> Subject: RE: [agi] Re: Starting to Define Algorithms that are More
>> Powerful than Narrow AI
>> Date: Wed, 6 May 2015 10:01:33 +1000
>>
>> Hi,
>> Rather busy... Having trouble devoting time here.
>>
>> Jim.... You ask if I am making some kind of electric circuit. Basically
>> yes, except that its physical instantiation is important. Materials in
>> space. I know you won't get why that is important. That's OK for now. Just
>> accept that it's like that for the same reason the brain is like that.
>>
>> What it isn't is an 'equivalent circuit' in the traditional sense of
>> voltage/current replication. It is designed to produce functionally
>> equivalent action-potential-style signaling AND the brain-style field
>> system that actually expresses the voltages. The hardware will (in the
>> field version) express an EEG and MEG, like brains do.
>>
>> Having said that, I am currently designing a version that doesn't express
>> the fields but allows their addition later... knowing what performance
>> degradation results (it will be narrow AI, not AGI). Call it a causality
>> mirror with a faked image in it.
>>
>> It is deeply self-modifying. The circuits literally rewire themselves.
>> Circuit loops duplicate/diverge and switch out/off. It accounts for the
>> process of brain development as a kind of learning. I.e. I don't even have
>> to design the 'brain'. It will self-configure based on being in the world.
>> Because it's not using neurons, it won't automatically mimic brains in
>> structure. I have no idea what a 'brain' will look like. Physically it's a
>> crystalline rock. No actual material growth. Functionally it will stabilize
>> in ways I can't know except by experiment. It means that it must be
>> permanently juvenile: overexpressed neurons and overexpressed synapses
>> culled back. Lots of wastage. But so what?
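[Colin's design is hardware, not software, but the duplicate/diverge/cull dynamic he describes can be caricatured in a few lines. The sketch below is entirely my own illustration under assumed toy semantics: each "loop" is reduced to a single gain value, and fitness is closeness to an arbitrary target.]

```python
# Toy sketch of adaptive loops that duplicate with divergence and get
# culled back -- a software caricature, not the actual hardware mechanism.
import random

random.seed(0)

def run_generations(target=0.7, loops=None, generations=30):
    """Each 'loop' is just a gain value; fitness is closeness to target."""
    if loops is None:
        loops = [random.random() for _ in range(8)]
    for _ in range(generations):
        # Rank loops by error, cull the worse half ("overexpression culled back").
        loops.sort(key=lambda g: abs(g - target))
        survivors = loops[: max(2, len(loops) // 2)]
        # Surviving loops duplicate with slight divergence (mutation).
        children = [g + random.gauss(0, 0.05) for g in survivors]
        loops = survivors + children
    return loops

final = run_generations()
best = min(final, key=lambda g: abs(g - 0.7))
print(f"best gain after culling: {best:.3f}")
```

The point of the caricature: nobody designs the final configuration; it stabilizes under selection pressure, with lots of wastage along the way.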
>>
>> Not one line of software anywhere. Any 'algorithm' it has is in the
>> adaptation mechanisms. But they are in hardware. The state of the chip's
>> self-configuration is the only actual data involved. Yet, when you look at
>> it, there will be deep regularities in its behaviour. You could write them
>> down. However, they are all emergent.
>>
>> You know what the hardest part of this is? ... Giving it goals. A reason
>> to bother. A reason for it to sustain the quasi-stable resonances that
>> signify its functioning. I have to think of something akin to homeostasis
>> to keep it going! ROBEOSTASIS. You know what might happen? It may possibly
>> self-sustain without human intervention or some kind of hardwiring until
>> the fields are added. Unsure. Answering that is an experimental goal. Steve
>> seems to be deeply inside homeostatic concerns. So that's good.
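[A minimal homeostat makes the "reason to bother" idea concrete. This is a hypothetical illustration of the ROBEOSTASIS notion, not Colin's mechanism: an internal variable leaks toward collapse unless the system acts, and the drive to act is simply the error against a set point.]

```python
# Minimal homeostat sketch: passive decay (leak) versus an active drive
# proportional to the deviation from a set point.
def homeostat(setpoint=1.0, leak=0.1, gain=0.5, steps=100):
    level = setpoint
    history = []
    for _ in range(steps):
        drive = gain * (setpoint - level)   # the "reason to bother"
        level += drive - leak * level       # action versus passive decay
        history.append(level)
    return history

trace = homeostat()
print(f"settled level: {trace[-1]:.3f}")
```

Note the design lesson hiding in even this toy: with a pure proportional drive, the system settles below its set point (at gain/(gain+leak) x setpoint, here about 0.833), because the leak fights the drive. Keeping a quasi-stable state "going" is harder than it looks.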
>>
>> I'm not here to justify anything. Experimental proof will speak for me.
>> And if I can't get the version with and without the fields to be different
>> in predicted ways then I will grovel at the feet of the great god
>> computationalism. Not before. :-)
>>
>>
>> I think the approach is a reversion to the 'natural cybernetics' that had
>> a brief life in the 1950s and was then lost in a tsunami called computer
>> science. I bring it back for an upgrade. Notice that AGI failure started
>> the moment cybernetics stopped. The actual science of artificial
>> intelligence stopped then, too... IMO.
>>
>> Enough poking the bear. Gotta get back to it.
>>
>> I really appreciate the interest in this 'adaptive control' approach.
>>
>> Cheers
>>
>> Colin
>>
>>
>> ------------------------------
>> From: Jim Bromer <[email protected]>
>> Sent: 4/05/2015 12:42 AM
>> To: AGI <[email protected]>
>> Subject: Re: [agi] Re: Starting to Define Algorithms that are More
>> Powerful than Narrow AI
>>
>> I thought the ideas were interesting and Colin's description was more
>> readable than usual, but the arguments supporting the method weren't
>> very powerful. I am curious about how Colin is implementing the
>> method. Could you give me a little more about that? Are you designing
>> some kind of electrical circuit?
>>
>> What I was trying to say in this thread is that you have to supply a
>> little more insight into why you think that the methods you are
>> designing and will be implementing would rise above being 'narrow AI'.
>> For instance, Colin's honest report on how far he has actually gotten
>> sounds like it is on par with simple narrow AI. As I reread
>> your messages I keep finding a little more in them. But back to my
>> point. Since I can rough out the algorithms that I would use as if
>> they were abstractions, or as if they could exist within an abstract
>> world, it would seem that I should be able to conduct simple tests to
>> show that they could diversify in some way that is: 1. at least better
>> than narrow AI, and 2. useful in some way. So perhaps I should add
>> that. I would say, for example, that artificial neural networks would
>> pass this kind of test. However, the criticism then is, ironically
>> given our use of the 'narrow AI' term, that they lack efficient means
>> to focus and cannot be efficiently used as componential objects.
>>
>> So, can you guys define some abstract or simple tests that could show
>> that your ideas would become able to adapt to the more complicated
>> demands of actual tests? The value of the simple test is that once you
>> can get your algorithms to pass the first test you might come up with
>> ways to design a slightly more aggressive test. So if I could test my
>> ideas to, say, try to learn to recognize some simple classifications,
>> then I might try to see if I can get it to learn to utilize systems of
>> classifications effectively and efficiently (without redesigning the
>> program only for that specific kind of test). So then I would have to
>> design some other kind of test to make sure that it is somewhat
>> general.
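[Jim's graded-test idea can be made concrete with a toy harness. The construction below is mine, not his: one fixed learner (a simple nearest-centroid classifier, chosen here purely for brevity) is first checked on a simple two-class task, then on a richer four-class task without redesigning the learner. Passing both is only weak evidence of generality, which is exactly Jim's point about escalating the tests.]

```python
# Toy graded-generalization harness: the same learner must pass an easy
# task and then a harder one, with no task-specific redesign in between.
def nearest_centroid_fit(samples):
    """samples: {label: [feature vectors]} -> {label: centroid}"""
    return {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in samples.items()
    }

def predict(centroids, x):
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Test 1: a simple two-class task.
easy = {"low": [[0.0], [0.2]], "high": [[0.9], [1.1]]}
c1 = nearest_centroid_fit(easy)
assert predict(c1, [0.1]) == "low" and predict(c1, [1.0]) == "high"

# Test 2: a richer system of classes -- same learner, no redesign.
harder = {"a": [[0, 0], [0, 1]], "b": [[5, 5], [5, 6]],
          "c": [[0, 9], [1, 9]], "d": [[9, 0], [9, 1]]}
c2 = nearest_centroid_fit(harder)
print(predict(c2, [0.3, 0.4]))
```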
>> Jim Bromer
>>
>> On Sun, May 3, 2015 at 3:25 AM, Colin Hales <[email protected]> wrote:
>> >
>> >
>> >> On Sat, May 2, 2015 at 2:50 AM, Steve Richfield <
>> [email protected]> wrote:
>> >>>
>> >>> Jim,
>> >>>
>> >>> Again, I think I see the POV to solve this. All animals, from single
>> cells to us, are fundamentally adaptive process control systems. We use our
>> intelligence to live better and more reliably, procreate, etc., much as
>> single-celled animals, only with MUCH richer functionality. Everything fits
>> this hierarchy of function leading to intelligence.
>> >>>
>> >>> Then, people like those on this forum start by ignoring this and
>> trying to create intelligence from whole cloth. This may be possible, but
>> there is NO existence proof for this, no data to guide the effort, etc. In
>> short, there is NO reason to expect a whole-cloth approach to work anytime
>> during the next century (or two).
>> >>>
>> >>> However, some of the mathematics of adaptive process control is
>> known, and I suspect the rest wouldn't be all that tough - if only SOMEONE
>> were working on it.
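[Some of that known adaptive-control math is genuinely simple. Below is the classic "MIT rule" gradient update for a single adaptive gain, a standard textbook scheme offered as my own illustration of the kind of mathematics Steve alludes to, not anything specific to this thread or to Colin's hardware.]

```python
# MIT-rule sketch: adapt a feedforward gain theta so the plant output
# tracks a reference model, using an approximate gradient step.
def adapt_gain(plant_gain=2.0, ref_gain=1.0, lr=0.1, steps=200):
    """Drive y = plant_gain * theta * r toward ym = ref_gain * r."""
    theta = 0.0
    for _ in range(steps):
        r = 1.0                      # reference input (constant here)
        y = plant_gain * theta * r   # plant output under current gain
        ym = ref_gain * r            # reference-model output
        e = y - ym                   # tracking error
        theta -= lr * e * ym         # MIT rule (ym as sensitivity proxy)
    return theta

theta = adapt_gain()
print(f"adapted gain: {theta:.4f}")  # converges to ref_gain / plant_gain
```

The exact gradient would use plant_gain * r as the sensitivity, but plant_gain is unknown in practice, which is why the reference-model output is the usual stand-in.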
>> >
>> >
>> > Erm.... guys. This would be me.
>> >
>> > I am working on it. For well over a decade now. Cognition and
>> intelligence are implemented as an adaptive control system replicating,
>> inorganically, the natural original called the human (mammal) nervous
>> system. I simply replicate it inorganically. Tough job but I am getting
>> there. There's no programming. No software. Just radically adaptively
>> nested looping processes. In control strategy terms it is a non-stationary
>> system (architecture itself is adaptive). Control loops come into existence
>> and bifurcate and vanish adaptively. The architecture commences at the
>> level of single ion channels and nests at multiple levels that then appear
>> in tissue as neurons doing what they do, but need not appear like this in
>> the inorganic version. You don't actually need cells at all. These then
>> nest at increasing spatiotemporal scales forming coalitions, layers,
>> columns and finally whole tissue. All inorganically. All the same at all
>> scales from an adaptive control perspective. Power-law scalable. Physically
>> and logically.
>> >
>> > In my case, for the conscious version the hardware includes the
>> field-superposing, active additional feedback in the wave mechanics of the
>> EM field system produced by brain cells at specific points. The fields form
>> an additional/secondary loop modulation that operates orthogonally,
>> outside/through the space occupied by the chip substrate.
>> >
>> > What I am starting with is the 'zombie' or symbolically ungrounded
>> version. It doesn't produce the active field system (missing a whole
>> control system feedback mechanism) and uses supervised learning
>> (externalised by a conscious human trainer) to compensate for the loss of
>> the natural role consciousness has as an endogenous supervisor. It will, in
>> the zombie form, underperform in precisely the way all computer AGI
>> underperforms. This is what is missing when you use computers to do it all.
>> You end up with a recipe (software) for pulling Pinocchio's strings.
>> Whereas my system bypasses the puppetry altogether. It makes the little
>> boy, not the puppet.
>> >
>> > However you view it, there's nothing else there in a brain except
>> nested loops that have power-law responses in two orthogonal axes: sensory
>> and cognitive.  Adding the field system to the sensory axis (e.g. visual
>> experience) or part of the cognitive axis (e.g. emotional experience)
>> provides the active role for consciousness, implemented through the causal
>> impact of the Lorentz force within the hardware. I suppose it'd be an
>> 'adaptive control loop' philosophy for cognition and 'EM field theory of
>> consciousness' combined. No computing needed whatever. Just like the brain.
>> Most of the last ten years has been spent figuring out the EM field bits!
>> That I am now omitting, knowing what I lose when I do that (i.e.
>> consciousness).
>> >
>> > Teeny weeny Zombie version 0.0 this year I hope. No EM field
>> generation. I call it the 'circular causality controller'. I aim to add the
>> EM fields later. That part requires $millions. It's chip-foundry stuff.
>> >
>> > So chalk me in under this 'adaptive control loop' category for AGI
>> implementation please. I know this forum is a 'using computers to do AGI'
>> forum so I'll just continue to zip it. I haven't mentioned it much over the
>> years because it seems that most of you aren't interested in my approach.
>> For reference and for the record.... I am the 'AGI as adaptive control' guy.
>> >
>> > cheers
>> > colin
>> >
>> >>>
>> >>>
>> >>> I suspect that when the answers are known, it will be a bit like
>> spread spectrum communications, where there is a payoff for complexity, but
>> where ultimately there is a substitute for designed-in complexity, e.g.
>> like the pseudo-random operation of spread spectrum systems. Genetics seems
>> to prefer designed-in complexity (like our brains) but there is NO need for
>> computers to have such limitations.
>> >>>
>> >>> Whatever path you take, you must "see a path" to have ANY chance of
>> succeeding. You must have a POV that helps you to "cut the crap" in pursuit
>> of your goal. Others here are working on whole-cloth approaches, yet
>> bristle when challenged for lacking a guiding POV. I see some hope in
>> adaptive control math. Perhaps you see something else, but it MUST have an
>> associated guiding POV for you to have any hope of succeeding - more than a
>> simple list of what it does NOT have.
>> >>>
>> >>> Steve
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed:
>> https://www.listbox.com/member/archive/rss/303/11721311-f886df0a
>> Modify Your Subscription: https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>
>



