+1 Dorian.      Mark likes this.

On Thu, May 14, 2015 at 10:17 AM, Dorian Aur <[email protected]> wrote:

> We need to create an infrastructure, e.g. the Institute of General
> Intelligence, and elect/appoint a board of directors to manage the entire
> organization. Only a small fraction of the funding currently allocated to
> the BRAIN Initiative or to the EU's Human Brain Project would be enough to
> build the first hybrid system. This project can be the bucket-list item for
> an entire generation of computer scientists and neuroscientists, who should
> collaborate: our brain uses less than 30 watts to perform all kinds of
> "intelligent" computations. Completing this step first would increase our
> chance of delivering a more "synthetic" approach, as Colin proposed.
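>
> To put the 30-watt figure in perspective, here is a minimal back-of-the-envelope
> sketch (Python). The 10 MW figure for a large digital brain-simulation platform
> is an assumed order of magnitude, not a measured value:
>
>     # Back-of-the-envelope power comparison (illustrative, assumed figures).
>     BRAIN_POWER_W = 30.0        # upper bound for the human brain, as above
>     DIGITAL_SIM_POWER_W = 10e6  # assumed ~10 MW for a large digital simulation platform
>
>     ratio = DIGITAL_SIM_POWER_W / BRAIN_POWER_W
>     print(f"A megawatt-scale simulation draws roughly {ratio:,.0f}x more power "
>           f"than the ~{BRAIN_POWER_W:.0f} W brain it tries to emulate.")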
>
>
> Here is the rationale:
>
> a. Why use a digital computer to simulate, map, or emulate the whole brain?
> • It cannot express all forms of computation that are built within
> biological structure (see neuroelectrodynamics);
> • It needs many megawatts to power the system (a huge issue);
> • It requires billions of dollars;
> • It cannot generate emotion, consciousness, etc.;
> • It provides no reliable model for brain diseases.
>
> b. Why not shape a biological structure, connect it to a digital computer,
> use machine learning (e.g., DL), and perform all kinds of computations?
> See "Can we build a conscious machine?", http://arxiv.org/abs/1411.5224.
> • Emotion, consciousness, etc. are expressed naturally;
> • It can be used as a model of therapy for about 600 brain diseases;
> • It can be connected to a laptop or iPhone, using digital and biological
> computation together, which can make any digital computer highly
> interactive (see the closed-loop sketch after this list);
> • Far less funding is required, and AGI can quickly become an academic
> discipline that attracts funding not only from private companies.
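>
> As a rough illustration of the laptop/iPhone coupling mentioned above, here is
> a minimal closed-loop sketch (Python). Everything in it is hypothetical:
> CultureInterface stands in for whatever recording/stimulation hardware a hybrid
> system would use, and the "decoder" is a simple threshold rule rather than a
> trained deep-learning model.
>
>     import random
>
>     class CultureInterface:
>         """Hypothetical stand-in for a recording/stimulation device on a neuron culture."""
>
>         def read_activity(self, n_channels=8):
>             # Placeholder: random firing rates instead of real electrode readings.
>             return [random.random() for _ in range(n_channels)]
>
>         def stimulate(self, pattern):
>             # Placeholder: a real device would drive stimulation electrodes here.
>             print("stimulating channels:", pattern)
>
>     def decode(rates, threshold=0.5):
>         # Trivial "decoder": flag channels whose rate exceeds a threshold.
>         # A real hybrid system might use a learned model (e.g., DL) instead.
>         return [i for i, r in enumerate(rates) if r > threshold]
>
>     def closed_loop_step(device):
>         rates = device.read_activity()   # biological -> digital
>         active = decode(rates)           # digital computation on the laptop
>         device.stimulate(active)         # digital -> biological feedback
>         return active
>
>     if __name__ == "__main__":
>         dev = CultureInterface()
>         for _ in range(3):
>             closed_loop_step(dev)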
>
> My previous answers on FB:
>
> 5. Does an AGI need to be conscious?
> Yes, it has to be conscious; otherwise AGI can be dangerous (see 9).
>
> 6. Can AGI be creative?
> If we build hybrid systems, AGI can become creative.
>
> 7. Will AGI have emotions?
> The biological structure embedded in the hybrid system will allow any AGI
> system to experience emotions.
>
> 8. How far off is AGI?
> With current technology the first prototype can be implemented in less
> than 5 years, far less than the BIG detour (2001-2015).
>
> 9. Will AGI be dangerous?
> The system needs to be conscious of its actions; otherwise it can be
> dangerous. An example: the Cuban missile crisis; less intelligent actions
> can lead to an apocalypse for everyone (this awareness should be embedded
> in consciousness).
>
> It's time for action.
>
>
> Best,
>
> Dorian
>
>
>
> *Note:* EM interaction establishes communication in the case of a more
> powerful form of computation; five years ago we called it
> neuroelectrodynamics. A classical model or a quantum model can be used to
> describe a natural phenomenon; they are our models. Almost anything can be
> approximated or simulated on a digital computer, but only if one has the
> algorithm. In this case the simulation comes at a huge cost: it is highly
> inefficient, and many characteristics developed within the biological
> structure are completely lost. The current trend in AGI can continue for
> another 5-10 years, but a general loss of credibility will follow - a less
> "intelligent" path. Saving the AI/AGI idea should be a priority; we do have
> the technology to keep neurons alive, grow them, and "connect" them, and any
> already developed algorithm (e.g., an AI algorithm) can be used, since the
> digital computer will be an important part of the hybrid system.
>
>
>
>
>
>
> On Thu, May 14, 2015 at 7:27 AM, Steve Richfield <
> [email protected]> wrote:
>
>> Ben,
>>
>> I don't know what Alan's problem is, but it appears he doesn't understand
>> forums in general, and this forum in particular.
>>
>> As his first objection to a thread that has been running for several
>> days, Alan rises up to request that the subject be killed!!! This is absurd.
>>
>> The whole purpose of threads is for people to follow the ones they are
>> interested in, while ignoring the others. Apparently Alan is unable to
>> participate in this very simple process.
>>
>> The bases for Alan's request are also absurd, as explained below.
>>
>> On Thu, May 14, 2015 at 6:07 AM, Alan Grimes <[email protected]>
>> wrote:
>>
>>> I'm about three days away from formally requesting a killthread on this
>>> EM fields crap.
>>>
>>
>> Ben, you might want to think about moderating Alan.
>>
>>>
>>> 1. Electromagnetism has been Well Understood (tm) for about 140 years
>>> now.
>>>
>>
>> So what. This doesn't seem to be an issue.
>>
>>
>>> 2. By [nearly] all accounts, EM fields in the brain are secondary to its
>>> operation.
>>>
>>
>> What accounts?
>>
>>
>>> 3. Neural Science is a well established field that runs parallel to AGI
>>> and, yes, they do VERY careful science.
>>>
>>
>> You obviously have never worked in a neuroscience lab. However, others in
>> this discussion, including myself, HAVE worked in these labs and know the
>> severe limitations of what people think they know about how neurons work.
>>
>>
>>> 4. AGI is not, formally, a science.
>>
>>
>> I can't speak for the others here, but I suspect that most people here
>> agree, but believe that it should become a science once we know enough to
>> talk about the prospective internals of an AGI system.
>>
>>
>>> It is a branch of engineering.
>>>
>>
>> B.S. If this were true, computers would have been thinking for decades by
>> now. There is presently NO recognizable science supporting AGI. AGI has yet
>> to rise to being a science, let alone to being engineering based on that
>> science.
>>
>>> 5. In the interests of getting things done, simplifications have to be
>>> made wherever possible.
>>>
>>
>> So what? This doesn't seem to be an issue. The issue here is determining
>> what is essential, and what can be "simplified". There are many opinions,
>> including yours, none of which have significant evidence to support them.
>>
>>
>>> 6. We are not trying to simulate a brain, we are trying to identify what
>>> characteristics are actually required to create a thinking machine.
>>>
>>
>> Agreed. So what?
>>
>>
>>> 7. The standard of evidence, at this point, to indicate that some kind of
>>> non-Turing computation is required to produce thinking is
>>> extraordinarily high.
>>>
>>
>> "Turing computation" isn't really a well defined term, e.g. does it
>> include analog computation?
>>
>> I have posted in the past regarding the potential need for bidirectional
>> computation in AGI, which can be simulated on Turing systems with a loss in
>> speed which is proportional to the logarithm of system size. If
>> bidirectional computation proves to be needed, than Turing systems may
>> indeed NOT be up to AGI. Fortunately there are non-Turing approaches to
>> bidirectional computing.
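>>
>> A minimal sketch of what I mean by emulating bidirectional computation on a
>> conventional sequential machine: a one-dimensional chain of nodes relaxed by
>> alternating forward and backward sweeps (Python). The logarithmic-slowdown
>> figure above is a scaling claim, not something this toy example demonstrates:
>>
>>     def relax_bidirectional(values, passes=200, rate=0.5):
>>         """Sequentially emulate bidirectional influence on a 1-D chain.
>>
>>         The endpoints are held fixed; each interior node is pulled toward
>>         the average of its two neighbors, sweeping forward and then
>>         backward so that information flows in both directions."""
>>         v = list(values)
>>         for _ in range(passes):
>>             sweep = list(range(1, len(v) - 1)) + list(range(len(v) - 2, 0, -1))
>>             for i in sweep:
>>                 v[i] += rate * ((v[i - 1] + v[i + 1]) / 2.0 - v[i])
>>         return v
>>
>>     if __name__ == "__main__":
>>         chain = [0.0] + [5.0] * 8 + [10.0]  # fixed ends, arbitrary interior
>>         print(relax_bidirectional(chain))   # interior settles toward a smooth gradient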
>>
>> Note that Colin's proposal also includes bidirectional computing, though
>> we haven't yet discussed that.
>>
>> There is a pretty strong case for bidirectional computing, so don't
>> clutch your Turing machine too closely.
>>
>>> 8. Once AGI is created, it is highly probable that it could be further
>>> enhanced by means of mystical physics, i.e., quantum fields and stuff, but
>>> right now it's only a distraction.
>>>
>>
>> ONLY if "mystical physics" proves to be unnecessary. I have seen NO hard
>> evidence either way.
>>
>>
>>> 9. The brain may indeed utilize mystical physics to some extent, so we
>>> should be extremely cautious about brain emulation, even if you want to
>>> stick your head in the sand about the identity issue.
>>>
>>
>> We are a loooooong way from brain emulation, but it would sure be nice to
>> be able to emulate a single neuron that can do ALL of the things our own
>> neurons do - fast learning, abandoning useless functions, reducing power
>> demands for slow/rare phenomena, etc. - all things that an AGI will also
>> have to do.
>>
>>> 10. I have some pretty strong hypotheses about how the brain works, but
>>> I'm frustrated by my inability to test those hypotheses for lack of a
>>> simulation environment or a robot.
>>
>>
>> Join the club. Oh, I see you already have.
>>
>>> I don't have either. I have been
>>> stuck at this state of not having a testing platform for ten years
>>
>>
>> Only ten years? I can see you are a newbie at this. I have had this same
>> frustration for >40 years.
>>
>>
>>> and
>>> it's driving me nuts!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>
>>
>> THIS explains a LOT ... B-:D>
>>
>>
>>> (this is what
>>> my Minecraft post was about...) I saved up about $12,000 out of a
>>> required $16,000 to get a Nao, but then I've been unemployed for three
>>> years and have no job prospects in this awful economy. =(((((((
>>> 11. Meanwhile, I have not been chewing up list bandwidth talking about
>>> how great my untested theory is or spending much time deriding other
>>> list participants.
>>>
>>
>> There ARE other paths, e.g. invent something relating to AGI, get a
>> patent, find someone to promote your invention, find a VC, start a company,
>> etc.
>>
>>>
>>> BTW: Hplus-talk mailing list seems to be down and the admin forwarder is
>>> down too.
>>>
>>> IQ is a measure of how stupid you feel.
>>>
>>
>> Aha, you very obviously do NOT feel stupid at all here, so, by your own
>> measure, your IQ must be VERY low.
>>
>> OK, sorry (but not very sorry) that I beat you up here, but understand that
>> it is often quite difficult to examine possibilities that violate your world
>> model, which is obviously your difficulty here. Just because something is
>> obviously "crap" doesn't mean that it is crap. If you can't deal with this,
>> then stand aside for others here who CAN deal with it.
>>
>> Steve
>>
>>
>
>



-- 
Regards,
Mark Seveland


