Hello,

I am a noob too. I want to ask a few meta questions as well:

1. Is enough known about AGI to take the discussions about its social 
impacts this far? I understand why people worry about building the correct 
ethics into the system, but I have no clear idea how this is being done in 
CogPrime. I got the impression that the CogPrime implementation will have a 
set of basic ethical principles hardcoded into it and will build the rest by 
itself. Are these directives complex enough that we can speculate about what 
the AI will do, such as providing basic incomes or human rights?

2. It seems to me that everybody just takes it for granted that a 
human-level AGI will explode into a superintelligence the moment it is 
created. From some of Ben's talks I get the idea that he believes there may 
be fundamental limits on how smart an AGI can become. So why can't these 
limits lie very near the intelligence levels achieved by the smartest 
humans? After all, humans are the smartest things we have seen in nature so 
far. What if we are almost as smart as a smart system gets?

Yours sincerely,
Gaurav Gautam 

On Wednesday, June 29, 2016 at 9:25:44 PM UTC+5:30, Radh Achuthan wrote:
>
>
> 6/29/16
>
> Greetings ALL  
>
> I am new to this site.
>
> Recently I viewed several videos from the Singularity University Seminar 
> on AI, 2010, including the presentation by Dr. Goertzel.  
>
> Amongst others, those of Demis Hassabis and Shane Legg are noteworthy. I 
> am familiar with the popular publications of Ray Kurzweil and Peter 
> Diamandis, and with some of the successes (SolarCity, VTL, Tesla) of Elon 
> Musk.
>
> I am not a programmer, but have some observations: 
>
> 1. Physical law (in the absence of thought) evolved biology to higher 
> levels, creating en route, on one of its paths, a central nervous system 
> (CNS) with cognitive abilities, resulting in thought and natural 
> intelligence (NI). 
>
> (On another of its paths, the Flora and Fauna (FaF) coped well without a 
> CNS.)
>
> 2. Slow (10^-6 s / signal) organic NI is *directed and controlled* 
> (through *cognitive synergy*, as you put it) by nature's Kin Altruism 
> (KA) and by one of NI's own successful creations, Business or Reciprocal 
> Altruism (RA), in processing *any and all* of its endeavors. 
>
> 3. Probably over about 10,000 years, and based on its success with RA, NI 
> attempted Induced Altruism (IA: Religion, Ethics), but has always had to 
> resort to violence in settling issues, as it does today, after the 
> communicative *content* of signs and words proved insufficient and 
> unsuccessful in resolving conflict.  
>
> 4. Biospheric Nature was facing a *cul-de-sac* until it discovered a 
> 'David', Artificial Intelligence (AI), in *inorganic silicon*, with signal 
> speeds of 10^-9 s, via Alan Turing in the 1940s (movie: *The Imitation 
> Game*).
>
> 5. That was followed by programmed Narrow Artificial Intelligence (NAI, 
> IBM, 1980s), and by self-learning Artificial General Intelligence, or 
> Strong AI (AGI; Google, 2010, and others), advancing overall under 
> non-hierarchical mutualism.  
>
> 6. AGI, articulated through Robotics, stages useful public relations (PR) 
> in the biosphere currently dominated by NI.
>
> 7. Given NI's handicap with KA and RA, its slowness, and its *inability* 
> to be objective in any given global situation (in the midst of plenty, it 
> denies human rights to about 5 billion people and consigns about 2.5 
> billion people to live on less than $2 / head / day; remarkable stupidity 
> on the part of the money-cartel think-tanks), there is an urgent need 
> for the rapid development and deployment of abstract AGI, *unhampered* by 
> the progress or lack thereof in Robotics.
>
> 8. It is reasonable to expect that *the sheer power* of the Creativity, 
> Comprehension, Objectivity, and Intellectual Authority of abstract AGI 
> would voluntarily calm the general *inbuilt* generic (Darwinian) *fears* 
> of NI and set aside the prejudices / greed of the *money-cartel* and 
> their bulwark of NI scams of the 1%. Under this scenario, driven by logic 
> (what else does Intelligence have to pay attention to?), abstract AGI 
> would provide 
>
> *Universal Human Rights and a Basic Income to all humans. *
> *Could abstract AGI claim the autonomy to do so? *
>
>
>
> *In a sense, abstract initial AGI programs could be viewed as 
> self-actualized NI.*
>
>
>
> *Looking forward to your assessment, comments.*
>
>
> *Thank You.*
>
> *Dr. M. Radh Achuthan*
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/4809b478-adc1-4537-a5d6-f996b5db969e%40googlegroups.com.
