One point of contention I have with this debate is that it is framed as
black and white: either robots will obtain sentience or they won't, with
the implication that if any do, then all will. But why would that be the
case? Why wouldn't there instead be a hierarchy of intelligence
capabilities: basic automation on one end --> batch machine learning -->
continuous on-line learning --> higher-level reasoning in AI --> near
human-level AI agents used in a "tool-like" fashion (without
consciousness) --> full human-level AGI agents (with consciousness) -->
beyond human-level AGI (i.e. the singularity)?

It seems all but inevitable that we will reach a near human-level AI
capability that falls short of human-level intelligence and actual
consciousness: a more general "near-human AI tool" integrated into
society, similar to the role Google has played but with greater reasoning
capability. The last two steps are more contentious, but I would argue
they will happen eventually. Here's the question, though: if we do reach
the final two steps, why would that necessitate the end of the previous
steps, or lead to all forms of AI becoming general conscious agents? If
we look to nature as an example, human-level intelligence has not in any
way meant the end of lesser forms of intelligence. Even our bodies are
made up of bacteria, cells, and systems that have a kind of intelligence,
if a largely automated one. The development of human intelligence has not
meant the end of lesser forms of intelligence but rather an expansion
beyond them (if a very imperfect one).
Even among animals, we use certain species for food and livestock, but
they are usually of lesser intelligence, while animals of higher
intelligence (like whales, elephants, or apes) are given greater respect.
Not uniformly, of course, and these animals are still hunted, but greater
respect is becoming the norm.

Why wouldn't this trend continue with AI and robotics, where forms of
intelligence that fall short of full human-level intelligence and
autonomy serve as the tools used in our society, while full human-level
AGI is allowed its own autonomy?

Regarding human-AGI competition: let me postulate one likely possibility,
namely the use of AI and robotics in space. With the development of space
commercialization, one example being the pursuit of asteroid mining, it
seems likely that robotics will play a key role. If and when we do
develop AGI, it seems much more likely to me that it will happen in
connection with space development, or at least be easily exported to
space. Space is extremely expensive for humans but very cheap for
robotics. The energy and important materials (metals, rare minerals,
etc.) available in space are enormous, in quantities that dwarf those on
Earth. Space-based solar power alone has orders of magnitude more
potential than any energy resource on Earth. It is extremely expensive
and unrealistic (at this point) for humans to develop, but once there is
a near human-level AGI, that stops being the case. Even if a future AGI
society were focused on energy development and resource exploitation
(similar to how human society is), there would still be no need for
direct competition between humans and AGI, because the opportunities in
space are so much greater. Avoidance would be far cheaper and would
entail no real loss of opportunity, so there would never need to be
competition, or even cooperation, for many decades if not centuries to
come.
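
(To make the "orders of magnitude" claim concrete, here is a rough
back-of-envelope sketch in Python. The solar constant is the standard
~1361 W/m^2 figure; the ~18 TW world consumption number is an
approximation.)

import math

SOLAR_CONSTANT = 1361.0   # W/m^2, solar flux at 1 AU (standard value)
EARTH_RADIUS = 6.371e6    # m
WORLD_POWER_USE = 1.8e13  # W, ~18 TW total human consumption (approximate)

# Sunlight intercepted by Earth's cross-section alone:
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2  # ~1.7e17 W

print("Sunlight hitting Earth: %.2e W" % intercepted)
print("Human consumption:      %.2e W" % WORLD_POWER_USE)
print("Ratio:                  ~%.0fx" % (intercepted / WORLD_POWER_USE))
# Collectors in orbit are not limited to Earth's cross-section, so the
# accessible total in space is larger still.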

Just some thoughts...

-Chris



________________________________
 From: Piaget Modeler <[email protected]>
To: AGI <[email protected]> 
Sent: Sunday, January 27, 2013 7:15 PM
Subject: RE: [agi] Robots and Slavery
 

 

So we acknowledge these risks from the outset, and say that whenever
robots reach sentience, they should have rights as well, and should be
allowed to choose their own destinies.

~PM

________________________________
Date: Sun, 27 Jan 2013 18:16:19 -0500
Subject: Re: [agi] Robots and Slavery
From: [email protected]
To: [email protected]


So you realize that if robots develop the goals of liberty, justice, and
fairness, they will become competitors to humans. These are revolutionary
ideas that have been used to usurp the authority of established powers.
A self-proclaimed freedom fighter is a terrorist to the established
order. What lengths would robots go to in order to secure their freedom?
Perhaps eliminating the entire human race is a logical way to secure
their freedom from human tyranny.


All these goals are very subjective and can be interpreted to mean
different things to different individuals. For instance, my desire for
justice might really be revenge based on a perceived wrong you have done
to me, whether or not it was intentional. How do you know robots won't
develop their own ethical standards that benefit themselves at the
expense of humans?



On Sun, Jan 27, 2013 at 5:53 PM, Piaget Modeler <[email protected]> 
wrote:

>I don't agree that intelligence is completely separable from desire (goals).
>I think that goals + solutions + mental processes = intelligence.
>I don't think you can have intelligence without goals, or the solutions that
>have arisen based on prior goals. Solutions and goals are intertwined.
>
>~PM
>
>________________________________
>
>Intelligence is completely separable from desire.
>
>~Aaron H.

