On 17 Sep 2016, at 23:32, John Clark wrote:

On Fri, Sep 16, 2016  Telmo Menezes <te...@telmomenezes.com> wrote:

> it is not so clear to me that it has no intelligence. It leads to better and better designs,

If Evolution has intelligence it sure doesn't have much; despite the resources of the entire planet at its disposal, Evolution was incredibly slow and sloppy in getting its work done. But it was the only way complex things could get made, until it finally invented brains after 3 billion years of screwing around.

> my point is that what makes this a "bad idea" is our own evolutionary context. Why is it a bad idea exactly? It becomes tautological. It is a bad idea because it hurts your ability to pursue goals dictated by evolution.

The only goal dictated by Evolution is a vague command to figure out a way to get your genes into the next generation, and, just like the vague laws that humans make, that leaves plenty of room for interpretation, unintended consequences, and loopholes.

> Outside of evolution, why would an entity not choose the easy way out?

Any intelligence, regardless of whether it was produced by Evolution, by human engineers, or even by God, would have a constantly shifting goal structure depending on environmental circumstances. Even your iPhone has, in a sense, a goal to continue working, so personal survival would certainly be a goal for any brain, even if it isn't guaranteed the permanent #1 position.


What about suicide?




Just like with people, I expect that with an AI survival would usually, although not always, take the #1 slot. But for some AIs, just as with some people, the mindless pursuit of pleasure could be #1. I don't see why that would be more common in an AI than in a person, unless the electronic version is far more powerful and addictive than the chemical we call crack. But could the electronic version really be that much more powerful and addictive? I don't know, maybe it's possible. If it is, then that explains the Fermi Paradox.

Although we're using different language, I think we may agree more than we disagree.


A machine which has enough cognitive ability to bet on its own local digital mechanism (computationalism) has enough cognitive ability to understand that, in that case, she survives anyway, for the worst and/or the best, with some partial control over this.

With computationalism, we face the intrinsic unknown, but it makes sense to say that all experiences are realized (in arithmetic). The 1p tip is then to try to avoid the dark/nightmares and to pursue the search for the light.

The goal is not so much survival as living with some degree of satisfaction, like drinking when we are thirsty (from the literal sense to the metaphorical senses).

Bruno




John K Clark




--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

http://iridia.ulb.ac.be/~marchal/
