Seems we want autonomous robots to either fuck or kill. Who cares about
sentience?
---In FairfieldLife@yahoogroups.com, <noozguru@...> wrote :

 But do they dream of electric sheep?
 
 On 05/15/2014 02:15 AM, salyavin808 wrote:
 
   Sentient robots? Not possible if you do the maths
 
 So long, robot pals – and robot overlords. Sentient machines may never exist, 
according to a variation on a leading mathematical model of how our brains 
create consciousness:
http://www.newscientist.com/article/mg22229645.000-the-fourth-state-of-matter-consciousness.html
Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and
his colleagues have developed a mathematical framework for consciousness
(http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003588)
that has become one of
the most influential theories in the field. According to their model, the 
ability to integrate information is a key property of consciousness. They argue 
that in conscious minds, integrated information cannot be reduced into smaller 
components. For instance, when a human perceives a red triangle, the brain 
cannot register the object as a colourless triangle plus a shapeless patch of 
red.
But there is a catch, argues Phil Maguire (http://www.cs.nuim.ie/%7Epmaguire/)
at the National University of Ireland in Maynooth. He points to a computational
device called the XOR logic gate, which takes two inputs, A and B. The output
of the gate is "1" if A and B are different and "0" if A and B are the same. In
this scenario, it is impossible to predict the output from A or B alone – you
need both.
Memory edit

Crucially, this type of integration requires loss of information,
says Maguire: "You have put in two bits, and you get one out. If the brain
integrated information in this fashion, it would have to be continuously
haemorrhaging information."
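You can count the loss directly (again a hypothetical Python illustration, not
from the paper): four distinct input pairs collapse onto two outputs, so the
mapping cannot be run backwards.

    # Two bits in, one bit out: each output has two preimages,
    # so the original inputs cannot be recovered from the result.
    from collections import defaultdict

    preimages = defaultdict(list)
    for a in (0, 1):
        for b in (0, 1):
            preimages[a ^ b].append((a, b))

    print(dict(preimages))
    # {0: [(0, 0), (1, 1)], 1: [(0, 1), (1, 0)]}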
 Maguire and his colleagues say the brain is unlikely to do this, because 
repeated retrieval of memories would eventually destroy them. Instead, they 
define integration in terms of how difficult information is to edit.
 Consider an album of digital photographs. The pictures are compiled but not 
integrated, so deleting or modifying individual images is easy. But when we 
create memories, we integrate those snapshots of information into our bank of 
earlier memories. This makes it extremely difficult to selectively edit out one 
scene from the "album" in our brain.
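To make the analogy concrete (a toy Python sketch of my own, using a hash as a
crude stand-in for integrated memory – nothing like Maguire's actual
formalism): independent snapshots allow local edits, a folded-together store
does not.

    import hashlib

    # "Compiled" album: independent snapshots, trivially editable.
    album = ["beach", "party", "hike"]
    del album[1]                      # a purely local edit

    # "Integrated" store: every snapshot folded into a single digest.
    def integrate(snapshots):
        h = hashlib.sha256()
        for s in snapshots:
            h.update(s.encode())
        return h.hexdigest()

    memory = integrate(["beach", "party", "hike"])
    # No operation removes just "party" from `memory`; the only way to
    # "edit" is to rebuild the whole thing from the surviving snapshots.
    edited = integrate(["beach", "hike"])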
 Based on this definition, Maguire and his team have shown mathematically that 
computers can't handle any process that integrates information completely. If 
you accept that consciousness is based on total integration, then computers 
can't be conscious.
 Read the whole story here:
 
http://www.newscientist.com/article/dn25560-sentient-robots-not-possible-if-you-do-the-maths.html#.U3SA5_ldVZK

But no, you won't find anything about actual transcending – whatever that is –
that was a joke about John's surreal exposition from the weekend. I suspect
that for a robot to transcend we would have to program it with a set of
emotional and spatial/visual recognition components that could be rearranged to
reflect an inner state of disengagement from non-metaphorical stimulus/response
routines, and a corresponding increase in reward systems, possibly feeding back
into total rejection of programmed responses to outer awareness in favour of
self-satisfaction.
Basically, we'd have to build a robot to be just like us. But most scientists
into the idea of artificial intelligence aren't thinking along those lines. In
fact, the best thing about this article is the link to one of the latest
theories of consciousness:

 Integrated information theory (IIT) approaches the relationship between 
consciousness and its physical substrate by first identifying the fundamental 
properties of experience itself: existence, composition, information, 
integration, and exclusion. IIT then postulates that the physical substrate of 
consciousness must satisfy these very properties. We develop a detailed 
mathematical framework in which composition, information, integration, and 
exclusion are defined precisely and made operational. This allows us to 
establish to what extent simple systems of mechanisms, such as logic gates or 
neuron-like elements, can form complexes that can account for the fundamental 
properties of consciousness. Based on this principled approach, we show that 
IIT can explain many known facts about consciousness and the brain, leads to 
specific predictions, and allows us to infer, at least in principle, both the 
quantity and quality of consciousness for systems whose causal structure is 
known. For example, we show that some simple systems can be minimally 
conscious, some complicated systems can be unconscious, and two different 
systems can be functionally equivalent, yet one is conscious and the other one 
is not.
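
To get a feel for what "integration" means in a system of logic gates, here is
a deliberately crude Python sketch (my own toy measure, emphatically not the
phi defined in the paper): compare how much a two-gate system's state says
about its own future when taken whole versus cut into its parts.

    # A toy "integration" calculation on two coupled logic gates.
    # NOTE: a simplistic illustration in the spirit of IIT, *not* the
    # actual phi measure of Oizumi, Albantakis & Tononi (2014).
    from collections import Counter
    from math import log2

    def update(a, b):
        # Node A computes XOR of both nodes; node B copies node A.
        return a ^ b, a

    STATES = [(a, b) for a in (0, 1) for b in (0, 1)]   # uniform prior

    def entropy(counts, total):
        return -sum(c / total * log2(c / total) for c in counts.values())

    # The update is deterministic, so the whole system's current state
    # carries H(next state) bits of information about its future.
    whole = entropy(Counter(update(a, b) for a, b in STATES), len(STATES))

    # Cut the system apart: how much does each node's own past say
    # about its own future, with the other node treated as noise?
    def part_info(idx):
        h_next = entropy(Counter(update(*s)[idx] for s in STATES), len(STATES))
        h_cond = 0.0
        for v in (0, 1):
            sub = [s for s in STATES if s[idx] == v]
            h_cond += 0.5 * entropy(Counter(update(*s)[idx] for s in sub), len(sub))
        return h_next - h_cond   # mutual information in bits

    parts = part_info(0) + part_info(1)
    print(f"whole: {whole:.2f} bits, parts: {parts:.2f} bits")
    # Output: whole: 2.00 bits, parts: 0.00 bits -- the information
    # is only there when the two gates are taken together.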

 
 
Read all about it, but if you're used to thinking about consciousness in terms
of touchy-feely spiritual-speak, it'll be like cycling uphill with the brakes
on.
It is worth the effort though:

 
http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003588