Nice work, Jed, and Vibrator's comments are right on as well.  As an old
retired biologist, it has been heartening to see the neurosciences admitting
to higher neural and mental functioning in animals, including the
near-human intellectual and cognitive achievements you noted in the cat.
Psycho- and neuro-sciences are indeed making great strides, allowing some
deep peeks into the mechanisms and substrates that produce some of our
human (or animal) behaviors and cognitions, even our thoughts and beliefs.
I survey some of this in my little (layman-directed) book on Amazon
("Mind From Matter"), where I try to encourage expanding this into actual
human societal realms.  But the fact that our biological apparatus, i.e.
the brain, is the complete and sole substrate for our human thoughts,
beliefs, actions and behaviors is frequently ignored (or in many cases,
totally unsuspected).  While we all intuitively recognize that humanity is
frail, incomprehensibly complex, uncertain, and quirky, we usually fail to
recognize these biological facts as well.  We see, though, that we are
often led 'astray' in various ways by our own brains, with their inborn
infinitude of programming and variations.  I believe we are, however,
making great strides, mainly through science, in 'adjusting' (sometimes!)
our thinking or behavior as a society and a world.  But we have such a
long, long way to go.  My present wish is for social, and particularly
political, scientists to get with it and make some serious efforts to use
science to develop guidelines and principles to help societies in
practical ways.  Of course, science does not deliver truth, wisdom,
judgement, comity, creativity, or 'correct' beliefs or anything of that
sort directly, but we still require it as a societal facilitator and glue
in a million ways.

On Mon, Feb 29, 2016 at 3:04 PM, Vibrator ! <mrvibrat...@gmail.com> wrote:

> Cool topic, cognitive science is one of my interests.  I think that at the
> stage we're at, the outstanding technical challenges aren't so much
> quantitative as qualitative - we need to crack the Hard Problem, for an
> emergent, bottom-up intelligence rather than a "brute forced" but top-down
> Turing champion.
>
>
> Although we've made strides in all areas of dynamical systems theory, we
> can still only speculate about the general principles of multicellular
> information processing - in particular we lack a general principle of
> informational binding (the so-called binding problem), that would unify all
> the disparate sensory modalities and the vagaries of their respective
> sensory systems with a general principle of consciousness.  So, some
> researchers will produce limited success with cellular automata, another
> team with game theory and so on... we already have the quantitative ability
> to simulate the smallest nervous systems (nematodes etc.), but no means of
> understanding whether a given simulation would be processing - or, more to
> the point, "feeling" - in the same manner as a living organism.
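The "quantitative ability" mentioned above can be illustrated with a minimal sketch of the kind of model used to simulate small nervous systems: a leaky integrate-and-fire network. Everything here is invented for illustration — the three-neuron chain, the weights and the parameters come from no real connectome, nematode or otherwise:

```python
# Minimal sketch (illustrative only): a leaky integrate-and-fire (LIF)
# network. The 3-neuron "circuit" and all parameters are invented for
# the example, not taken from any mapped nervous system.
import numpy as np

def simulate_lif(weights, input_current, steps=200, dt=1.0,
                 tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a network of leaky integrate-and-fire neurons.

    weights[i, j] is the synaptic weight from neuron j to neuron i.
    Returns a (steps, n) array of spike indicators (0 or 1).
    """
    n = weights.shape[0]
    v = np.zeros(n)                      # membrane potentials
    spikes = np.zeros((steps, n))
    for t in range(steps):
        # Synaptic drive from the previous step's spikes
        syn = weights @ spikes[t - 1] if t > 0 else np.zeros(n)
        # Leaky integration: decay toward rest, plus external + synaptic input
        v += dt / tau * (-v) + input_current + syn
        fired = v >= v_thresh
        spikes[t, fired] = 1.0
        v[fired] = v_reset               # reset neurons that spiked
    return spikes

# A toy 3-neuron chain: "sensory" -> "inter" -> "motor"
w = np.array([[0.0, 0.0, 0.0],
              [1.2, 0.0, 0.0],
              [0.0, 1.2, 0.0]])
drive = np.array([0.06, 0.0, 0.0])       # constant current into neuron 0 only
out = simulate_lif(w, drive)
```

A real nematode simulation wires hundreds of such units from a mapped connectome; the point raised above stands regardless — nothing in the spike trains tells us whether the simulation "feels."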
>
> And here, the field is still beset by philosophical dogma, such as the
> notion of "qualia" - essentially an argument for the irreducible complexity
> of subjective experience - and widespread doubts that any tractable handle
> on the problem is even possible (typified by David Chalmers' "zombie Dave"
> poser - we cannot know that any other entity is conscious in the same
> manner as ourselves); but although I go along with Dennett in many of his
> contentions, I have in my own research identified something traditionally
> believed to be entirely subjective, but which is, in fact, an objective
> universal; namely, the perception of octave equivalence, which I believe
> does give us a "quale", albeit one amenable to definitive description and
> replication.  In short, I believe it's possible to engineer a neural net
> that would perceive octaves as "equivalent" in the same way we do, and that
> as such it would be "feeling" and processing information about that
> sensation in a naturalistic manner.
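The objective part of the octave-equivalence claim is easy to state computationally (a sketch of one possible framing, not the poster's neural-net model): two tones share a pitch class when their frequency ratio is a power of two, i.e. when the log2 of their frequencies agree modulo 1.

```python
# Sketch: octave equivalence as an objective property of frequency
# ratios. Two tones are octave-equivalent when their ratio is a power
# of 2, i.e. they share a "pitch class" under log2 modulo 1.
import math

def pitch_class(freq_hz):
    """Map a frequency to the unit interval [0, 1): its pitch class."""
    return math.log2(freq_hz) % 1.0

def octave_equivalent(f1, f2, tol=1e-9):
    """True when f1 and f2 differ by a whole number of octaves."""
    diff = abs(pitch_class(f1) - pitch_class(f2))
    return min(diff, 1.0 - diff) < tol  # handle wrap-around at 1.0

# A at 220 Hz, 440 Hz and 880 Hz collapse to one pitch class;
# a fifth above (660 Hz) does not.
```

Whether a network that groups inputs by this invariant thereby "feels" the equivalence is, of course, exactly the hard-problem question.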
>
> The key to the binding problem is deriving an objective theory of metadata
> - i.e. identifying how living brains process information "about" other
> information, be that sensory input, motor control or general knowledge.
>
> Work on the "semantic web" (AKA the "web of data", associated with Web
> 3.0), in which information is indexed by context, will inevitably spin off
> advances in collating and processing metadata, but this alone won't see us
> out of the "zombie Dave" dilemma.
>
> There's always the question of "does it really matter" - if an AI says
> "here, hold my pint" before trashing a human in an ethics debate, who cares
> if it's genuinely conscious in the same way as us?  But look at where we're
> headed with autonomous vehicles etc. (some lawmakers have already ruled
> that such cars can be considered as "responsible" drivers from a legal
> perspective) - if an AI is chauffeuring me around, then actually I'd be
> rather comforted in the knowledge that it doesn't "want" to crash, that it
> truly feels and understands its responsibilities... if only for its own
> sense of self-preservation, rather than mine.
>
> So for me, an AI that simply employed deductive reasoning wouldn't be such
> a breakthrough - we already have the logic to codify such aspects of
> intelligence.  Once we've cracked the hard problem, we won't need to design
> anything but the most rudimentary solutions, then sit back and let nature
> do the rest...
>
>
> TL;DR
>
> True AI will be cultivated, not contrived.
>
>
>
> On Mon, Feb 29, 2016 at 4:01 PM, Jed Rothwell <jedrothw...@gmail.com>
> wrote:
>
>> There are a zillion cute cat videos on the Internet. This one is food for
>> thought. It tells you a lot about the nature of animal intelligence, and it
>> demonstrates that animals are still far ahead of the best robots and
>> artificial intelligence computers in many ways. This is a 6-second video
>> GIF.
>>
>> http://mlkshk.com/p/1691Z
>>
>> Let me list the events shown here.
>>
>> 1. A cat is sitting on a dining table after a meal, with a glass half
>> full of water on her left.
>>
>> 2. The cat wants to drink some of the water from the glass but she cannot
>> reach into the glass with her mouth to lap it up. So she reaches into the
>> glass with her left front paw, wets the paw, brings it to her mouth, and
>> licks it off.
>>
>> 3. She is looking down and away from the glass. A human reaches over and
>> removes the glass. The cat does not notice this. Without looking in the
>> direction of the glass, she reaches back into where the glass was a moment
>> ago, again using her left paw. She reaches up and over where the glass
>> should have been.
>>
>> 4. She notices that the glass is not there and looks to where it was, and
>> then looks up, in the direction of the human.
>>
>> What can we learn from this?
>>
>> The cat has clear intentions and short term goals, and knows how to act
>> on them. (This may seem obvious to you, and not extraordinary, but it is
>> difficult to simulate such intentions and plans in a robot.)
>>
>> The cat knows how to use her paw in place of her tongue to get water.
>> This may be instinct.
>>
>> The cat knows that inanimate objects do not move. I doubt she would
>> attempt to reach for a mouse without visually reconfirming its presence.
>>
>> The cat knows that the immediate past is similar to the present. She
>> knows you can usually depend on this. But she also immediately realizes
>> that in this case an anomaly has occurred and the present does not resemble
>> the past.
>>
>> She knows that objects she cannot see or that she has turned away from
>> remain in existence. This is called "object permanence." Human babies
>> develop it between 1 and 8 months of age, in increasingly sophisticated
>> ways.
>>
>> The cat has superb three-dimensional memory, body awareness and
>> sensorimotor awareness.
>>
>> She understands how she fits into three-dimensional space. She knows that
>> in order to reach into the glass she has to lift her paw up and over the
>> edge. A biologist described a dramatic example of this. Suppose a dog is
>> carrying a stick in its mouth while it trots toward a wooden fence with a
>> board missing, leaving a narrow gap. The dog intends to pass through the
>> fence. To fit through the fence carrying the stick, the dog will turn its
>> head sideways as it approaches the fence. In a fraction of a second the dog
>> sees the three-dimensional space and adjusts its body to fit the geometry,
>> knowing that if it keeps its head level the stick will bash into the sides
>> of the narrow space.
>>
>> The cat recognizes an anomalous event (the disappearing water) and
>> immediately looks to visually confirm it. I wonder whether the cat also
>> realizes that a human being caused the change. She probably does.
>>
>> I think it would take the fastest supercomputer and robot much longer to
>> take all of these actions, and I doubt that it would synthesize an
>> understanding of events as complete as the cat does. That is to say, I do
>> not think the robot would even attempt to reach for something outside its
>> visual range, and if it did and failed to find the object, I do not think
>> that you could query the robot and show that it "understood" why it failed.
>> ("Because the object was moved.") Robots can drive automobiles nowadays but
>> they cannot do something as simple as this. Yet animals even smaller and
>> less intelligent than cats can handle this sort of task.
>>
>> Robots still "think" very slowly in some ways, even though they can
>> handle real-time traffic while driving automobiles. I have heard that
>> robots can fold laundry but it takes hours.
>>
>> This does not mean that robots will never learn to act as swiftly or in
>> ways as sophisticated as this cat does, but it does mean that researchers
>> have a great deal more work to do.
>>
>> - Jed
>>
>>
>
