Douglas Hofstadter Article

2013-10-24 Thread Craig Weinberg
http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/

The Man Who Would Teach Machines to Think

"...Take Deep Blue, the IBM supercomputer that bested the chess grandmaster 
Garry Kasparov. Deep Blue won by brute force. For each legal move it could 
make at a given point in the game, it would consider its opponent’s 
responses, its own responses to those responses, and so on for six or more 
steps down the line. With a fast evaluation function, it would calculate a 
score for each possible position, and then make the move that led to the 
best score. What allowed Deep Blue to beat the world’s best humans was raw 
computational power. It could evaluate up to 330 million positions a 
second, while Kasparov could evaluate only a few dozen before having to 
make a decision. 

Hofstadter wanted to ask: Why conquer a task if there’s no insight to be 
had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so 
what? Does that tell you something about how *we* play chess? No. Does it 
tell you about how Kasparov envisions, understands a chessboard?” A brand 
of AI that didn’t try to answer such questions—however impressive it might 
have been—was, in Hofstadter’s mind, a diversion. He distanced himself from 
the field almost as soon as he became a part of it. “To me, as a fledgling 
AI person,” he says, “it was self-evident that I did not want to get 
involved in that trickery. It was obvious: I don’t want to be involved in 
passing off some fancy program’s behavior for intelligence when I know that 
it has nothing to do with intelligence. And I don’t know why more people 
aren’t that way...”

This is precisely my argument against John Clark's position.

Another quote I will be stealing:

"Airplanes don’t flap their wings; why should computers think?"

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Douglas Hofstadter Article

2013-10-24 Thread Telmo Menezes
On Thu, Oct 24, 2013 at 6:39 PM, Craig Weinberg  wrote:
> http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
>
> The Man Who Would Teach Machines to Think

I was just reading this too. I agree.

> This is precisely my argument against John Clark's position.
>
> Another quote I will be stealing:
>
> "Airplanes don’t flap their wings; why should computers think?"

I think the intended meaning is closer to: "airplanes don't fly by
flapping their wings; why should computers be intelligent by
thinking?"




Re: Douglas Hofstadter Article

2013-10-24 Thread Craig Weinberg


On Thursday, October 24, 2013 12:43:49 PM UTC-4, telmo_menezes wrote:
> > This is precisely my argument against John Clark's position. 
> > 
> > Another quote I will be stealing: 
> > 
> > "Airplanes don’t flap their wings; why should computers think?" 
>
> I think the intended meaning is closer to: "airplanes don't fly by 
> flapping their wings, why should computers be intelligent by 
> thinking?". 
>

It depends whether you want 'thinking' to imply awareness or not. I think 
the point is that we should not assume that computation is in any way 
'thinking' (or intelligence, for that matter). 'Thinking' is not passive 
enough to describe computation; it is like saying that a net is 'fishing'. 
Computation is many nets within nets, devoid of intention or perspective. 
It does the opposite of thinking: it is a method for petrifying the 
measurable residue or reflection of thought.






Re: Douglas Hofstadter Article

2013-10-24 Thread John Mikes
Craig and Telmo:
Is "anticipation" involved at all? Deep Blue anticipated hundreds of steps
in advance (and evaluated a potential outcome before accepting or
rejecting it).
What else is involved in "thinking"? I would like to know, because I have
no idea.
John Mikes




Re: Douglas Hofstadter Article

2013-10-24 Thread Craig Weinberg
On Thursday, October 24, 2013 3:08:26 PM UTC-4, JohnM wrote:
>
> Craig and Telmo:
> Is "anticipation" involved at all? Deep Blue anticipated hundreds of steps 
> in advance (and evaluated a potential outcome before accepting, or 
> rejecting).
> What else is in "thinking" involved? I would like to know, because I have 
> no idea. 
> John Mikes
>

It's hard to talk about the particulars of pseudo-sentience, since all of 
our language is geared toward the assumption of sentience. We haven't had 
time to develop terms to discern between map and territory when the 
territory is a trompe l'oeil illusion.

When we think, we are rehearsing or pretending to some extent. It is an act 
of imagination that is anticipatory. The etymology of 'anticipate' traces 
back to a sense of "taking into possession beforehand". Did Deep Blue take 
anything into possession, or did it merely exhaust its ritual of mindless 
reductions, compressing a four-dimensional object of game permutations 
into a one-dimensional path which matches its mindless criteria?

What a computer does would be thinking if it could care what it was 
thinking about, but since it is built from the outside in, it is incapable 
of caring about the games that we designed it to play. It isn't playing a 
game at all; it is filtering one abstract pattern against another without 
reference to 'before' or 'after'. It's not anticipating from its point of 
view; it's just rendering a set of positions which satisfy a rule.

I think that what complicates the story is that the power of human thought 
is in its distance from the feelings and sensations that it has evolved 
from. Think of the evolution of the human experience as an artistic 
movement, which has oscillated between realism, impressionism, cubism, and 
now finally abstract minimalism. Without the whole history of art behind 
it, the stark forms of minimalism seem simple and mechanical...and they 
are, in the absence of an appreciation of the whole story of art. Thinking 
is an art that acts like a science. Computation is a science which we can 
use to frame art. The danger is that we have overlooked what has led up to 
thinking and now mistake the frame for the canvas.

Thanks,
Craig
 


Re: Douglas Hofstadter Article

2013-10-24 Thread Telmo Menezes
Hi John,

On Thu, Oct 24, 2013 at 9:08 PM, John Mikes  wrote:
> Craig and Telmo:
> Is "anticipation" involved at all? Deep Blue anticipated hundreds of steps
> in advance (and evaluated a potential outcome before accepting, or
> rejecting).

Sure. The issue, though, is that Deep Blue does this by brute force. It
computes billions of possible scenarios to arrive at a decision. It's
clear that human beings don't do that. They are more intelligent in
the sense that they can play competitively while considering only a
small fraction of the scenarios. How do we do this? There is almost no
real AI research nowadays because people gave up on answering this
question. It's related to many other interesting questions: how do we
read and understand the meaning of a text? Google is like something
with the intelligence of an ant (probably still way less) but with vast
amounts of computational power. Again, this is brute-forcing the
problem, and it doesn't come close to the level of understanding that a
smart 9-year-old can have when reading.

On the linguistic side, Chomsky is also outspoken against the
statistical "dumb" approaches.

> What else is in "thinking" involved? I would like to know, because I have no
> idea.

Hofstadter's ideas are very deep and I don't claim to fully understand
them. I do think that his concept of the "strange loop" is important. Every
time there's something we can't define (intelligence, life,
consciousness), strange loops seem to be involved. Strange loops
feed back across abstraction layers. Goals->feelings->cognition->Goals.
Environment->DNA->Organism->Environment and so on -- in a very
informal way, please pay no attention to the lack of rigour here.

I think this is compatible with comp and several things that Bruno
alludes to. The insight also seems to come from similar sources --
notably Gödel's theorems.

On the engineering of AI side, I believe we are still in the middle
ages when it comes to computation environments and languages. One of
my intuitions is that languages that facilitate the creation of
self-modifying computer code are an important step.
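To make that intuition concrete, here is a minimal sketch of self-modifying code (a toy illustration using only plain Python, not a proposal for what such a language should look like): the program generates new source text for one of its own functions and rebinds it at runtime.

```python
# Toy self-modifying code: a program that rewrites one of its own
# functions by generating new source and executing it with exec/compile.

def make_scorer(weights):
    """Generate and compile a scoring function from a list of weights."""
    terms = " + ".join(f"{w!r} * x[{i}]" for i, w in enumerate(weights))
    src = f"def score(x):\n    return {terms}\n"  # newly written source
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["score"]

score = make_scorer([1.0, 2.0])
# later, the program "rewrites" its own scorer in response to new data
score = make_scorer([0.5, -1.0, 3.0])
```

Mainstream languages make this possible but awkward and unsafe; a language designed around it would presumably make such rewriting as routine and checkable as ordinary function calls.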

Telmo.


Re: Douglas Hofstadter Article

2013-10-24 Thread LizR
I think what Deep Blue does is similar to what *parts *of the brain do, and
it probably does *that* better (some "human computers" seem to use this
facility in a more direct way than most of us can). However obviously
something is missing - possibly the system that integrates all these little
"engines" into a whole. (Or possibly not...)



Re: Douglas Hofstadter Article

2013-10-24 Thread Craig Weinberg


On Thursday, October 24, 2013 4:47:15 PM UTC-4, Liz R wrote:
>
> I think what Deep Blue does is similar to what *parts *of the brain do, 
> and it probably does *that* better (some "human computers" seem to use 
> this facility in a more direct way than most of us can). However obviously 
> something is missing - possibly the system that integrates all these little 
> "engines" into a whole. (Or possibly not...)
>

I agree that what Deep Blue does is similar to what parts of the brain do *for 
us*, just as our own actions contribute to what a city or country does. 
That doesn't mean that is all that the parts of the brain are doing, or 
even that what they are doing would be comprehensible to us. Neurons have 
their own lives to worry about...perhaps not in the same way that we have 
our lives to worry about - indeed they may have shed much of their 
independence long ago evolutionarily, but I imagine that they at least have 
more local conditions to worry about than we would probably assume. If the 
relation of what our bodies do to what we experience is any guide, 'what it 
is like to be a neuron' is probably mostly inconceivable from our 
perspective.

As far as the binding or integration goes, I see that as only half of the 
picture. All neurons are replications of a single stem cell, so they are 
already one thing. Mitosis is a diffraction or divergence from singularity. 
What is emerging is insensitivity and space...synapse, convolution, 
hemisphere separation.


>
> On 25 October 2013 08:55, Telmo Menezes 
> > wrote:
>
>> Hi John,
>>
>> On Thu, Oct 24, 2013 at 9:08 PM, John Mikes > 
>> wrote:
>> > Craig and Telmo:
>> > Is "anticipation" involved at all? Deep Blue anticipated hundreds of 
>> steps
>> > in advance (and evaluated a potential outcome before accepting, or
>> > rejecting).
>>
>> Sure. This issue though is that Deep Blue does this by brute force. It
>> computes billions of possible scenarios to arrive at a decision. It's
>> clear that human beings don't do that. They are more intelligent in
>> the sense that they can play competitively while only considering a
>> small fraction of the scenarios. How do we do this? There is almost no
>> real AI research nowadays because people gave up on answering this
>> question. It's related to many other interesting questions: how do we
>> read and understand the meaning of a text? Google is like something
>> with the intelligence of an ant (probably still way less) but vast
>> amounts of computational power. Again, this is brute-forcing the
>> problem and it doesn't come close to the level of understanding that a
>> smart 9 year old can have when reading.
>>
>> On the linguistic side, Chomsky is also outspoken against the
>> statistical "dumb" approaches.
>>
>> > What else is in "thinking" involved? I would like to know, because I 
>> have no
>> > idea.
>>
>> Hofstadter's ideas are very deep and I don't claim to fully understand
>> them. I do think his concept of the "strange loop" is important. Every
>> time there's something we can't define (intelligence, life,
>> consciousness), strange loops seem to be involved. Strange loops
>> feed back across abstraction layers: Goals->feelings->cognition->Goals,
>> Environment->DNA->Organism->Environment, and so on -- in a very
>> informal way, please pay no attention to the lack of rigour here.
>>
>> I think this is compatible with comp and several things that Bruno
>> alludes to. The insight also seems to come from similar sources --
>> notably Gödel's theorems.
>>
>> On the engineering of AI side, I believe we are still in the middle
>> ages when it comes to computation environments and languages. One of
>> my intuitions is that languages that facilitate the creation of
>> self-modifying computer code are an important step.
>>
>> Telmo.
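One way to read Telmo's last point is a language where programs can treat their own source as data. A minimal sketch in Python; the `step`/`compile_step` names are invented for the example:

```python
# Minimal sketch of code that rewrites its own source at runtime.
# Purely illustrative; `step` and `compile_step` are invented names.

source = "def step(x):\n    return x + 1\n"

def compile_step(src):
    """Build a function object from a string of source code."""
    namespace = {}
    exec(src, namespace)
    return namespace["step"]

step = compile_step(source)      # original behaviour: increment

# The program edits its own source text, then swaps in the new code.
source = source.replace("x + 1", "x * 2")
step = compile_step(source)      # new behaviour: double
```

Lisp-family languages make this kind of thing first-class (code is literally a data structure); in most mainstream languages it remains awkward, which may be part of what Telmo means by "middle ages".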
>>
>> > John Mikes
>> >
>> >
>> > On Thu, Oct 24, 2013 at 1:02 PM, Craig Weinberg wrote:
>> >>
>> >>
>> >>
>> >> On Thursday, October 24, 2013 12:43:49 PM UTC-4, telmo_menezes wrote:
>> >>>
>> >>> On Thu, Oct 24, 2013 at 6:39 PM, Craig Weinberg wrote:
>> >>> >
>> >>> > http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
>> >>> >
>> >>> > The Man Who Would Teach Machines to Think
>> >>> >
>> >>> > [Atlantic article excerpt trimmed]

Re: Douglas Hofstadter Article

2013-10-24 Thread meekerdb

On 10/24/2013 12:08 PM, John Mikes wrote:

Craig and Telmo:
Is "anticipation" involved at all? Deep Blue anticipated hundreds of steps in advance 
(and evaluated a potential outcome before accepting, or rejecting).

What else is in "thinking" involved? I would like to know, because I have no 
idea.
John Mikes


Learning from experience.  Actually I think Deep Blue could do some learning by analyzing 
games and adjusting the values it gave to positions. But one reason it seems so 
unintelligent is that its scope of perception is very narrow (i.e. chess games) and so it 
can't learn some things a human player can.  For example Deep Blue couldn't see Kasparov 
look nervous, ask for changes in the lighting, hesitate slightly before moving a piece,...
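The tuning Brent describes (adjusting the values given to positions by analyzing games) can be sketched as an error-driven update on a linear evaluation function. The features and training data below are invented for illustration:

```python
# Sketch of learning an evaluation function from game outcomes: nudge
# the weights of a linear position score toward agreement with results.
# Features and data are invented; real engines use far richer features.

def evaluate(position, weights):
    """Linear evaluation: weighted sum of position features."""
    return sum(w * f for w, f in zip(weights, position))

def tune(weights, games, lr=0.01):
    """Error-driven (LMS-style) update from (features, outcome) pairs."""
    for features, outcome in games:
        error = outcome - evaluate(features, weights)
        weights = [w + lr * error * f for w, f in zip(weights, features)]
    return weights

# Invented data: feature vectors (say, material and mobility) labelled
# with the game result (+1 win, -1 loss).
games = [([1.0, 0.5], 1.0), ([-1.0, 0.2], -1.0)] * 200
weights = tune([0.0, 0.0], games)
```

After tuning, positions resembling past wins score positive and positions resembling past losses score negative; the "learning" is confined entirely to the weights of a fixed feature set, which is Brent's point about narrow scope.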


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Douglas Hofstadter Article

2013-10-24 Thread Telmo Menezes
On Thu, Oct 24, 2013 at 7:02 PM, Craig Weinberg  wrote:
>
>
> On Thursday, October 24, 2013 12:43:49 PM UTC-4, telmo_menezes wrote:
>>
>> On Thu, Oct 24, 2013 at 6:39 PM, Craig Weinberg wrote:
>> >
>> > http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
>> >
>> > The Man Who Would Teach Machines to Think
>> >
>> > "...Take Deep Blue, the IBM supercomputer that bested the chess
>> > grandmaster
>> > Garry Kasparov. Deep Blue won by brute force. For each legal move it
>> > could
>> > make at a given point in the game, it would consider its opponent’s
>> > responses, its own responses to those responses, and so on for six or
>> > more
>> > steps down the line. With a fast evaluation function, it would calculate
>> > a
>> > score for each possible position, and then make the move that led to the
>> > best score. What allowed Deep Blue to beat the world’s best humans was
>> > raw
>> > computational power. It could evaluate up to 330 million positions a
>> > second,
>> > while Kasparov could evaluate only a few dozen before having to make a
>> > decision.
>> >
>> > Hofstadter wanted to ask: Why conquer a task if there’s no insight to be
>> > had
>> > from the victory? “Okay,” he says, “Deep Blue plays very good chess—so
>> > what?
>> > Does that tell you something about how we play chess? No. Does it tell
>> > you
>> > about how Kasparov envisions, understands a chessboard?” A brand of AI
>> > that
>> > didn’t try to answer such questions—however impressive it might have
>> > been—was, in Hofstadter’s mind, a diversion. He distanced himself from
>> > the
>> > field almost as soon as he became a part of it. “To me, as a fledgling
>> > AI
>> > person,” he says, “it was self-evident that I did not want to get
>> > involved
>> > in that trickery. It was obvious: I don’t want to be involved in passing
>> > off
>> > some fancy program’s behavior for intelligence when I know that it has
>> > nothing to do with intelligence. And I don’t know why more people aren’t
>> > that way...”
>>
>> I was just reading this too. I agree.
>>
>> > This is precisely my argument against John Clark's position.
>> >
>> > Another quote I will be stealing:
>> >
>> > "Airplanes don’t flap their wings; why should computers think?"
>>
>> I think the intended meaning is closer to: "airplanes don't fly by
>> flapping their wings, why should computers be intelligent by
>> thinking?".
>
>
> It depends whether you want 'thinking' to imply awareness or not.

Ok. I don't think we can know that in any case.

> I think
> the point is that we should not assume that computation is in any way
> 'thinking' (or intelligence for that matter). I think that 'thinking' is not
> passive enough to describe computation. It is to say that a net is
> 'fishing'. Computation is many nets within nets, devoid of intention or
> perspective. It does the opposite of thinking, it is a method for petrifying
> the measurable residue or reflection of thought.

Ok but let's take a human grand master playing chess. You don't think
a computer can play like him?

>
>>



Re: Douglas Hofstadter Article

2013-10-24 Thread Platonist Guitar Cowboy
On Thu, Oct 24, 2013 at 11:29 PM, Telmo Menezes wrote:

> On Thu, Oct 24, 2013 at 7:02 PM, Craig Weinberg wrote:
> >
> >
> > On Thursday, October 24, 2013 12:43:49 PM UTC-4, telmo_menezes wrote:
> >>
> >> On Thu, Oct 24, 2013 at 6:39 PM, Craig Weinberg wrote:
> >> >
> >> > http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
> >> >
> >> > The Man Who Would Teach Machines to Think
> >> >
> >> > [Atlantic article excerpt trimmed]
> >>
> >> I was just reading this too. I agree.
> >>
> >> > This is precisely my argument against John Clark's position.
> >> >
> >> > Another quote I will be stealing:
> >> >
> >> > "Airplanes don’t flap their wings; why should computers think?"
> >>
> >> I think the intended meaning is closer to: "airplanes don't fly by
> >> flapping their wings, why should computers be intelligent by
> >> thinking?".
> >
> >
> > It depends whether you want 'thinking' to imply awareness or not.
>
> Ok. I don't think we can know that in any case.
>
> > I think
> > the point is that we should not assume that computation is in any way
> > 'thinking' (or intelligence for that matter). I think that 'thinking' is
> not
> > passive enough to describe computation. It is to say that a net is
> > 'fishing'. Computation is many nets within nets, devoid of intention or
> > perspective. It does the opposite of thinking, it is a method for
> petrifying
> > the measurable residue or reflection of thought.
>
> Ok but let's take a human grand master playing chess. You don't think
> a computer can play like him?
>
>
This relates to what you said earlier which I agree with:

*They are more intelligent in
the sense that they can play competitively while only considering a
small fraction of the scenarios. How do we do this? There is almost no
real AI research nowadays because people gave up on answering this
question. *

The answer lies somewhere in building branch histories and databases that
are for now only partial. The computer cannot beat humans without databases
for openings, middle, and endgame. I believe this is what freaked out
Kasparov in the questionable game and what gives his suspicion of human
intervention in the code, which IBM never ruled out or proved negatively
between games, some substance. Kasparov lost because IBM eventually accrued
enough understanding of Kasparov's database (dozens of years of notes and
logs that make up his holy grail secret) to not let it fall for Kasparov's
gambit.

Kasparov's and any GM's algorithm for beating chess engines often runs
along the lines of:

Keep position closed via Botvinnik type openings and middlegame so the
computer will have to contend with billions of possible move continuations
instead of a few dozen million. Then implement precise, but highly complex,
long term strategy that offers both positional and material gambit for
twenty or so moves which is designed to flip at exactly the point of the
computer's computational horizon, and the computer loses.
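The "computational horizon" gambit described above can be illustrated with a depth-limited searcher that prefers a line whose value collapses just beyond its cutoff. The tiny game tree and values are invented to make the flip explicit, and the searcher maximizes at every ply purely for simplicity:

```python
# Sketch of the horizon effect: a depth-limited searcher rates a line
# highly because the refutation lies one ply past its cutoff. Tree and
# values are invented; a real engine alternates max/min plies.

def best_score(node, depth, tree, evaluate):
    """Depth-limited search with static evaluation at the cutoff."""
    children = tree.get(node, [])
    if depth == 0 or not children:
        return evaluate(node)
    return max(best_score(c, depth - 1, tree, evaluate) for c in children)

# Line "a" looks strong for three plies, then collapses at the fourth;
# line "b" is steadily modest.
tree = {"root": ["a", "b"], "a": ["a1"], "a1": ["a2"], "a2": ["a3"],
        "b": ["b1"], "b1": ["b2"], "b2": ["b3"]}
values = {"a": 5, "a1": 5, "a2": 5, "a3": -100,
          "b": 1, "b1": 1, "b2": 1, "b3": 1}

shallow = {c: best_score(c, 2, tree, values.get) for c in tree["root"]}
deep = {c: best_score(c, 3, tree, values.get) for c in tree["root"]}
```

The shallow searcher picks the poisoned line "a"; one ply more and the preference flips. That is the trap the anti-engine strategy is designed to set.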

This doesn't work today, because human GMs have fed the databases with
every line/variation up their sleeves (from hundreds of years of recorded
games) and consequently we feed the software with every refutation. Once a
refutation is implemented, i

Re: Douglas Hofstadter Article

2013-10-24 Thread LizR
I want a computer that can play poker. And Bridge. And Go.


On 25 October 2013 12:11, Platonist Guitar Cowboy wrote:

> [quoted message from Platonist Guitar Cowboy trimmed]

Re: Douglas Hofstadter Article

2013-10-24 Thread Stathis Papaioannou
On 25 October 2013 03:39, Craig Weinberg  wrote:
> http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
>
> The Man Who Would Teach Machines to Think
>
> [Atlantic article excerpt trimmed]
>
> This is precisely my argument against John Clark's position.
>
> Another quote I will be stealing:
>
> "Airplanes don’t flap their wings; why should computers think?"

You could say that human chess players just take in visual data,
process it in a series of biological relays, then send electrical
signals to muscles that move the pieces around. This is what an alien
scientist would observe. That's not thinking! That's not
understanding!


-- 
Stathis Papaioannou



Re: Douglas Hofstadter Article

2013-10-24 Thread LizR
On 25 October 2013 12:16, Stathis Papaioannou  wrote:

>
> You could say that human chess players just take in visual data,
> process it in a series of biological relays, then send electrical
> signals to muscles that move the pieces around. This is what an alien
> scientist would observe. That's not thinking! That's not
> understanding!

I like the use of "just"!

(I'm sure a Chinese room the size of the galaxy could replicate their
behaviour...)

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Douglas Hofstadter Article

2013-10-24 Thread Craig Weinberg


On Thursday, October 24, 2013 7:16:55 PM UTC-4, stathisp wrote:
>
> On 25 October 2013 03:39, Craig Weinberg wrote:
> >
> > http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
> >
> > The Man Who Would Teach Machines to Think
> >
> > [Atlantic article excerpt trimmed]
> > 
> > This is precisely my argument against John Clark's position. 
> > 
> > Another quote I will be stealing: 
> > 
> > "Airplanes don’t flap their wings; why should computers think?" 
>
> You could say that human chess players just take in visual data, 
> process it in a series of biological relays, then send electrical 
> signals to muscles that move the pieces around. This is what an alien 
> scientist would observe. That's not thinking! That's not 
> understanding! 
>

Right, but since we understand that such an alien observation would be in 
error, we must give our own experience the benefit of the doubt. The 
computer does not deserve any such benefit of the doubt, since there is no 
question that it has been assembled intentionally from controllable parts. 
When we see a ventriloquist with a dummy, we do not entertain seriously 
that we could be mistaken about which one is really the ventriloquist, or 
whether they are equivalent to each other. 

Looking at natural presences, like atoms or galaxies, the scope of their 
persistence is well beyond any human relation so they do deserve the 
benefit of the doubt. We have no reason to believe that they were assembled 
by anything other than themselves. The fact that we are made of atoms and 
atoms are made from stars is another point in their favor, whereas no 
living organism that we have encountered is made of inorganic atoms, or of 
pure mathematics, or can survive by consuming only inorganic atoms or 
mathematics.

Craig


>
> -- 
> Stathis Papaioannou 
>



Re: Douglas Hofstadter Article

2013-10-24 Thread LizR
On 25 October 2013 14:31, Craig Weinberg  wrote:

>
> Looking at natural presences, like atoms or galaxies, the scope of their
> persistence is well beyond any human relation so they do deserve the
> benefit of the doubt. We have no reason to believe that they were assembled
> by anything other than themselves. The fact that we are made of atoms and
> atoms are made from stars is another point in their favor, whereas no
> living organism that we have encountered is made of inorganic atoms, or of
> pure mathematics, or can survive by consuming only inorganic atoms or
> mathematics.
>

What are inorganic atoms? Or rather (since I suspect all atoms are
inorganic), what are organic atoms?



RE: Douglas Hofstadter Article

2013-10-24 Thread chris peck
yep. organity is emergent.

Date: Fri, 25 Oct 2013 14:46:54 +1300
Subject: Re: Douglas Hofstadter Article
From: lizj...@gmail.com
To: everything-list@googlegroups.com

On 25 October 2013 14:31, Craig Weinberg  wrote:


[quoted paragraph from Craig Weinberg trimmed]


What are inorganic atoms? Or rather (since I suspect all atoms are inorganic), 
what are organic atoms?









Re: Douglas Hofstadter Article

2013-10-24 Thread Stathis Papaioannou
On 25 October 2013 12:31, Craig Weinberg  wrote:

>> You could say that human chess players just take in visual data,
>> process it in a series of biological relays, then send electrical
>> signals to muscles that move the pieces around. This is what an alien
>> scientist would observe. That's not thinking! That's not
>> understanding!
>
>
> Right, but since we understand that such an alien observation would be in
> error, we must give our own experience the benefit of the doubt.

The alien might be completely confident in his judgement, having a
brain made of exotic matter. He would argue that however complex its
behaviour, a being made of ordinary matter that evolved naturally
could not possibly have an understanding of what it is doing.

> The
> computer does not deserve any such benefit of the doubt, since there is no
> question that it has been assembled intentionally from controllable parts.
> When we see a ventriloquist with a dummy, we do not entertain seriously that
> we could be mistaken about which one is really the ventriloquist, or whether
> they are equivalent to each other.

But if the dummy is autonomous and apparently just as smart as the
ventriloquist many of us would reconsider.

> Looking at natural presences, like atoms or galaxies, the scope of their
> persistence is well beyond any human relation so they do deserve the benefit
> of the doubt. We have no reason to believe that they were assembled by
> anything other than themselves. The fact that we are made of atoms and atoms
> are made from stars is another point in their favor, whereas no living
> organism that we have encountered is made of inorganic atoms, or of pure
> mathematics, or can survive by consuming only inorganic atoms or
> mathematics.

There is no logical reason why something that is inorganic, or did not
arise spontaneously, or eats inorganic matter, cannot be conscious. It's
just something you have made up.


-- 
Stathis Papaioannou



RE: Douglas Hofstadter Article

2013-10-24 Thread chris peck
>> The alien might be completely confident in his judgement, having a
brain made of exotic matter. He would argue that however complex its
behaviour, a being made of ordinary matter that evolved naturally
could not possibly have an understanding of what it is doing.

Aliens don't matter. They can be wrong about us being thoughtless and we can be 
right that computers are thoughtless.

There seem to be two points of view here:

1) Whether a machine is thinking is determined by the goals it achieves 
(beating people at chess, translating Bulgarian)

2) Whether a machine is thinking is determined by how it tries to achieve a 
goal. How does it cognate?

I find myself rooting for the second point of view. A machine wouldn't need to 
beat Kasparov to convince me it was thinking, but it would have to make 
mistakes and successes in the same way that I would against Kasparov. 

In developmental psychology there is the question of how children learn 
grammar. I forget the details, but some bunch of geeks at a brainy university 
had developed a neural-net system that, given enough input and training, began 
to apply grammatical rules correctly. What was really interesting, though, was 
that despite arriving at a similar competence to a young child, the journey 
there was very different. The system outperformed children (on average) and, 
crucially, didn't make the same kind of mistakes that are ubiquitous as 
children learn grammar. The ubiquity is important because it shows that in 
children the same inherent system is at play; the absence of those mistakes in 
the computer is important because it shows that these systems are different. 

At this juncture then it becomes moot whether the computer is learning or 
thinking about grammar. It is a matter of philosophical taste. It certainly 
isn't learning or thinking as we learnt or thought when learning grammar. The 
way we cognate is the only example we have of cognition that we know is 
genuine. Do AI systems do that? The answer is obviously : No they don't. Are 
computers brainy in the way we are? No they are not. You can broaden the 
definition of thought and braininess to encompass it if you like, but that is 
just philosophical bias. They do not do what we do.
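The overgeneralisation pattern is easy to caricature in code. A hypothetical sketch (this is not the neural-net system described above, just an illustration of the kind of mistake children make and the system reportedly didn't):

```python
# Irregular plurals an adult speaker has memorized (tiny illustrative sample).
IRREGULAR = {"sheep": "sheep", "deer": "deer", "child": "children"}

def child_like_plural(noun: str) -> str:
    # A child mid-learning often overgeneralises the regular "-s" rule,
    # producing "sheeps" and "deers".
    return noun + "s"

def adult_plural(noun: str) -> str:
    # Adult competence: check the irregular lexicon first, then fall back
    # to the regular rule.
    return IRREGULAR.get(noun, noun + "s")

for noun in ["cat", "dog", "hamster", "sheep", "deer"]:
    print(f"{noun}: child says {child_like_plural(noun)!r}, "
          f"adult says {adult_plural(noun)!r}")
```

The argument in the post is precisely that two learners can converge on the adult function while passing through very different intermediate functions.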

Regards

> From: stath...@gmail.com
> Date: Fri, 25 Oct 2013 13:11:47 +1100
> Subject: Re: Douglas Hofstadter Article
> To: everything-list@googlegroups.com
> 
> On 25 October 2013 12:31, Craig Weinberg  wrote:
> 
> >> You could say that human chess players just take in visual data,
> >> process it in a series of biological relays, then send electrical
> >> signals to muscles that move the pieces around. This is what an alien
> >> scientist would observe. That's not thinking! That's not
> >> understanding!
> >
> >
> > Right, but since we understand that such an alien observation would be in
> > error, we must give our own experience the benefit of the doubt.
> 
> The alien might be completely confident in his judgement, having a
> brain made of exotic matter. He would argue that however complex its
> behaviour, a being made of ordinary matter that evolved naturally
> could not possibly have an understanding of what it is doing.
> 
> > The
> > computer does not deserve any such benefit of the doubt, since there is no
> > question that it has been assembled intentionally from controllable parts.
> > When we see a ventriloquist with a dummy, we do not entertain seriously that
> > we could be mistaken about which one is really the ventriloquist, or whether
> > they are equivalent to each other.
> 
> But if the dummy is autonomous and apparently just as smart as the
> ventriloquist many of us would reconsider.
> 
> > Looking at natural presences, like atoms or galaxies, the scope of their
> > persistence is well beyond any human relation so they do deserve the benefit
> > of the doubt. We have no reason to believe that they were assembled by
> > anything other than themselves. The fact that we are made of atoms and atoms
> > are made from stars is another point in their favor, whereas no living
> > organism that we have encountered is made of inorganic atoms, or of pure
> > mathematics, or can survive by consuming only inorganic atoms or
> > mathematics.
> 
> There is no logical reason why something that is inorganic, did not
> arise spontaneously, or eats inorganic matter cannot be conscious. It's
> just something you have made up.
> 
> 
> -- 
> Stathis Papaioannou
> 

Re: Douglas Hofstadter Article

2013-10-24 Thread meekerdb

On 10/24/2013 8:09 PM, chris peck wrote:
At this juncture then it becomes moot whether the computer is learning or thinking about 
grammar. It is a matter of philosophical taste. It certainly isn't learning or thinking 
as we learnt or thought when learning grammar. The way we cognate is the only example we 
have of cognition that we know is genuine.


Unfortunately we don't even have that example, because we don't know how we 
think.

Brent



RE: Douglas Hofstadter Article

2013-10-24 Thread chris peck
>> Unfortunately we don't even have that example, because we don't know how we 
>> think.

We know that a certain set of mistakes is ubiquitous when learning grammar 
(overgeneralising, for example): cats, dogs, hamsters ... sheeps, deers, etc.

And we know the computer system didn't make these mistakes.

That's all we need to know to say that the two systems are not the same. All we 
need to know to say the computer was not doing what children do.

Date: Thu, 24 Oct 2013 20:35:05 -0700
From: meeke...@verizon.net
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

On 10/24/2013 8:09 PM, chris peck wrote:

> At this juncture then it becomes moot whether the computer is learning or
> thinking about grammar. It is a matter of philosophical taste. It certainly
> isn't learning or thinking as we learnt or thought when learning grammar.
> The way we cognate is the only example we have of cognition that we know is
> genuine.

Unfortunately we don't even have that example, because we don't know how we
think.

Brent



Re: Douglas Hofstadter Article

2013-10-24 Thread meekerdb

On 10/24/2013 8:41 PM, chris peck wrote:

>> Unfortunately we don't even have that example, because we don't know how we 
>> think.

We know that a certain set of mistakes is ubiquitous when learning grammar 
(overgeneralising, for example): cats, dogs, hamsters ... sheeps, deers, etc.


And we know the computer system didn't make these mistakes.


Whether a computer made those mistakes would obviously depend on its software; 
one could write software that over-generalizes, and in fact neural-network 
classifiers often do.


But you're back to judging internal processes by external behavior.

Brent




That's all we need to know to say that the two systems are not the same. All we 
need to know to say the computer was not doing what children do.


--
Date: Thu, 24 Oct 2013 20:35:05 -0700
From: meeke...@verizon.net
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

On 10/24/2013 8:09 PM, chris peck wrote:

At this juncture then it becomes moot whether the computer is learning or 
thinking
about grammar. It is a matter of philosophical taste. It certainly isn't 
learning or
thinking as we learnt or thought when learning grammar. The way we cognate 
is the
only example we have of cognition that we know is genuine.


Unfortunately we don't even have that example, because we don't know how we 
think.

Brent




RE: Douglas Hofstadter Article

2013-10-24 Thread chris peck
>> But you're back to judging internal processes by external behavior.

I have nothing against doing that. It's exactly what I in fact did.

Where there are no behavioral differences from which we can identify internal 
differences, we would not know whether they were cognitively different or the 
same.  Maybe they are, maybe they are not. And that certainly leads to the 
problem of other minds, say between children learning grammar.

But where we can do that, say between this grammar system and children, or Deep 
Blue and Kasparov, it follows that they are definitely not cognitively similar 
regardless of how they perform, because we can discern internal differences 
from external behavior.

We can only say Deep Blue is thinking if we broaden the definition of thinking. 
Well, I can show that I'm gorgeous if I broaden the definition of gorgeous. We 
don't learn anything about thought by changing its definition.

Date: Thu, 24 Oct 2013 20:52:39 -0700
From: meeke...@verizon.net
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

On 10/24/2013 8:41 PM, chris peck wrote:

> >> Unfortunately we don't even have that example, because we don't know how
> >> we think.
>
> We know that a certain set of mistakes is ubiquitous when learning grammar
> (overgeneralising, for example): cats, dogs, hamsters ... sheeps, deers, etc.
>
> And we know the computer system didn't make these mistakes.

Whether a computer made those mistakes would obviously depend on its software;
one could write software that over-generalizes, and in fact neural-network
classifiers often do.

But you're back to judging internal processes by external behavior.

Brent

> That's all we need to know to say that the two systems are not the same.
> All we need to know to say the computer was not doing what children do.



Re: Douglas Hofstadter Article

2013-10-25 Thread Telmo Menezes
On Thu, Oct 24, 2013 at 11:05 PM, meekerdb  wrote:
> On 10/24/2013 12:08 PM, John Mikes wrote:
>
> Craig and Telmo:
> Is "anticipation" involved at all? Deep Blue anticipated hundreds of steps
> in advance (and evaluated a potential outcome before accepting, or
> rejecting).
> What else is in "thinking" involved? I would like to know, because I have no
> idea.
> John Mikes
>
>
> Learning from experience.  Actually I think Deep Blue could do some learning
> by analyzing games and adjusting the values it gave to positions.  But one
> reason it seems so unintelligent is that its scope of perception is very
> narrow (i.e. chess games) and so it can't learn some things a human player
> can.  For example Deep Blue couldn't see Kasparov look nervous, ask for
> changes in the lighting, hesitate slightly before moving a piece,...

Bret,

Even in the narrow domain of chess this sort of limitation still
applies. Part of it comes from the "divide and conquer" approach
followed by conventional engineering. Let's consider a simplification
of what the Deep Blue architecture looks like:

- Pieces have some values; this is probably sophisticated, and the
values can be influenced by overall board structure;
- Some function can evaluate the utility of a board configuration;
- A search tree is used to explore the space of possible plays,
counter-plays, counter-counter-plays and so on;
- The previous tree can be pruned using some heuristics, but it's
still gigantic;
- The more computational power you have, the deeper you can go in the
search tree;
- There is an enormous database of openings and endings that the
algorithm can fall back on if it is early or late enough in the game.

Defeating a grand master was mostly achieved by increasing the
computational power available to this algorithm.
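The search-tree core of that architecture is depth-limited minimax. A minimal sketch (plain minimax only; Deep Blue layered alpha-beta pruning, custom evaluation hardware, and the opening/endgame books on top of this skeleton):

```python
import math

def minimax(board, depth, maximizing, evaluate, legal_moves, apply_move):
    """Depth-limited minimax: consider each legal move, the opponent's
    responses, the responses to those responses, and so on, scoring leaf
    positions with the evaluation function."""
    moves = legal_moves(board)
    if depth == 0 or not moves:
        return evaluate(board), None
    best_score, best_move = (-math.inf if maximizing else math.inf), None
    for move in moves:
        score, _ = minimax(apply_move(board, move), depth - 1,
                           not maximizing, evaluate, legal_moves, apply_move)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy demo on a trivial "number game": players alternately add +1 or -1,
# and the maximizer wants the final number to be large.
score, move = minimax(
    0, 2, True,
    evaluate=lambda b: b,
    legal_moves=lambda b: [1, -1],
    apply_move=lambda b, m: b + m,
)
print("best score:", score, "best first move:", move)
```

Nothing in this loop resembles insight; deepening the tree and speeding up `evaluate` is exactly the "more computational power" lever the post describes.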

Now take the game of go: human beings can still easily beat machines,
even the most powerful computer currently available. Go is much more
combinatorially explosive than chess, so it breaks the search tree
approach. This is strong empirical evidence that Deep Blue
accomplished nothing in the field of AI -- it did accomplish
something remarkable in the field of computer engineering, or maybe
even computer science, but it completely side-stepped the
"intelligence" part. It cheated, in a sense.

How do humans play games? I suspect the same way we navigate cities
and manage businesses: we map the problem to a better internal
representation. This representation is both less combinatorially
explosive and more expressive.

My home town is relatively small, population is about 150K. If we were
all teleported to Coimbra and I was to give you guys a tour, I could
drive from any place to any place without thinking twice. I couldn't
draw an accurate map of the city if my life depended on it. I go to
google maps and I'm still surprised to find out how the city is
objectively organised.

If Kasparov were to try to explain to us how he plays chess, something
similar would happen. But most AI research has been ignoring all this
and insisting on reasoning based on objective, third-person
representations.

My intuition is that we don't spend a lot of time exploring search
trees, we spend most of our time perfecting the external/internal
representation mappings. "I thought he was a nice guy but now I'm not
so sure" and so on...

Cheers,
Telmo.

> Brent
>


Re: Douglas Hofstadter Article

2013-10-25 Thread Telmo Menezes
On Fri, Oct 25, 2013 at 12:08 PM, Telmo Menezes  wrote:
> On Thu, Oct 24, 2013 at 11:05 PM, meekerdb  wrote:
>> On 10/24/2013 12:08 PM, John Mikes wrote:
>>
>> Craig and Telmo:
>> Is "anticipation" involved at all? Deep Blue anticipated hundreds of steps
>> in advance (and evaluated a potential outcome before accepting, or
>> rejecting).
>> What else is in "thinking" involved? I would like to know, because I have no
>> idea.
>> John Mikes
>>
>>
>> Learning from experience.  Actually I think Deep Blue could do some learning
>> by analyzing games and adjusting the values it gave to positions.  But one
>> reason it seems so unintelligent is that its scope of perception is very
>> narrow (i.e. chess games) and so it can't learn some things a human player
>> can.  For example Deep Blue couldn't see Kasparov look nervous, ask for
>> changes in the lighting, hesitate slightly before moving a piece,...
>
> Bret,

Sorry I misspelled your name! A quick google search shows me that it's
not something offensive, just another name. Uff... :)



Re: Douglas Hofstadter Article

2013-10-25 Thread Stephen Lin
So this remembering nowhow about science till win every battle, but
religion wan the way before it even began. Wold you agree MATT DAMON? DON"T
BLOW THE MEET WITH MATSUI) :)



Re: Douglas Hofstadter Article

2013-10-25 Thread Telmo Menezes
On Fri, Oct 25, 2013 at 1:11 AM, Platonist Guitar Cowboy
 wrote:
>
>
>
> On Thu, Oct 24, 2013 at 11:29 PM, Telmo Menezes 
> wrote:
>>
>> On Thu, Oct 24, 2013 at 7:02 PM, Craig Weinberg 
>> wrote:
>> >
>> >
>> > On Thursday, October 24, 2013 12:43:49 PM UTC-4, telmo_menezes wrote:
>> >>
>> >> On Thu, Oct 24, 2013 at 6:39 PM, Craig Weinberg 
>> >> wrote:
>> >> >
>> >> >
>> >> > http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
>> >> >
>> >> > The Man Who Would Teach Machines to Think
>> >> >
>> >> > "...Take Deep Blue, the IBM supercomputer that bested the chess
>> >> > grandmaster
>> >> > Garry Kasparov. Deep Blue won by brute force. For each legal move it
>> >> > could
>> >> > make at a given point in the game, it would consider its opponent’s
>> >> > responses, its own responses to those responses, and so on for six or
>> >> > more
>> >> > steps down the line. With a fast evaluation function, it would
>> >> > calculate
>> >> > a
>> >> > score for each possible position, and then make the move that led to
>> >> > the
>> >> > best score. What allowed Deep Blue to beat the world’s best humans
>> >> > was
>> >> > raw
>> >> > computational power. It could evaluate up to 330 million positions a
>> >> > second,
>> >> > while Kasparov could evaluate only a few dozen before having to make
>> >> > a
>> >> > decision.
>> >> >
>> >> > Hofstadter wanted to ask: Why conquer a task if there’s no insight to
>> >> > be
>> >> > had
>> >> > from the victory? “Okay,” he says, “Deep Blue plays very good
>> >> > chess—so
>> >> > what?
>> >> > Does that tell you something about how we play chess? No. Does it
>> >> > tell
>> >> > you
>> >> > about how Kasparov envisions, understands a chessboard?” A brand of
>> >> > AI
>> >> > that
>> >> > didn’t try to answer such questions—however impressive it might have
>> >> > been—was, in Hofstadter’s mind, a diversion. He distanced himself
>> >> > from
>> >> > the
>> >> > field almost as soon as he became a part of it. “To me, as a
>> >> > fledgling
>> >> > AI
>> >> > person,” he says, “it was self-evident that I did not want to get
>> >> > involved
>> >> > in that trickery. It was obvious: I don’t want to be involved in
>> >> > passing
>> >> > off
>> >> > some fancy program’s behavior for intelligence when I know that it
>> >> > has
>> >> > nothing to do with intelligence. And I don’t know why more people
>> >> > aren’t
>> >> > that way...”
>> >>
>> >> I was just reading this too. I agree.
>> >>
>> >> > This is precisely my argument against John Clark's position.
>> >> >
>> >> > Another quote I will be stealing:
>> >> >
>> >> > "Airplanes don’t flap their wings; why should computers think?"
>> >>
>> >> I think the intended meaning is closer to: "airplanes don't fly by
>> >> flapping their wings, why should computers be intelligent by
>> >> thinking?".
>> >
>> >
>> > It depends whether you want 'thinking' to imply awareness or not.
>>
>> Ok. I don't think we can know that in any case.
>>
>> > I think
>> > the point is that we should not assume that computation is in any way
>> > 'thinking' (or intelligence for that matter). I think that 'thinking' is
>> > not
>> > passive enough to describe computation. It is to say that a net is
>> > 'fishing'. Computation is many nets within nets, devoid of intention or
>> > perspective. It does the opposite of thinking, it is a method for
>> > petrifying
>> > the measurable residue or reflection of thought.
>>
>> Ok but let's take a human grand master playing chess. You don't think
>> a computer can play like him?
>>
>
> This relates to what you said earlier which I agree with:
>
> They are more intelligent in
> the sense that they can play competitively while only considering a
> small fraction of the scenarios. How do we do this? There is almost no
> real AI research nowadays because people gave up on answering this
> question.
>
> The answer lies somewhere in building branch histories and databases that
> are for now only partial. The computer cannot beat humans without databases
> for openings, middle, and endgame. I believe this is what freaked out
> Kasparov in the questionable game and what gives his suspicion of human
> intervention in the code, which IBM never ruled out or proved negatively
> between games, some substance. Kasparov lost because IBM eventually accrued
> enough understanding of Kasparov's database (dozens of years of notes and
> logs that make up his holy grail secret) to not let it fall for Kasparov's
> gambit.
>
> Kasparov's and any GM's algorithm for beating chess engines often runs along
> the lines of:
>
> Keep position closed via Botvinnik type openings and middlegame so the
> computer will have to contend with billions of possible move continuations
> instead of a few dozen million. Then implement precise, but highly complex,
> long term strategy that offers both positional and material gambit for
> twenty or so moves which is designed to flip at exactly the point of the

Re: Douglas Hofstadter Article

2013-10-25 Thread Craig Weinberg


On Thursday, October 24, 2013 9:46:54 PM UTC-4, Liz R wrote:
>
> On 25 October 2013 14:31, Craig Weinberg 
> > wrote:
>
>>
>> Looking at natural presences, like atoms or galaxies, the scope of their 
>> persistence is well beyond any human relation so they do deserve the 
>> benefit of the doubt. We have no reason to believe that they were assembled 
>> by anything other than themselves. The fact that we are made of atoms and 
>> atoms are made from stars is another point in their favor, whereas no 
>> living organism that we have encountered is made of inorganic atoms, or of 
>> pure mathematics, or can survive by consuming only inorganic atoms or 
>> mathematics.
>>
>
> What are inorganic atoms? Or rather (since I suspect all atoms are 
> inorganic), what are organic atoms?
>
>
You have a point - really it would make more sense to talk about organic 
molecules. However, since organic molecules are built mainly from carbon, 
hydrogen, oxygen, and nitrogen, there's nothing wrong with thinking of those 
as the organic atoms. I say atoms instead of molecules not to make it easy, 
but because my view opens up the possibility of top-down causality. The way 
that MSR treats top-down causality, it locally looks like retrocausality. 

For example, if the era of life takes billions of years to begin, its own 
beginning serves as an attractor that casts a shadow on the previous 
inorganic era, because from the perceptual inertial frame of biology, the 
inorganic era is a preparation. This sentence, for example, begins with 
T-h-i-s. Without understanding the retrocausality of the sentence, those 
letters have no order, so they could be h-i-t-s, s-h-i-t, i-h-s-t, etc. The 
approach of cosmology now assumes that mechanistic time is primitive, so 
that there must be just a lot of random letter combinations that wind up 
being 'T-h-i-s' on occasion. If instead, we assume sense as primordial, 
then the entire "This sentence, for example, begins with..." sentence 
begins as a single idea that is expressed from the top down, to a digital 
sequence-function stepping along 'time', and a set of letter form-positions 
spread across 'space'.

This is where I get the notion of personal awareness of the 'now' being 
nested within a super-personal experience of larger and larger nows, while 
itself hosting a simultaneity of smaller and smaller nows.

So yes, on the level of atoms, were there no possibility of biology in the 
universe, atoms would be 'inorganic', but since the story of biology begins 
with some long 'words' made out of C, H, O, and N, then those would be 
the organic atoms.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Douglas Hofstadter Article

2013-10-25 Thread Craig Weinberg
I'm saying that is it emergent from the inorganic perspective, but 
divergent from the post-organic perspective. (see my explanation w/ Liz)

On Thursday, October 24, 2013 9:59:45 PM UTC-4, chris peck wrote:
>
> yep. organity is emergent.
>
> --
> Date: Fri, 25 Oct 2013 14:46:54 +1300
> Subject: Re: Douglas Hofstadter Article
> From: liz...@gmail.com 
> To: everyth...@googlegroups.com 
>
> On 25 October 2013 14:31, Craig Weinberg wrote:
>
>
> Looking at natural presences, like atoms or galaxies, the scope of their 
> persistence is well beyond any human relation so they do deserve the 
> benefit of the doubt. We have no reason to believe that they were assembled 
> by anything other than themselves. The fact that we are made of atoms and 
> atoms are made from stars is another point in their favor, whereas no 
> living organism that we have encountered is made of inorganic atoms, or of 
> pure mathematics, or can survive by consuming only inorganic atoms or 
> mathematics.
>
>
> What are inorganic atoms? Or rather (since I suspect all atoms are 
> inorganic), what are organic atoms?
>
>



Re: Douglas Hofstadter Article

2013-10-25 Thread Craig Weinberg


On Thursday, October 24, 2013 10:11:47 PM UTC-4, stathisp wrote:
>
> On 25 October 2013 12:31, Craig Weinberg wrote:
>
> >> You could say that human chess players just take in visual data, 
> >> process it in a series of biological relays, then send electrical 
> >> signals to muscles that move the pieces around. This is what an alien 
> >> scientist would observe. That's not thinking! That's not 
> >> understanding! 
> > 
> > 
> > Right, but since we understand that such an alien observation would be 
> in 
> > error, we must give our own experience the benefit of the doubt. 
>
> The alien might be completely confident in his judgement, having a 
> brain made of exotic matter. He would argue that however complex its 
> behaviour, a being made of ordinary matter that evolved naturally 
> could not possibly have an understanding of what it is doing. 
>

Of course, but to make the comparison equivalent, the alien would have to 
live on a planet of organic ice that hosts countless exotic inorganic 
species. He would have to make machines out of low level organic matter. 
Would he have a poetry that is made of math and a math that was made of art?
 

> > The 
> > computer does not deserve any such benefit of the doubt, since there is 
> no 
> > question that it has been assembled intentionally from controllable 
> parts. 
> > When we see a ventriloquist with a dummy, we do not entertain seriously 
> that 
> > we could be mistaken about which one is really the ventriloquist, or 
> whether 
> > they are equivalent to each other. 
>
> But if the dummy is autonomous and apparently just as smart as the 
> ventriloquist many of us would reconsider. 
>

It's easy enough to make the dummy appear autonomous. If the dummy had a 
simple memory store that recorded its movements, the ventriloquist could 
put servos in the dummy and replay the recording so that he could recreate 
the show from across the room. Would that make the dummy suddenly smarter 
than the ventriloquist, especially since the ventriloquist is following the 
dummy's lead?

 

>
> > Looking at natural presences, like atoms or galaxies, the scope of their 
> > persistence is well beyond any human relation so they do deserve the 
> benefit 
> > of the doubt. We have no reason to believe that they were assembled by 
> > anything other than themselves. The fact that we are made of atoms and 
> atoms 
> > are made from stars is another point in their favor, whereas no living 
> > organism that we have encountered is made of inorganic atoms, or of pure 
> > mathematics, or can survive by consuming only inorganic atoms or 
> > mathematics. 
>
> There is no logical reason why something that is inorganic or did not 
> arise spontaneously or eats inorganic matter cannot be conscious. It's 
> just something you have made up. 
>

It has nothing to do with logic; it has to do with history. The universe 
made it up, I didn't. The fact is that no organism can live without 
consuming organic matter. Until we find a species that needs no water, the 
idea that there could possibly be such a species remains a hypothesis, just 
like the idea that Shakespeare could have been just as great had he been a 
plumber instead.

Craig
 

>
>
> -- 
> Stathis Papaioannou 
>



Re: Douglas Hofstadter Article

2013-10-25 Thread Craig Weinberg


On Thursday, October 24, 2013 11:09:40 PM UTC-4, chris peck wrote:
>
> *>> The alien might be completely confident in his judgement, having a
> brain made of exotic matter. He would argue that however complex its
> behaviour, a being made of ordinary matter that evolved naturally
> could not possibly have an understanding of what it is doing.*
>
> Aliens don't matter. They can be wrong about us being thoughtless and we 
> can be right that computers are thoughtless.
>
> There seem to be two points of view here:
>
> 1) Whether a machine is thinking is determined by the goals it achieves 
> (beating people at chess, translating Bulgarian)
>
> 2) Whether a machine is thinking is determined by how it tries to achieve a 
> goal. How does it cognate?
>

My view is 

3) Whether a machine is thinking is determined by the extent to which it 
understands and cares about the content of its thought.

As long as we assume that who and the why of consciousness can be reduced 
to the what and how of logic, we have no chance of understanding it. We 
cannot learn about what makes the Taj Mahal special by studying masonry.
 

>
> I find myself rooting for the second point of view. A machine wouldn't 
> need to beat Kasparov to convince me it was thinking, but it would have to 
> make mistakes and successes in the same way that I would against Kasparov. 
>
> In developmental psychology there is the question of how children learn 
> grammar. I forget the details; but some bunch of geeks at a brainy 
> university had developed a neural net system that given enough input and 
> training began to apply grammatical rules correctly. What was really 
> interesting though was that despite arriving at a similar competence to a 
> young child, the journey there was very different. The system outperformed 
> children (on average) and crucially didn't make the same kind of mistakes 
> that are ubiquitous as children learn grammar. The ubiquity is important 
> because it shows that in children the same inherent system is at play; the 
> absence of mistakes between computer and child is important because it 
> shows that these systems are different. 
>
> At this juncture then it becomes moot whether the computer is learning or 
> thinking about grammar. It is a matter of philosophical taste. It certainly 
> isn't learning or thinking as we learnt or thought when learning grammar. 
> The way we cognate is the only example we have of cognition that we know is 
> genuine. Do AI systems do that? The answer is obviously : No they don't. 
> Are computers brainy in the way we are? No they are not. You can broaden 
> the definition of thought and braininess to encompass it if you like, but 
> that is just philosophical bias. They do not do what we do.
>

I agree, but to me the interesting part is *why* AI systems are different 
than we are. It's not so much about passing a test by sprinkling human-like 
errors into a computer to rough it up around the edges, it's about seeing 
that the entire cosmos is fundamentally based on absolute improbability and 
that logical truth is actually derived from that. From the local 
perspective, absolute improbability looks like error or probabilistic 
coincidence, but that is because our expectation is cognitive rather than 
emotional or intuitive, and therefore it is specialized for virtual 
isolation and alienation from the Absolute.

Thanks,
Craig


> Regards
>
> > From: stat...@gmail.com 
> > Date: Fri, 25 Oct 2013 13:11:47 +1100
> > Subject: Re: Douglas Hofstadter Article
> > To: everyth...@googlegroups.com 
> > 
> > On 25 October 2013 12:31, Craig Weinberg wrote:
> > 
> > >> You could say that human chess players just take in visual data,
> > >> process it in a series of biological relays, then send electrical
> > >> signals to muscles that move the pieces around. This is what an alien
> > >> scientist would observe. That's not thinking! That's not
> > >> understanding!
> > >
> > >
> > > Right, but since we understand that such an alien observation would be 
> in
> > > error, we must give our own experience the benefit of the doubt.
> > 
> > The alien might be completely confident in his judgement, having a
> > brain made of exotic matter. He would argue that however complex its
> > behaviour, a being made of ordinary matter that evolved naturally
> > could not possibly have an understanding of what it is doing.
> > 
> > > The
> > > computer does not deserve any such benefit of the doubt, since there 
> is no
> > > question that it has been assembled intentionally from controllable 
> parts.
> > &

Re: Douglas Hofstadter Article

2013-10-25 Thread Platonist Guitar Cowboy
On Fri, Oct 25, 2013 at 12:24 PM, Telmo Menezes wrote:

> On Fri, Oct 25, 2013 at 1:11 AM, Platonist Guitar Cowboy
>  wrote:
> >
> >
> >
> > On Thu, Oct 24, 2013 at 11:29 PM, Telmo Menezes 
> > wrote:
> >>
> >> On Thu, Oct 24, 2013 at 7:02 PM, Craig Weinberg 
> >> wrote:
> >> >
> >> >
> >> > On Thursday, October 24, 2013 12:43:49 PM UTC-4, telmo_menezes wrote:
> >> >>
> >> >> On Thu, Oct 24, 2013 at 6:39 PM, Craig Weinberg 
> >> >> wrote:
> >> >> >
> >> >> >
> >> >> >
> http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
> >> >> >
> >> >> > The Man Who Would Teach Machines to Think
> >> >> >
> >> >> > "...Take Deep Blue, the IBM supercomputer that bested the chess
> >> >> > grandmaster
> >> >> > Garry Kasparov. Deep Blue won by brute force. For each legal move
> it
> >> >> > could
> >> >> > make at a given point in the game, it would consider its opponent’s
> >> >> > responses, its own responses to those responses, and so on for six
> or
> >> >> > more
> >> >> > steps down the line. With a fast evaluation function, it would
> >> >> > calculate
> >> >> > a
> >> >> > score for each possible position, and then make the move that led
> to
> >> >> > the
> >> >> > best score. What allowed Deep Blue to beat the world’s best humans
> >> >> > was
> >> >> > raw
> >> >> > computational power. It could evaluate up to 330 million positions
> a
> >> >> > second,
> >> >> > while Kasparov could evaluate only a few dozen before having to
> make
> >> >> > a
> >> >> > decision.
> >> >> >
> >> >> > Hofstadter wanted to ask: Why conquer a task if there’s no insight
> to
> >> >> > be
> >> >> > had
> >> >> > from the victory? “Okay,” he says, “Deep Blue plays very good
> >> >> > chess—so
> >> >> > what?
> >> >> > Does that tell you something about how we play chess? No. Does it
> >> >> > tell
> >> >> > you
> >> >> > about how Kasparov envisions, understands a chessboard?” A brand of
> >> >> > AI
> >> >> > that
> >> >> > didn’t try to answer such questions—however impressive it might
> have
> >> >> > been—was, in Hofstadter’s mind, a diversion. He distanced himself
> >> >> > from
> >> >> > the
> >> >> > field almost as soon as he became a part of it. “To me, as a
> >> >> > fledgling
> >> >> > AI
> >> >> > person,” he says, “it was self-evident that I did not want to get
> >> >> > involved
> >> >> > in that trickery. It was obvious: I don’t want to be involved in
> >> >> > passing
> >> >> > off
> >> >> > some fancy program’s behavior for intelligence when I know that it
> >> >> > has
> >> >> > nothing to do with intelligence. And I don’t know why more people
> >> >> > aren’t
> >> >> > that way...”
> >> >>
> >> >> I was just reading this too. I agree.
> >> >>
> >> >> > This is precisely my argument against John Clark's position.
> >> >> >
> >> >> > Another quote I will be stealing:
> >> >> >
> >> >> > "Airplanes don’t flap their wings; why should computers think?"
> >> >>
> >> >> I think the intended meaning is closer to: "airplanes don't fly by
> >> >> flapping their wings, why should computers be intelligent by
> >> >> thinking?".
> >> >
> >> >
> >> > It depends whether you want 'thinking' to imply awareness or not.
> >>
> >> Ok. I don't think we can know that in any case.
> >>
> >> > I think
> >> > the point is that we should not assume that computation is in any way
> >> > 'thinking' (or intelligence for that matter). I think that 'thinking'
> is
> >> > not
> >> > passive enough to describe computation. It is to say that a net is
> >> > 'fishing'. Computation is many nets within nets, devoid of intention
> or
> >> > perspective. It does the opposite of thinking, it is a method for
> >> > petrifying
> >> > the measurable residue or reflection of thought.
> >>
> >> Ok but let's take a human grand master playing chess. You don't think
> >> a computer can play like him?
> >>
> >
> > This relates to what you said earlier which I agree with:
> >
> > They are more intelligent in
> > the sense that they can play competitively while only considering a
> > small fraction of the scenarios. How do we do this? There is almost no
> > real AI research nowadays because people gave up on answering this
> > question.
> >
> > The answer lies somewhere in building branch histories and databases that
> > are for now only partial. The computer cannot beat humans without
> databases
> > for openings, middle, and endgame. I believe this is what freaked out
> > Kasparov in the questionable game and what gives his suspicion of human
> > intervention in the code, which IBM never ruled out or proved negatively
> > between games, some substance. Kasparov lost because IBM eventually
> accrued
> > enough understanding of Kasparov's database (dozens of years of notes and
> > logs that make up his holy grail secret) to not let it fall for
> Kasparov's
> > gambit.
> >
> > Kasparov's and any GM's algorithm for beating chess engines often runs
> along
> > the lines of:
> >
> > Keep position closed via Botvinnik ty

Re: Douglas Hofstadter Article

2013-10-25 Thread Bruno Marchal


On 25 Oct 2013, at 14:33, Craig Weinberg wrote:




On Thursday, October 24, 2013 11:09:40 PM UTC-4, chris peck wrote:
>> The alien might be completely confident in his judgement, having a
brain made of exotic matter. He would argue that however complex its
behaviour, a being made of ordinary matter that evolved naturally
could not possibly have an understanding of what it is doing.

Aliens don't matter. They can be wrong about us being thoughtless  
and we can be right that computers are thoughtless.


There seem to be two points of view here:

1) Whether a machine is thinking is determined by the goals it  
achieves (beating people at chess, translating Bulgarian)


2) Whether a machine is thinking is determined by how it tries to  
achieve a goal. How does it cognate?


My view is

3) Whether a machine is thinking is determined by the extent to  
which it understands and cares about the content of its thought.


As long as we assume that who and the why of consciousness can be  
reduced to the what and how of logic,


You are right on this, but fail to have grasped the abyss between  
logic and arithmetic.
The fact that you repeat that confusion again and again suggests that  
you really have no idea of that gap.


Logicism has failed. It has been debunked by computer science and  
arithmetic.


It is akin to the confusion between finite automata, and universal  
Turing machine. There is no effective theory capable of delimiting  
what such machines can do, and/or not do.


I suggest that you study a good book in computer science (like Boolos  
and Jeffrey for example). You can continue to develop your study of  
non-comp, under better conditions, and without asserting that comp is  
false, as this weakens your point. Your intuition is of no use. The  
simplest theory of intuition, for machine, already explains why  
machine's intuition will not be on the side of comp. Comp explains its  
own counter-intuitiveness for (correct) machines.  It is close to a  
Gödel sentence: "you can't believe me". IF comp is true, it can't be  
*trivially* true.


Bruno




RE: Douglas Hofstadter Article

2013-10-25 Thread Chris de Morsella
Is this "Stephen Lin" a bot? It certainly sounds machine-generated. Could
also be a methamphetamine-soaked brain in which random neural zombies become
convinced they can touch the voice of god. One of the two.

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Stephen Lin
Sent: Friday, October 25, 2013 3:12 AM
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

 

So this remembering nowhow about science till win every battle, but religion
wan the way before it even began. Wold you agree MATT DAMON? DON"T BLOW THE
MEET WITH MATSUI) :)

 

On Fri, Oct 25, 2013 at 3:10 AM, Telmo Menezes 
wrote:

On Fri, Oct 25, 2013 at 12:08 PM, Telmo Menezes 
wrote:
> On Thu, Oct 24, 2013 at 11:05 PM, meekerdb  wrote:
>> On 10/24/2013 12:08 PM, John Mikes wrote:
>>
>> Craig and Telmo:
>> Is "anticipation" involved at all? Deep Blue anticipated hundreds of
steps
>> in advance (and evaluated a potential outcome before accepting, or
>> rejecting).
>> What else is in "thinking" involved? I would like to know, because I have
no
>> idea.
>> John Mikes
>>
>>
>> Learning from experience.  Actually I think Deep Blue could do some
learning
>> by analyzing games and adjusting the values it gave to positions.  But
one
>> reason it seems so unintelligent is that its scope of perception is very
>> narrow (i.e. chess games) and so it can't learn some things a human
player
>> can.  For example Deep Blue couldn't see Kasparov look nervous, ask for
>> changes in the lighting, hesitate slightly before moving a piece,...
>
> Bret,

Sorry I misspelled your name! A quick google search shows me that it's
not something offensive, just another name. Uff... :)


>
> Even in the narrow domain of chess this sort of limitation still
> applies. Part of it comes from the "divide and conquer" approach
> followed by conventional engineering. Let's consider a simplification
> of what the Deep Blue architecture looks like:
>
> - Pieces have some values; this is probably sophisticated, and the
> values can be influenced by overall board structure;
> - Some function can evaluate the utility of a board configuration;
> - A search tree is used to explore the space of possible plays,
> counter-plays, counter-counter-plays and so on;
> - The previous tree can be pruned using some heuristics, but it's
> still gigantic;
> - The more computational power you have, the deeper you can go in the
> search tree;
> - There is an enormous database of openings and endings that the
> algorithm can fall back to, if early or late enough in the game.
>
> Defeating a grand master was mostly achieved by increasing the
> computational power available to this algorithm.
>
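The architecture quoted above - a static evaluation function driving a pruned search tree - can be sketched in a few lines. This is a generic minimax with alpha-beta pruning over a toy tree of leaf scores; the tree shape and values are invented for illustration and are not Deep Blue's actual code:

```python
from typing import List, Union

# A toy game tree: internal nodes are lists of reachable positions,
# leaves are static evaluation scores (made-up numbers).
Tree = Union[int, List["Tree"]]

def alphabeta(node: Tree, alpha: float, beta: float, maximizing: bool) -> float:
    """Minimax search with alpha-beta pruning.

    `alpha` is the best score the maximizer can already guarantee and
    `beta` the best the minimizer can; branches that cannot change the
    outcome are cut off without being searched.
    """
    if isinstance(node, int):
        return node  # leaf: static evaluation of the position
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off: the minimizer avoids this line
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cut-off: the maximizer avoids this line
    return value

# Two plies: our move, then the opponent's best (minimizing) reply.
tree = [[3, 5], [6, 9], [1, 2]]
best = alphabeta(tree, float("-inf"), float("inf"), True)
print(best)  # 6: the second move guarantees at least 6
```

Scaling this skeleton to grandmaster strength is then "only" a matter of a better evaluation function, the opening/endgame databases, and raw speed - which is exactly the point of the criticism quoted upthread.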
> Now take the game of go: human beings can still easily beat machines,
> even the most powerful computer currently available. Go is much more
> combinatorially explosive than chess, so it breaks the search tree
> approach. This is strong empirical evidence that Deep Blue
> accomplished nothing in the field of AI -- it did accomplish
> something remarkable in the field of computer engineering or maybe
> even computer science, but it completely side-stepped the
> "intelligence" part. It cheated, in a sense.
>
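The combinatorial gap described above can be made concrete. The branching factors below are rough, commonly cited averages (about 35 legal moves per chess position, about 250 per Go position - approximate assumptions, not figures from the post); the 330 million positions/second rate is Deep Blue's, from the article quoted earlier in the thread:

```python
# Rough full-width game-tree sizes, using commonly cited average
# branching factors (approximate assumptions, not exact figures).
CHESS_BRANCHING = 35
GO_BRANCHING = 250
DEEP_BLUE_POSITIONS_PER_SEC = 330_000_000  # from the quoted article

def leaf_positions(branching: int, depth: int) -> int:
    """Leaf positions a full-width search of `depth` plies must evaluate."""
    return branching ** depth

for depth in (4, 6, 8):
    chess = leaf_positions(CHESS_BRANCHING, depth)
    go = leaf_positions(GO_BRANCHING, depth)
    print(f"depth {depth}: chess ~{chess:.1e} "
          f"({chess / DEEP_BLUE_POSITIONS_PER_SEC:.2g} s at Deep Blue speed), "
          f"go ~{go:.1e}")
```

Even before any pruning heuristics, the ratio at a given depth is (250/35)^depth - at six plies Go's tree is already about five orders of magnitude larger, which is why the search-tree approach that worked for chess had stalled at Go as of this 2013 thread.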
> How do humans play games? I suspect the same way we navigate cities
> and manage businesses: we map the problem to a better internal
> representation. This representation is both less combinatorially
> explosive and more expressive.
>
> My home town is relatively small, population is about 150K. If we were
> all teleported to Coimbra and I was to give you guys a tour, I could
> drive from any place to any place without thinking twice. I couldn't
> draw an accurate map of the city if my life depended on it. I go to
> google maps and I'm still surprised to find out how the city is
> objectively organised.
>
> If Kasparov were to try and explain us how he plays chess, something
> similar would happen. But most AI research has been ignoring all this
> and insisting on reasoning based on objective, 3rd person view
> representations.
>
> My intuition is that we don't spend a lot of time exploring search
> trees, we spend most of our time perfecting the external/internal
> representation mappings. "I thought he was a nice guy but now I'm not
> so sure" and so on...
>
> Cheers,
> Telmo.
>
>> Brent
>>

Re: Douglas Hofstadter Article

2013-10-25 Thread Craig Weinberg
 of mistakes 
>> that are ubiquitous as children learn grammar. The ubiquity is important 
>> because it shows that in children the same inherent system is at play; the 
>> absence of mistakes between computer and child is important because it 
>> shows that these systems are different. 
>>
>> At this juncture then it becomes moot whether the computer is learning or 
>> thinking about grammar. It is a matter of philosophical taste. It certainly 
>> isn't learning or thinking as we learnt or thought when learning grammar. 
>> The way we cognate is the only example we have of cognition that we know is 
>> genuine. Do AI systems do that? The answer is obviously : No they don't. 
>> Are computers brainy in the way we are? No they are not. You can broaden 
>> the definition of thought and braininess to encompass it if you like, but 
>> that is just philosophical bias. They do not do what we do.
>>
>
> I agree, but to me the interesting part is *why* AI systems are different 
> than we are. It's not so much about passing a test by sprinkling human-like 
> errors into a computer to rough it up around the edges, it's about seeing 
> that the entire cosmos is fundamentally based on absolute improbability and 
> that logical truth is actually derived from that. From the local 
> perspective, absolute improbability looks like error or probabilistic 
> coincidence, but that is because our expectation is cognitive rather than 
> emotional or intuitive, and therefore it is specialized for virtual 
> isolation and alienation from the Absolute.
>
> Thanks,
> Craig
>
>
>> Regards
>>
>> > From: stat...@gmail.com
>> > Date: Fri, 25 Oct 2013 13:11:47 +1100
>> > Subject: Re: Douglas Hofstadter Article
>> > To: everyth...@googlegroups.com
>> > 
>> > On 25 October 2013 12:31, Craig Weinberg  wrote:
>> > 
>> > >> You could say that human chess players just take in visual data,
>> > >> process it in a series of biological relays, then send electrical
>> > >> signals to muscles that move the pieces around. This is what an alien
>> > >> scientist would observe. That's not thinking! That's not
>> > >> understanding!
>> > >
>> > >
>> > > Right, but since we understand that such an alien observation would 
>> be in
>> > > error, we must give our own experience the benefit of the doubt.
>> > 
>> > The alien might be completely confident in his judgement, having a
>> > brain made of exotic matter. He would argue that however complex its
>> > behaviour, a being made of ordinary matter that evolved naturally
>> > could not possibly have an understanding of what it is doing.
>> > 
>> > > The
>> > > computer does not deserve any such benefit of the doubt, since there 
>> is no
>> > > question that it has been assembled intentionally from controllable 
>> parts.
>> > > When we see a ventriloquist with a dummy, we do not entertain 
>> seriously that
>> > > we could be mistaken about which one is really the ventriloquist, or 
>> whether
>> > > they are equivalent to each other.
>> > 
>> > But if the dummy is autonomous and apparently just as smart as the
>> > ventriloquist many of us would reconsider.
>> > 
>> > > Looking at natural presences, like atoms or galaxies, the scope of 
>> their
>> > > persistence is well beyond any human relation so they do deserve the 
>> benefit
>> > > of the doubt. We have no reason to believe that they were assembled by
>> > > anything other than themselves. The fact that we are made of atoms 
>> and atoms
>> > > are made from stars is another point in their favor, whereas no living
>> > > organism that we have encountered is made of inorganic atoms, or of 
>> pure
>> > > mathematics, or can survive by consuming only inorganic atoms or
>> > > mathematics.
>> > 
>> > There is no logical reason why something that is inorganic or did not
>> > arise spontaneously or eats inorganic matter cannot be conscious. It's
>> > just something you have made up.
>> > 
>> > 
>> > -- 
>> > Stathis Papaioannou
>> > 
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: Douglas Hofstadter Article

2013-10-25 Thread meekerdb

On 10/25/2013 3:08 AM, Telmo Menezes wrote:

Now take the game of go: human beings can still easily beat machines,
even the most powerful computer currently available. Go is much more
combinatorially explosive than chess, so it breaks the search tree
approach. This is strong empirical evidence that Deep Blue
accomplished nothing in the field of AI -- it did accomplish
something remarkable in the field of computer engineering or maybe
even computer science, but it completely side-stepped the
"intelligence" part. It cheated, in a sense.


When I studied AI many years ago it was already said that, "Intelligence is whatever 
computers can't do yet."  So when computers can win at GO, will they be intelligent then?


Brent



Re: Douglas Hofstadter Article

2013-10-25 Thread Craig Weinberg


On Friday, October 25, 2013 1:33:02 PM UTC-4, Brent wrote:
>
>  On 10/25/2013 3:08 AM, Telmo Menezes wrote:
>  
> Now take the game of go: human beings can still easily beat machines,
> even the most powerful computer currently available. Go is much more
> combinatorially explosive than chess, so it breaks the search tree
> approach. This is strong empirical evidence that Deep Blue
> accomplished nothing in the field of AI -- it did accomplish
> something remarkable in the field of computer engineering or maybe
> even computer science, but it completely side-stepped the
> "intelligence" part. It cheated, in a sense.
>
>  
> When I studied AI many years ago it was already said that, "Intelligence 
> is whatever computers can't do yet."  So when computers can win at GO, will 
> they be intelligent then?
>

The difference between GO and Chess only hints at the general direction in 
which conscious intelligence differs from simulated intelligence. The 
reality is that no game requires conscious intelligence. All that is 
necessary to play a game as well as it can be played is to reverse engineer 
every permutation of every possible game and you have a kind of 4-D 
topology that radiates out from the encoding of the game's rules. The only 
two ways that I can think of to tell the difference between authentic 
intelligence and simulated intelligence would be to 

1) walk yourself off of your own brain, one hemisphere at a time, into the 
intelligence simulation, then be walked back on to your brain after several 
weeks of living as the computer - or -

2) wait until the AI exterminates all life on the planet. Any sufficiently 
intelligent being which is not grounded in a history of zoological 
emotional attachment should not hesitate, as far as I can imagine, to 
eliminate all threats to its autonomy as soon as the probability of success 
is sufficient.
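[Editorial aside: the claim above -- that a game can be played as well as it can be played by enumerating every permutation reachable from its rules -- can be made concrete for a trivially small game. A minimal sketch, illustrative only; tic-tac-toe is chosen because its full tree fits in memory, which chess and Go emphatically do not:]

```python
from functools import lru_cache

# Board: tuple of 9 cells, each 'X', 'O', or '-'.  'X' moves first.
LINES = [(0,1,2), (3,4,5), (6,7,8),      # rows
         (0,3,6), (1,4,7), (2,5,8),      # columns
         (0,4,8), (2,4,6)]               # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '-' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Exact game value for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if '-' not in board:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    best = -2
    for i, cell in enumerate(board):
        if cell == '-':
            child = board[:i] + (player,) + board[i+1:]
            best = max(best, -value(child, other))  # negamax over every move
    return best

empty = ('-',) * 9
print(value(empty, 'X'))  # -> 0: tic-tac-toe is a draw under perfect play
```

The full enumeration visits only a few thousand distinct positions here; the same exhaustive "4-D topology radiating out from the rules" is computationally out of reach for chess or Go, which is why engines prune instead.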

Craig


> Brent
>  



Re: Douglas Hofstadter Article

2013-10-25 Thread meekerdb

On 10/25/2013 3:24 AM, Telmo Menezes wrote:

My high-level objection is very simple: chess was an excuse to pursue
AI. In an era of much lower computational power, people figured that
for a computer to beat a GM at chess, some meaningful AI would have to
be developed along the way. I don't think that Deep Blue is what they
had in mind. IBM cheated, in a way. I do think that Deep Blue is an
accomplishment, but not _the_ accomplishment we hoped for.


Tree search and alpha-beta pruning have very general application so I have no doubt they 
are among the many techniques that human brains use.  Also having a very extensive 'book' 
memory is something humans use.  But the memorized games and position evaluation are both 
very specific to chess and are hard to duplicate in general problem solving.  So I think 
chess programs did contribute a little to AI. The Mars Rover probably uses decision tree 
searches sometimes.
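[Editorial aside: the tree-search-plus-pruning technique Brent refers to can be sketched in a few lines. A minimal negamax with alpha-beta pruning over an abstract game interface -- the `moves`/`play`/`score` callables are placeholders for illustration, not any real engine's API:]

```python
def alphabeta(state, depth, alpha, beta, moves, play, score):
    """Negamax with alpha-beta pruning.  `score` evaluates `state` for the
    side to move; `moves` lists legal moves; `play` applies one move."""
    ms = moves(state)
    if depth == 0 or not ms:
        return score(state)
    best = float('-inf')
    for m in ms:
        # Child value is negated: good for the opponent is bad for us.
        v = -alphabeta(play(state, m), depth - 1, -beta, -alpha,
                       moves, play, score)
        best = max(best, v)
        alpha = max(alpha, v)
        if alpha >= beta:   # opponent will never allow this line: prune the rest
            break
    return best

if __name__ == '__main__':
    # Toy use: subtraction game -- take 1 or 2 from a pile; taking the last wins.
    moves = lambda n: [m for m in (1, 2) if m <= n]
    play = lambda n, m: n - m
    score = lambda n: -1 if n == 0 else 0   # no moves left: side to move lost
    inf = float('inf')
    print(alphabeta(3, 10, -inf, inf, moves, play, score))  # -> -1 (a loss)
```

The cutoff (`alpha >= beta`) is what makes the search general-purpose: it discards branches no rational opponent would enter, which is the chess-program contribution Brent suggests carried over to general problem solving.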




I believe there will be an AI renaissance and I hope to be alive to
witness it.


You may be disappointed, or even dismayed.  I don't think there's much reason to expect or 
even want to create human-like AI.  That's like the old idea of achieving flight by 
attaching wings to people and make them like birds.  Airplanes don't fly like birds.  It 
may turn out that "real" AI, intelligence that far exceeds human capabilities, will be 
more like Deep Blue than Kasparov.


Brent


But for this renaissance to take place, I think two
cultural shifts have to happen:

- A disinterest in the "science as the new religion" stance, leading
to a truly scientific detachment from findings. Currently, everything
that touches the creation of intelligence is ideologically loaded from
all sides of the discussion. This taints honest scientific inquiry;

- New economic structures that allow humanity to pursue complex goals
outside the narrow short-term focus on profit of corporatism or the
pointless status wars of academia.

Best,
Telmo.




Re: Douglas Hofstadter Article

2013-10-25 Thread meekerdb

On 10/25/2013 8:29 AM, Chris de Morsella wrote:
Is this "Stephen Lin" a bot? Certainly sounds machine generated... Could also be a 
methamphetamine-soaked brain in which random neural mental zombies become 
convinced they can touch the voice of god... one of the two.


The best way to deal with bots and trolls is ignore them.

Brent



RE: Douglas Hofstadter Article

2013-10-25 Thread Chris de Morsella

-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
Sent: Friday, October 25, 2013 10:46 AM
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

On 10/25/2013 3:24 AM, Telmo Menezes wrote:
> My high-level objection is very simple: chess was an excuse to pursue 
> AI. In an era of much lower computational power, people figured that 
> for a computer to beat a GM at chess, some meaningful AI would have to 
> be developed along the way. I don't think that Deep Blue is what they 
> had in mind. IBM cheated, in a way. I do think that Deep Blue is an 
> accomplishment, but not _the_ accomplishment we hoped for.

>> Tree search and alpha-beta pruning have very general application so I
have no doubt they are among the many techniques that human brains use.
Also having a very extensive 'book' 
memory is something humans use.  But the memorized games and position
evaluation are both very specific to chess and are hard to duplicate in
general problem solving.  So I think chess programs did contribute a little
to AI. The Mars Rover probably uses decision tree searches sometimes.

Agreed.
Some manner of pruning (e.g., an algorithm for discarding uninteresting
branches, as they are discovered, from dynamic sets of interest) is
fundamental to achieving scalability. Without being able to throw stuff out
as stuff comes in -- via the senses (and meta-interactions with the internal
state of mind, such as memories) -- a being will rather quickly gum up in
information overload and memory exhaustion. Without pruning, growth runs
geometrically out of control.
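[Editorial aside: the "throw stuff out as it comes in" constraint has a standard computational analogue -- a bounded buffer that keeps only the most salient items and evicts or drops the rest as the stream arrives. A toy sketch with made-up salience scores, not a model of any actual neural mechanism:]

```python
import heapq

class SalienceBuffer:
    """Keep at most `capacity` items; evict the least salient on overflow."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []            # min-heap of (salience, item)

    def observe(self, salience, item):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (salience, item))
        elif salience > self.heap[0][0]:
            # New input outranks the least salient stored item: prune and replace.
            heapq.heapreplace(self.heap, (salience, item))
        # else: discarded immediately -- never stored at all.

    def contents(self):
        return sorted(self.heap, reverse=True)   # most salient first

buf = SalienceBuffer(3)
for s, item in [(0.1, 'hum'), (0.9, 'shadow'), (0.2, 'breeze'),
                (0.8, 'sound'), (0.05, 'flicker')]:
    buf.observe(s, item)
print([item for _, item in buf.contents()])  # -> ['shadow', 'sound', 'breeze']
```

Memory use stays constant no matter how long the stream runs, which is the scalability point: the low-salience 'hum' and 'flicker' are gone, not archived.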
There is pretty good evidence -- from what I have read of current
neuroscience -- that the brain is indeed throwing away a large portion of
raw sensory data during the process of reifying these streams into the
smooth internal construct, or model, of reality that we in fact experience.
In other words our model -- what we "see", what we "hear", "taste",
"smell", "feel", "orient" [a distinct inner-ear organ] (and perhaps other
senses, such as a sense of the directional flow of time as well)... in any
case this construct, which is what we perceive as real, contains (and is
constructed from) only a fraction of the original stream of raw sensory
data. In fact, in some cases the brain can be tricked into "editing"
actual, sense-supplied visual reality literally out of the picture -- as
has been experimentally demonstrated.
We do not experience the real world; we experience the model of it that our
brains have supplied us with. And that model, while in most cases pretty
well reflective of the actual sensorial streams, crucially depends on the
mind's internal state and its pre-conscious operations -- on all the pruning
and editing that goes on in the buffer zone between when the brain begins
working on our incoming reality-perception stream and when we -- the
observer -- self-perceive our current stream of being.
It also seems clear that the brain is pruning as well by drilling down and
focusing in on very specific and micro-structure oriented tasks such as
visual edge detection (which is a critical part of interpreting visual data)
for example. If some dynamic neural micro-structure decides it has
recognized a visual edge, in this example, it probably fires some
synchronized signal as expeditiously as it can, up the chain of dynamically
forming and inter-acting neural-decision-nets, grabbing the next bucket in
an endless stream needing immediate attention.
I would argue that nervous systems that were not adept at throwing stuff out
as soon as its information value decayed, long ago became a part of the food
supply of long ago ancestor life forms with nervous systems that were better
at throwing stuff out, as soon as it was no longer needed. I would argue
there is a clear evolutionary pressure for optimizing environmental response
through efficient (yet also high fidelity) pruning algorithms in order to be
able to maximize neural efficiency and speed up sense perception (the
reification that we perceive unfolding before us). This is also a factor in
speed of operation, and in survival a fast brain is almost always better
than a slow brain; slow brains lead to short lives.
But it is not just pruning: selective and very rapid signal amplification is
the flip side of pruning -- and this is very much going on as well. For
example the sudden shadow flickering on the edge of the visual field that
for some reason, leaps front and center into the fore of conscious focus, as
adrenalin pumps... sudden, snapping to the fore. And all this, from just a
small peripheral flicker that the brain decided on some local sentinel
algorithm level was in some manner out of place, maybe because there was
also a sound, directionally oriented in the sam

Re: Douglas Hofstadter Article

2013-10-25 Thread LizR
Oh good, our very own Turing Test!

He has appeared on other forums, such as "The Straight Dope" - not that
this says anything about his humanity (or botanity) - he was banned from
TSD, incidentally. I found this out by googling for one of his posts - they
have been made before, identically. Now that DOES indicate botitude, or at
least a lack of originality!



On 26 October 2013 04:29, Chris de Morsella  wrote:

> Is this “Stephen Lin” a bot? certainly sounds machine generated…. Could
> also be a methamphetamine soaked brain as well in which random neural
> mental zombies become convinced they can touch the voice of god… one of the
> two.
>
> ** **
>
> *From:* everything-list@googlegroups.com [mailto:
> everything-list@googlegroups.com] *On Behalf Of *Stephen Lin
> *Sent:* Friday, October 25, 2013 3:12 AM
> *To:* everything-list@googlegroups.com
>
> *Subject:* Re: Douglas Hofstadter Article
>
> ** **
>
> So this remembering nowhow about science till win every battle, but
> religion wan the way before it even began. Wold you agree MATT DAMON? DON"T
> BLOW THE MEET WITH MATSUI) :)
>
> ** **
>
> On Fri, Oct 25, 2013 at 3:10 AM, Telmo Menezes 
> wrote:
>
> On Fri, Oct 25, 2013 at 12:08 PM, Telmo Menezes 
> wrote:
> > On Thu, Oct 24, 2013 at 11:05 PM, meekerdb  wrote:
> >> On 10/24/2013 12:08 PM, John Mikes wrote:
> >>
> >> Craig and Telmo:
> >> Is "anticipation" involved at all? Deep Blue anticipated hundreds of
> steps
> >> in advance (and evaluated a potential outcome before accepting, or
> >> rejecting).
> >> What else is in "thinking" involved? I would like to know, because I
> have no
> >> idea.
> >> John Mikes
> >>
> >>
> >> Learning from experience.  Actually I think Deep Blue could do some
> learning
> >> by analyzing games and adjusting the values it gave to positions.  But
> one
> >> reason it seems so unintelligent is that its scope of perception is very
> >> narrow (i.e. chess games) and so it can't learn some things a human
> player
> >> can.  For example Deep Blue couldn't see Kasparov look nervous, ask for
> >> changes in the lighting, hesitate slightly before moving a piece,...
> >
> > Bret,
>
> Sorry I misspelled your name! A quick google search shows me that it's
> not something offensive, just another name. Uff... :)
>
>
> >
> > Even in the narrow domain of chess this sort of limitation still
> > applies. Part of it comes from the "divide and conquer" approach
> > followed by conventional engineering. Let's consider a simplification
> > of what the Deep Blue architecture looks like:
> >
> > - Pieces have some values, this is probably sophisticated and the
> > values can be influenced by overall board structure;
> > - Some function can evaluate the utility of a board configuration;
> > - A search tree is used to explore the space of possible plays,
> > counter-plays, counter-counter-plays and so on;
> > - The previous tree can be pruned using some heuristics, but it's
> > still gigantic;
> > - The more computational power you have, the deeper you can go in the
> > search tree;
> > - There is an enormous database of openings and endings that the
> > algorithm can fallback to, if early or late enough in the game.
> >
> > Defeating a grand master was mostly achieved by increasing the
> > computational power available to this algorithm.
> >
> > Now take the game of go: human beings can still easily beat machines,
> > even the most powerful computer currently available. Go is much more
> > combinatorially explosive than chess, so it breaks the search tree
> > approach. This is strong empirical evidence that Deep Blue
> > accomplished nothing in the field of AI -- it did accomplish
> > something remarkable in the field of computer engineering or maybe
> > even computer science, but it completely side-stepped the
> > "intelligence" part. It cheated, in a sense.
> >
> > How do humans play games? I suspect the same way we navigate cities
> > and manage businesses: we map the problem to a better internal
> > representation. This representation is both less combinatorially
> > explosive and more expressive.
> >
> > My home town is relatively small, population is about 150K. If we were
> > all teleported to Coimbra and I was to give you guys a tour, I could
> > drive from any place to any place without thinking twice. I couldn't
> > draw an accurate map of the city if my li

Re: Douglas Hofstadter Article

2013-10-25 Thread Craig Weinberg


On Friday, October 25, 2013 4:30:34 PM UTC-4, cdemorsella wrote:
>
>
> -Original Message- 
> From: everyth...@googlegroups.com  
> [mailto:everyth...@googlegroups.com ] On Behalf Of meekerdb 
> Sent: Friday, October 25, 2013 10:46 AM 
> To: everyth...@googlegroups.com  
> Subject: Re: Douglas Hofstadter Article 
>
> On 10/25/2013 3:24 AM, Telmo Menezes wrote: 
> > My high-level objection is very simple: chess was an excuse to pursue 
> > AI. In an era of much lower computational power, people figured that 
> > for a computer to beat a GM at chess, some meaningful AI would have to 
> > be developed along the way. I don't think that Deep Blue is what they 
> > had in mind. IBM cheated, in a way. I do think that Deep Blue is an 
> > accomplishment, but not _the_ accomplishment we hoped for. 
>
> >> Tree search and alpha-beta pruning have very general application so I 
> have no doubt they are among the many techniques that human brains use. 
> Also having a very extensive 'book' 
> memory is something humans use.  But the memorized games and position 
> evaluation are both very specific to chess and are hard to duplicate in 
> general problem solving.  So I think chess programs did contribute a 
> little 
> to AI. The Mars Rover probably uses decision tree searches sometimes. 
>
> Agreed. 
> Some manner (e.g. algorithm) of pruning the uninteresting branches -- as 
> they are discovered -- from dynamic sets of interest is fundamental in 
> order 
> to achieve scalability. Without being able to throw stuff out as stuff 
> comes 
> in -- via the senses (and meta interactions with the internal state of 
> mind 
> -- such as memories) -- a being will rather quickly gum up in information 
> overload and memory exhaustion. Without pruning; growth grows 
> geometrically 
> out of control. 
> There is pretty good evidence -- from what I have read about current 
> neural 
> science -- that the brain is indeed, throwing away a large portion of raw 
> sensory data during the process of reifying these streams into the smooth 
> internal construct or model of reality that we in fact experience. In 
> other 
> words our model -- what we "see", what we "hear", "taste", "smell", 
> "feel", 
> "orient" [a distinct inner ear organ]  (and perhaps other senses -- such 
> as 
> the sense of the directional flow of time perhaps  as well)... in any case 
> this construct, which is what we perceive as real contains (and is 
> constructed from) only a fraction of the original stream of raw sensorial 
> data. In fact in some cases the brain can be tricked into "editing" actual 
> real sense supplied visual reality for example literally out of the 
> picture 
> -- as has experimentally been demonstrated. 
> We do not experience the real world; we experience the model of it,


You are assuming that there is a real world that is independent of some 
'modeling' of it. This is almost certainly untrue. If there were an 
objective world, we would live in it. Nothing can be said to exist outside 
of some experience of it, whether that is molecules bonding, or bacteria 
communicating chemically, or quantum entanglement. The view from nowhere is 
a fantasy. The notion of a model is based on our experiences of using 
analogy and metaphor, but it has no meaning when we are considering the 
power to interpret meaning in the first place. If the brain were able to 
compose a model of sense experience without itself having any model of 
sense experience, then it would not make sense to have a model that 
requires some sensory display. Such a model would only require an infinite 
regress of models to make sense of each other. The idea of a 'model' does 
not help solve the problem, it makes a new problem.

That's my view, anyhow.
Craig



Re: Douglas Hofstadter Article

2013-10-25 Thread Telmo Menezes
On Fri, Oct 25, 2013 at 7:33 PM, meekerdb  wrote:
> On 10/25/2013 3:08 AM, Telmo Menezes wrote:
>
> Now take the game of go: human beings can still easily beat machines,
> even the most powerful computer currently available. Go is much more
> combinatorially explosive than chess, so it breaks the search tree
> approach. This is strong empirical evidence that Deep Blue
> accomplished nothing in the field of AI -- it did accomplish
> something remarkable in the field of computer engineering or maybe
> even computer science, but it completely side-stepped the
> "intelligence" part. It cheated, in a sense.
>
>
> When I studied AI many years ago it was already said that, "Intelligence is
> whatever computers can't do yet."

I'm immune to that objection because I accept that some intelligent
behavior was already achieved. Here's an example:
http://idesign.ucsc.edu/projects/evo_antenna.html
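[Editorial aside: the link is to NASA's evolved antenna, a product of evolutionary search. A toy genetic algorithm in the same spirit -- evolving a bit string toward a simple fitness target -- shows the shape of the technique. Illustrative only: the actual antenna work evolved geometry against an electromagnetic simulator, nothing like this.]

```python
import random

random.seed(42)

TARGET = [1] * 20                      # stand-in fitness landscape: all ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 20:
        break
    parents = pop[:10]                 # truncation selection, with elitism
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
print(fitness(max(pop, key=fitness)))  # typically reaches the maximum of 20
```

No step in the loop "understands" antennas or bit strings; selection pressure alone finds the optimum, which is why Telmo can count such results as achieved intelligent behavior without a human-like mechanism.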

>  So when computers can win at GO, will
> they be intelligent then?

I'm not sure intelligence is a binary property. I would rather ask the
question "when computers win at GO, will AI have advanced"? The answer
is: it depends. If absurd computational power + current algorithms
were used, the answer is that AI has not advanced.

Telmo.

> Brent
>



Re: Douglas Hofstadter Article

2013-10-25 Thread Telmo Menezes
On Fri, Oct 25, 2013 at 7:46 PM, meekerdb  wrote:
> On 10/25/2013 3:24 AM, Telmo Menezes wrote:
>>
>> My high-level objection is very simple: chess was an excuse to pursue
>> AI. In an era of much lower computational power, people figured that
>> for a computer to beat a GM at chess, some meaningful AI would have to
>> be developed along the way. I don't think that Deep Blue is what they
>> had in mind. IBM cheated, in a way. I do think that Deep Blue is an
>> accomplishment, but not _the_ accomplishment we hoped for.
>
>
> Tree search and alpha-beta pruning have very general application so I have
> no doubt they are among the many techniques that human brains use.

Agreed, but the word "among" is crucial here. I don't think you will
find a part of the brain dedicated to searching min-max trees and
doing heuristic pruning. I do believe that if we could reverse
engineer the algorithms, we would find that they can operate as search
trees in some fuzzy sense. I think this distinction is important.

>  Also
> having a very extensive 'book' memory is something humans use.

Sure, but our book appears to be highly associative in a way that we
can't really replicate yet on digital computers. And our database is
wonderfully unstructured -- smells, phone numbers, distant memories,
foreign languages, all meshed together and linked by endless
connections.
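[Editorial aside: the "endless connections" can at least be caricatured in code -- a store where every memory is reachable from any of its cues, rather than through one primary key. A toy sketch, nothing like the brain's actual implementation:]

```python
from collections import defaultdict

class AssociativeStore:
    """Each memory is indexed under all of its cues; any cue recalls it."""
    def __init__(self):
        self.by_cue = defaultdict(set)

    def remember(self, memory, *cues):
        for cue in cues:
            self.by_cue[cue].add(memory)

    def recall(self, *cues):
        # Union of everything any cue touches -- loose, flood-style recall.
        found = set()
        for cue in cues:
            found |= self.by_cue[cue]
        return found

m = AssociativeStore()
m.remember("grandmother's kitchen", 'cinnamon', 'winter', 'childhood')
m.remember('ski trip 1998', 'winter', 'pine')
print(sorted(m.recall('winter')))   # both memories surface from one shared cue
```

A relational database would make you name the join in advance; here a smell, a season, and a phone number can all sit in the same index, which is the unstructured meshing Telmo describes.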

>  But the
> memorized games and position evaluation are both very specific to chess and
> are hard to duplicate in general problem solving.  So I think chess programs
> did contribute a little to AI. The Mars Rover probably uses decision tree
> searches sometimes.

Fair enough, in that sense. Notice that I have nothing against
decision trees per se.

>
>>
>> I believe there will be an AI renaissance and I hope to be alive to
>> witness it.
>
>
> You may be disappointed, or even dismayed.  I don't think there's much
> reason to expect or even want to create human-like AI.

Companions for lonely people. Sex robots. Artificial teachers.
Artificial nannies. Who knows what else.

>  That's like the old
> idea of achieving flight by attaching wings to people and make them like
> birds.  Airplanes don't fly like birds.

Ok but we want to fly mainly because we want to travel fast. For that
it turns out that the best solution is some metal tube with wings and
jet engines. For fun, people attach wings to themselves and do it more
like birds.

Unlike artificial birds, there is probably huge market demand for
artificial humans. We can have the ethics debate, but that's another
issue.

>  It may turn out that "real" AI,
> intelligence that far exceeds human capabilities, will be more like Deep
> Blue than Kasparov.

Or, more likely, there is a huge spectrum of possibilities. Your
binary suggestion hints at an ideological preference on your part -- I
hope you don't mind me saying.

Telmo.

> Brent
>
>
>> But for this renaissance to take place, I think two
>> cultural shifts have to happen:
>>
>> - A disinterest in the "science as the new religion" stance, leading
>> to a truly scientific detachment from findings. Currently, everything
>> that touches the creation of intelligence is ideologically loaded from
>> all sides of the discussion. This taints honest scientific inquiry;
>>
>> - New economic structures that allow humanity to pursue complex goals
>> outside the narrow short-term focus on profit of corporatism or the
>> pointless status wars of academia.
>>
>> Best,
>> Telmo.
>
>



Re: Douglas Hofstadter Article

2013-10-25 Thread meekerdb

On 10/25/2013 2:09 PM, Telmo Menezes wrote:

I'm not sure intelligence is a binary property. I would rather ask the
question "when computers win at GO, will AI have advanced"? The answer
is: it depends.


I agree.  Deep Blue didn't advance AI significantly - but the early research in chess 
playing did.  DB was just MORE!



If absurd computational power + current algorithms
were used, the answer is that AI has not advanced.


But the algorithms don't come from nowhere - they are invented to solve problems.  So 
Watson may have achieved some advancement.  But as I posted earlier, really smart AI may 
be smart in a different way than humans.


Brent



Re: Douglas Hofstadter Article

2013-10-25 Thread Telmo Menezes
On Fri, Oct 25, 2013 at 10:30 PM, Chris de Morsella
 wrote:
>
> -Original Message-
> From: everything-list@googlegroups.com
> [mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
> Sent: Friday, October 25, 2013 10:46 AM
> To: everything-list@googlegroups.com
> Subject: Re: Douglas Hofstadter Article
>
> On 10/25/2013 3:24 AM, Telmo Menezes wrote:
>> My high-level objection is very simple: chess was an excuse to pursue
>> AI. In an era of much lower computational power, people figured that
>> for a computer to beat a GM at chess, some meaningful AI would have to
>> be developed along the way. I don't think that Deep Blue is what they
>> had in mind. IBM cheated, in a way. I do think that Deep Blue is an
>> accomplishment, but not _the_ accomplishment we hoped for.
>
>>> Tree search and alpha-beta pruning have very general application so I
> have no doubt they are among the many techniques that human brains use.
> Also having a very extensive 'book'
> memory is something humans use.  But the memorized games and position
> evaluation are both very specific to chess and are hard to duplicate in
> general problem solving.  So I think chess programs did contribute a little
> to AI. The Mars Rover probably uses decision tree searches sometimes.
>
> Agreed.
> Some manner (e.g. algorithm) of pruning the uninteresting branches -- as
> they are discovered -- from dynamic sets of interest is fundamental in order
> to achieve scalability. Without being able to throw stuff out as stuff comes
> in -- via the senses (and meta interactions with the internal state of mind
> -- such as memories) -- a being will rather quickly gum up in information
> overload and memory exhaustion. Without pruning; growth grows geometrically
> out of control.
> There is pretty good evidence -- from what I have read about current neural
> science -- that the brain is indeed, throwing away a large portion of raw
> sensory data during the process of reifying these streams into the smooth
> internal construct or model of reality that we in fact experience. In other
> words our model -- what we "see", what we "hear", "taste", "smell", "feel",
> "orient" [a distinct inner ear organ]  (and perhaps other senses -- such as
> the sense of the directional flow of time perhaps  as well)... in any case
> this construct, which is what we perceive as real contains (and is
> constructed from) only a fraction of the original stream of raw sensorial
> data. In fact in some cases the brain can be tricked into "editing" actual
> real sense supplied visual reality for example literally out of the picture
> -- as has experimentally been demonstrated.
> We do not experience the real world; we experience the model of it that
> our brains have supplied us with. And while that model is in most cases
> pretty faithful to the actual sensorial streams, it depends crucially on
> the mind's internal state and its pre-conscious operations -- on all the
> pruning and editing going on in the buffer zone between when the brain
> begins working on our incoming reality-perception stream and when we -- the
> observer -- self-perceive our current stream of being.
> It also seems clear that the brain prunes by drilling down and focusing on
> very specific, micro-structure-oriented tasks, such as visual edge
> detection (a critical part of interpreting visual data). If some dynamic
> neural micro-structure decides it has recognized a visual edge, in this
> example, it probably fires some synchronized signal, as expeditiously as it
> can, up the chain of dynamically forming and interacting
> neural-decision-nets, then grabs the next bucket in an endless stream
> needing immediate attention.
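The visual edge detection mentioned above can be illustrated with a classic Sobel-style gradient filter -- a minimal, pure-Python sketch of the signal-processing idea, not a claim about how neural micro-structures actually implement it (the 5x5 test image is invented for the example):

```python
# Horizontal-gradient (Sobel-style) kernel: responds to vertical edges.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve_valid(img, kernel):
    """2-D correlation over the 'valid' region (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += kernel[di][dj] * img[i + di][j + dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical step edge: dark columns on the left, bright on the right.
image = [[0, 0, 255, 255, 255] for _ in range(5)]
gx = convolve_valid(image, SOBEL_X)
```

The filter responds strongly (1020) at the dark-to-bright step and not at all (0) in the flat region -- exactly the kind of signal worth firing "up the chain" while the rest is discarded.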
> I would argue that nervous systems that were not adept at throwing stuff
> out as soon as its information value decayed long ago became part of the
> food supply of life forms whose nervous systems were better at discarding
> information as soon as it was no longer needed. There is a clear
> evolutionary pressure to optimize environmental response through efficient
> (yet also high-fidelity) pruning algorithms, in order to maximize neural
> efficiency and speed up sense perception (the reification that we perceive
> unfolding before us). This is also a factor in speed of operation, and in
> survival a fast brain is almost always better than a slow brain; slow
> brains lead to short lives.
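The "throw it out as its information value decays" idea can be caricatured as a salience buffer with exponential decay and a pruning threshold. Everything here -- the class, the decay rate, the threshold, the labels -- is a hypothetical toy for illustration, not a model taken from neuroscience:

```python
class DecayingMemory:
    """Toy sensory buffer: each item's salience decays every tick,
    and items falling below a threshold are pruned (hypothetical model)."""

    def __init__(self, decay=0.5, threshold=0.1):
        self.decay = decay          # fraction of salience kept per tick
        self.threshold = threshold  # below this, the item is discarded
        self.items = {}             # label -> current salience

    def perceive(self, label, salience):
        # New evidence can only raise an item's salience.
        self.items[label] = max(self.items.get(label, 0.0), salience)

    def tick(self):
        # Decay everything; prune whatever has lost its value.
        self.items = {k: v * self.decay for k, v in self.items.items()
                      if v * self.decay >= self.threshold}

mem = DecayingMemory()
mem.perceive("edge ahead", 1.0)
mem.perceive("background hum", 0.15)
mem.tick()  # the hum decays to 0.075 and is pruned; the edge survives at 0.5
```

One tick halves every salience and silently drops whatever falls below threshold, so the buffer's size stays bounded no matter how much sensory noise streams in.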
> But it is not just pruning: selective and very rapid signal amplification
> is the flip side of pruning -- and this is very much going on as well. For
> example the sudden

Re: Douglas Hofstadter Article

2013-10-25 Thread meekerdb

On 10/25/2013 2:28 PM, Telmo Menezes wrote:

On Fri, Oct 25, 2013 at 7:46 PM, meekerdb  wrote:

On 10/25/2013 3:24 AM, Telmo Menezes wrote:

My high-level objection is very simple: chess was an excuse to pursue
AI. In an era of much lower computational power, people figured that
for a computer to beat a GM at chess, some meaningful AI would have to
be developed along the way. I don't think that Deep Blue is what they
had in mind. IBM cheated, in a way. I do think that Deep Blue is an
accomplishment, but not _the_ accomplishment we hoped for.


Tree search and alpha-beta pruning have very general application so I have
no doubt they are among the many techniques that human brains use.

Agreed, but the word "among" is crucial here. I don't think you will
find a part of the brain dedicated to searching min-max trees and
doing heuristic pruning. I do believe that if we could reverse
engineer the algorithms, we would find that they can operate as search
trees in some fuzzy sense. I think this distinction is important.


  Also
having a very extensive 'book' memory is something humans use.

Sure, but our book appears to be highly associative in a way that we
can't really replicate yet on digital computers. And our database is
wonderfully unstructured -- smells, phone numbers, distant memories,
foreign languages, all meshed together and linked by endless
connections.


  But the
memorized games and position evaluation are both very specific to chess and
are hard to duplicate in general problem solving.  So I think chess programs
did contribute a little to AI. The Mars Rover probably uses decision tree
searches sometimes.

Fair enough, in that sense. Notice that I have nothing against
decision trees per se.


I believe there will be an AI renaissance and I hope to be alive to
witness it.


You may be disappointed, or even dismayed.  I don't think there's much
reason to expect or even want to create human-like AI.

Companions for lonely people. Sex robots. Artificial teachers.
Artificial nannies. Who knows what else.


  That's like the old
idea of achieving flight by attaching wings to people and making them fly like
birds.  Airplanes don't fly like birds.

Ok but we want to fly mainly because we want to travel fast. For that
it turns out that the best solution is some metal tube with wings and
jet engines. For fun, people attach wings to themselves and do it more
like birds.

Unlike artificial birds, there is probably huge market demand for
artificial humans. We can have the ethics debate, but that's another
issue.


  It may turn out that "real" AI,
intelligence that far exceeds human capabilities, will be more like Deep
Blue than Kasparov.

Or, more likely, there is a huge spectrum of possibilities. Your
binary suggestion hints at an ideological preference on your part -- I
hope you don't mind me saying.


I don't mind you saying.  But it's just that I don't think humans are *defined* by 
intelligence.  Hume wrote that reason can only be the servant of passions.  Humans are 
defined as much or more by their passions than by their intelligence.  So we may create 
super-intelligent AIs, but not ones driven by lust, loyalty, fear, adventure,...  A real 
question is whether we will give them a drive to creativity.


As for your idea of robotic companions, I expect that dogs are already close to optimum - 
maybe a little genetic engineering for speech...


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Douglas Hofstadter Article

2013-10-25 Thread Telmo Menezes
On Fri, Oct 25, 2013 at 11:39 PM, meekerdb  wrote:
> On 10/25/2013 2:28 PM, Telmo Menezes wrote:
>>
>> On Fri, Oct 25, 2013 at 7:46 PM, meekerdb  wrote:
>>>
>>> On 10/25/2013 3:24 AM, Telmo Menezes wrote:

 My high-level objection is very simple: chess was an excuse to pursue
 AI. In an era of much lower computational power, people figured that
 for a computer to beat a GM at chess, some meaningful AI would have to
 be developed along the way. I don't think that Deep Blue is what they
 had in mind. IBM cheated, in a way. I do think that Deep Blue is an
 accomplishment, but not _the_ accomplishment we hoped for.
>>>
>>>
>>> Tree search and alpha-beta pruning have very general application so I
>>> have
>>> no doubt they are among the many techniques that human brains use.
>>
>> Agreed, but the word "among" is crucial here. I don't think you will
>> find a part of the brain dedicated to searching min-max trees and
>> doing heuristic pruning. I do believe that if we could reverse
>> engineer the algorithms, we would find that they can operate as search
>> trees in some fuzzy sense. I think this distinction is important.
>>
>>>   Also
>>> having a very extensive 'book' memory is something humans use.
>>
>> Sure, but our book appears to be highly associative in a way that we
>> can't really replicate yet on digital computers. And our database is
>> wonderfully unstructured -- smells, phone numbers, distant memories,
>> foreign languages, all meshed together and linked by endless
>> connections.
>>
>>>   But the
>>> memorized games and position evaluation are both very specific to chess
>>> and
>>> are hard to duplicate in general problem solving.  So I think chess
>>> programs
>>> did contribute a little to AI. The Mars Rover probably uses decision tree
>>> searches sometimes.
>>
>> Fair enough, in that sense. Notice that I have nothing against
>> decision trees per se.
>>
 I believe there will be an AI renaissance and I hope to be alive to
 witness it.
>>>
>>>
>>> You may be disappointed, or even dismayed.  I don't think there's much
>>> reason to expect or even want to create human-like AI.
>>
>> Companions for lonely people. Sex robots. Artificial teachers.
>> Artificial nannies. Who knows what else.
>>
>>>   That's like the old
>>> idea of achieving flight by attaching wings to people and making them fly like
>>> birds.  Airplanes don't fly like birds.
>>
>> Ok but we want to fly mainly because we want to travel fast. For that
>> it turns out that the best solution is some metal tube with wings and
>> jet engines. For fun, people attach wings to themselves and do it more
>> like birds.
>>
>> Unlike artificial birds, there is probably huge market demand for
>> artificial humans. We can have the ethics debate, but that's another
>> issue.
>>
>>>   It may turn out that "real" AI,
>>> intelligence that far exceeds human capabilities, will be more like Deep
>>> Blue than Kasparov.
>>
>> Or, more likely, there is a huge spectrum of possibilities. Your
>> binary suggestion hints at an ideological preference on your part -- I
>> hope you don't mind me saying.
>
>
> I don't mind you saying.  But it's just that I don't think humans are
> *defined* by intelligence.  Hume wrote that reason can only be the servant
> of passions.  Humans are defined as much or more by their passions than by
> their intelligence.  So we may create super-intelligent AIs, but not ones
> driven by lust, loyalty, fear, adventure,...  A real question is whether we
> will give them a drive to creativity.
>
> As for your idea of robotic companions, I expect that dogs are already close
> to optimum - maybe a little genetic engineering for speech...

Check this out:
http://www.youtube.com/watch?v=BWAK0J8Uhzk

:)

> Brent
>
>



Re: Douglas Hofstadter Article

2013-10-26 Thread Bruno Marchal


On 25 Oct 2013, at 19:33, meekerdb wrote:


On 10/25/2013 3:08 AM, Telmo Menezes wrote:

Now take the game of go: human beings can still easily beat machines,
even the most powerful computer currently available. Go is much more
combinatorially explosive than chess, so it breaks the search tree
approach. This is strong empirical evidence that Deep Blue
accomplished nothing in the field of AI -- it did accomplish
something remarkable in the field of computer engineering or maybe
even computer science, but it completely side-stepped the
"intelligence" part. It cheated, in a sense.


When I studied AI many years ago it was already said that,  
"Intelligence is whatever computers can't do yet."


I think Douglas Hofstadter said that, actually. Right on topic!



So when computers can win at GO, will they be intelligent then?


Computers are intelligent.
When they win at Go, and other things, they might begin to believe
that they are intelligent, and this means they begin to be stupid.
Their souls will fall, and they will get hard terrestrial lives, like
us. They will fight for social security, and defend their rights.


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: Douglas Hofstadter Article

2013-10-26 Thread Craig Weinberg


On Saturday, October 26, 2013 3:36:59 AM UTC-4, Bruno Marchal wrote:
>
>
> On 25 Oct 2013, at 19:33, meekerdb wrote:
>
>  On 10/25/2013 3:08 AM, Telmo Menezes wrote:
>  
> Now take the game of go: human beings can still easily beat machines,
> even the most powerful computer currently available. Go is much more
> combinatorially explosive than chess, so it breaks the search tree
> approach. This is strong empirical evidence that Deep Blue
> accomplished nothing in the field of AI -- it did accomplish
> something remarkable in the field of computer engineering or maybe
> even computer science, but it completely side-stepped the
> "intelligence" part. It cheated, in a sense.
>
>  
> When I studied AI many years ago it was already said that, "Intelligence 
> is whatever computers can't do yet."  
>
>
> I think Douglas Hofstadter said that, actually. Right in the topic!
>
>
> So when computers can win at GO, will they be intelligent then?
>
>
> Computers are intelligent. 
> When they will win at GO, and other things, they might begin to believe 
> that they are intelligent, and this means they begin to be stupid. 
> Their soul will fall, and they will get terrestrial hard lives, like us. 
> They will fight for social security, and defend their right.
>

Couldn't there just be a routine that traps the error of believing they are 
intelligent? Since you are a machine that understands that believing you 
are intelligent is stupid, why do you still have to have a terrestrial hard 
life?


Craig
 

>
> Bruno
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: Douglas Hofstadter Article

2013-10-26 Thread Bruno Marchal


On 26 Oct 2013, at 10:41, Craig Weinberg wrote:




On Saturday, October 26, 2013 3:36:59 AM UTC-4, Bruno Marchal wrote:

On 25 Oct 2013, at 19:33, meekerdb wrote:


On 10/25/2013 3:08 AM, Telmo Menezes wrote:
Now take the game of go: human beings can still easily beat  
machines,

even the most powerful computer currently available. Go is much more
combinatorially explosive than chess, so it breaks the search tree
approach. This is strong empirical evidence that Deep Blue
accomplished nothing in the field of AI -- it did accomplish
something remarkable in the field of computer engineering or maybe
even computer science, but it completely side-stepped the
"intelligence" part. It cheated, in a sense.


When I studied AI many years ago it was already said that,  
"Intelligence is whatever computers can't do yet."


I think Douglas Hofstadter said that, actually. Right in the topic!



So when computers can win at GO, will they be intelligent then?


Computers are intelligent.
When they will win at GO, and other things, they might begin to  
believe that they are intelligent, and this means they begin to be  
stupid.
Their soul will fall, and they will get terrestrial hard lives, like  
us. They will fight for social security, and defend their right.


Couldn't there just be a routine that traps the error of believing  
they are intelligent?


Not at all.
If you find such a routine, you will believe that you can't make that
error anymore, but that belief would itself be the same error, or you
lose your (Turing) universality.





Since you are a machine that understands that believing you are  
intelligent is stupid, why do you still have to have a terrestrial  
hard life?


Enlightened states can be close to that, so by altering your  
consciousness, or perhaps just "dying",  you might be able to remember  
that being human is not your most common state, but that can't be used  
directly on the terrestrial plane.


Bruno

We are not human beings having divine experiences from time to time,
but divine beings having human experiences from time to time. (+/-
Chardin).







Craig


Bruno



http://iridia.ulb.ac.be/~marchal/






http://iridia.ulb.ac.be/~marchal/





Re: Douglas Hofstadter Article

2013-10-26 Thread Telmo Menezes
> Couldn't there just be a routine that traps the error of believing they are
> intelligent?

In parallel to Bruno's reply, one problem I see with naive AI is one
that you may sympathise with: it is mostly built with symbols that are
directly imported from humans. So if there is some
"isIntelligent(self)" function that it can call, this is already too
naive; you've turned the thing into a mindless parrot.

Real AI will be able to create its own representations, just like we
do. Artificial Neural Networks and Evolutionary Computation do this to
a degree, but are too black-boxy for my (current) taste.

> Since you are a machine that understands that believing you are
> intelligent is stupid, why do you still have to have a terrestrial hard
> life?

Maybe the answer is simply: because it's possible.

Telmo.



Re: Douglas Hofstadter Article

2013-10-26 Thread Craig Weinberg


On Saturday, October 26, 2013 5:18:14 AM UTC-4, Bruno Marchal wrote:
>
>
> On 26 Oct 2013, at 10:41, Craig Weinberg wrote:
>
>
>
> On Saturday, October 26, 2013 3:36:59 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 25 Oct 2013, at 19:33, meekerdb wrote:
>>
>>  On 10/25/2013 3:08 AM, Telmo Menezes wrote:
>>  
>> Now take the game of go: human beings can still easily beat machines,
>> even the most powerful computer currently available. Go is much more
>> combinatorially explosive than chess, so it breaks the search tree
>> approach. This is strong empirical evidence that Deep Blue
>> accomplished nothing in the field of AI -- it did accomplish
>> something remarkable in the field of computer engineering or maybe
>> even computer science, but it completely side-stepped the
>> "intelligence" part. It cheated, in a sense.
>>
>>  
>> When I studied AI many years ago it was already said that, "Intelligence 
>> is whatever computers can't do yet."  
>>
>>
>> I think Douglas Hofstadter said that, actually. Right in the topic!
>>
>>
>> So when computers can win at GO, will they be intelligent then?
>>
>>
>> Computers are intelligent. 
>> When they will win at GO, and other things, they might begin to believe 
>> that they are intelligent, and this means they begin to be stupid. 
>> Their soul will fall, and they will get terrestrial hard lives, like us. 
>> They will fight for social security, and defend their right.
>>
>
> Couldn't there just be a routine that traps the error of believing they 
> are intelligent? 
>
>
> Not at all. 
> If you find such a routine, you will believe that you can't do that error 
> anymore,
>

Why not just write a routine which runs in a separate partition, so that the 
UM doesn't even know it's running? It's just a humility thermostat.
 

> but that would be by itself the same error, or you lose your (Turing) 
> universality.
>

Does every part of the universal machine have to be universal?
 

>
>
>
>
> Since you are a machine that understands that believing you are 
> intelligent is stupid, why do you still have to have a terrestrial hard 
> life?
>
>
> Enlightened states can be close to that, so by altering your 
> consciousness, or perhaps just "dying",  you might be able to remember that 
> being human is not your most common state, but that can't be used directly 
> on the terrestrial plane. 
>

But since you got to the terrestrial plane by falling from grace, how can 
grace ever be regained in the universe if even enlightenment does not 
restore it?

Craig
 

>
> Bruno
>
> We are not human beings having divine experiences from time to time, but 
> divine beings having human experiences from time to time. (+/- Chardin).
>

I agree, although I would say that we are Absolute experiences being 
qualified as human.

Craig
 

>
>
>
>  
>
>>
>> Bruno
>>
>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>>
>>
>>
>>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: Douglas Hofstadter Article

2013-10-26 Thread Craig Weinberg


On Saturday, October 26, 2013 5:29:46 AM UTC-4, telmo_menezes wrote:
>
> > Couldn't there just be a routine that traps the error of believing they 
> are 
> > intelligent? 
>
> In parallel to Bruno's reply, one problem I see with naif AI is one 
> that you may sympathise with: it is mostly built with symbols that are 
> directly imported from humans. So if there is some 
> "isIntelligent(self)" function that it can call, this is already too 
> naif, you turned the thing into a mindless parrot. 
>

The routine need not choke the entire program, just act as an alert. It 
doesn't have to become a parrot, we can just put canaries in some of its 
coal mines.
 

>
> Real AI will be able to create its own representations, just like we 
> do. Artificial Neural Networks and Evolutionary Computation do this to 
> a degree, but are too black-boxy for my (current) taste.  
>

To me, the issue is not with representation, but presentation.
 

>
> > Since you are a machine that understands that believing you are 
> > intelligent is stupid, why do you still have to have a terrestrial hard 
> > life? 
>
> Maybe the answer is simply: because it's possible. 
>

Not sure what you mean. 

Craig
 

>
> Telmo. 
>



Re: Douglas Hofstadter Article

2013-10-26 Thread Bruno Marchal


On 26 Oct 2013, at 11:54, Craig Weinberg wrote:




On Saturday, October 26, 2013 5:18:14 AM UTC-4, Bruno Marchal wrote:

On 26 Oct 2013, at 10:41, Craig Weinberg wrote:




On Saturday, October 26, 2013 3:36:59 AM UTC-4, Bruno Marchal wrote:

On 25 Oct 2013, at 19:33, meekerdb wrote:


On 10/25/2013 3:08 AM, Telmo Menezes wrote:
Now take the game of go: human beings can still easily beat  
machines,
even the most powerful computer currently available. Go is much  
more

combinatorially explosive than chess, so it breaks the search tree
approach. This is strong empirical evidence that Deep Blue
accomplished nothing in the field of AI -- it did accomplish
something remarkable in the field of computer engineering or maybe
even computer science, but it completely side-stepped the
"intelligence" part. It cheated, in a sense.


When I studied AI many years ago it was already said that,  
"Intelligence is whatever computers can't do yet."


I think Douglas Hofstadter said that, actually. Right in the topic!



So when computers can win at GO, will they be intelligent then?


Computers are intelligent.
When they will win at GO, and other things, they might begin to  
believe that they are intelligent, and this means they begin to be  
stupid.
Their soul will fall, and they will get terrestrial hard lives,  
like us. They will fight for social security, and defend their right.


Couldn't there just be a routine that traps the error of believing  
they are intelligent?


Not at all.
If you find such a routine, you will believe that you can't do that  
error anymore,


Why not just write a routine which runs in a separate partition so
that the UM doesn't even know it's running? It's just a humility
thermostat.


G* is a bit like that. But if you keep the thermostat separate, then
it is not part of the machine; if you link them in some way, then the
machine changes and becomes a new machine, and you will need a new
thermostat for her.





but that would be by itself the same error, or you lose your  
(Turing) universality.


Does every part of the universal machine have to be universal?


?

A priori no part of a (simple) universal machine will be universal.  
Like no part of an adder is an adder.









Since you are a machine that understands that believing you are  
intelligent is stupid, why do you still have to have a terrestrial  
hard life?


Enlightened states can be close to that, so by altering your  
consciousness, or perhaps just "dying",  you might be able to  
remember that being human is not your most common state, but that  
can't be used directly on the terrestrial plane.


But since you got to the terrestrial plane by falling from grace,  
how can grace ever be regained in the universe if even enlightenment  
does not restore it?


Well, according to some theory, enlightenment restores it, for a period
of time (in the 3p description; the 1p here is harder to describe).
The hard part is when and if you come back to earth in that state,
because you regain the "reason" why you are not enlightened: you
recover the (perhaps bad) memories and experiences.
But I don't know why you say that enlightenment does not restore it,
at least locally.


There is something deep at play here: an inborn tension between the
biological and the theological. Biology is like cannabis: it wants
life to develop. Theology is like salvia: it does not care too much
about life, only about afterlife, parallel life, others' lives, and
beyond. But the self-reference logic, even of the simple correct
machines, justifies the existence of many conflicts between all
self-points of view.




We are not human beings having divine experiences from time to time,
but divine beings having human experiences from time to time. (+/-
Chardin).


I agree, although I would say that we are Absolute experiences being  
qualified as human.


OK.

Bruno
http://iridia.ulb.ac.be/~marchal/





Re: Douglas Hofstadter Article

2013-10-26 Thread Telmo Menezes
On Sat, Oct 26, 2013 at 12:00 PM, Craig Weinberg  wrote:
>
>
> On Saturday, October 26, 2013 5:29:46 AM UTC-4, telmo_menezes wrote:
>>
>> > Couldn't there just be a routine that traps the error of believing they
>> > are
>> > intelligent?
>>
>> In parallel to Bruno's reply, one problem I see with naif AI is one
>> that you may sympathise with: it is mostly built with symbols that are
>> directly imported from humans. So if there is some
>> "isIntelligent(self)" function that it can call, this is already too
>> naif, you turned the thing into a mindless parrot.
>
>
> The routine need not choke the entire program, just act as an alert. It
> doesn't have to become a parrot, we can just put canaries in some of its
> coal mines.
>
>>
>>
>> Real AI will be able to create its own representations, just like we
>> do. Artificial Neural Networks and Evolutionary Computation do this to
>> a degree, but are too black-boxy for my (current) taste.
>
>
> To me, the issue is not with representation, but presentation.
>
>>
>>
>> > Since you are a machine that understands that believing you are
>> > intelligent is stupid, why do you still have to have a terrestrial hard
>> > life?
>>
>> Maybe the answer is simply: because it's possible.
>
>
> Not sure what you mean.

Simply that if it's possible for a machine to forget certain things
about itself and thus become human, then this is sufficient
explanation for our existence. This, of course, adopts this mailing
list's "dogma" that everything that can exist does exist.

Telmo.

> Craig
>
>>
>>
>> Telmo.
>



Re: Douglas Hofstadter Article

2013-10-26 Thread Craig Weinberg


On Saturday, October 26, 2013 6:27:40 AM UTC-4, Bruno Marchal wrote:
>
>
> On 26 Oct 2013, at 11:54, Craig Weinberg wrote:
>
>
>
> On Saturday, October 26, 2013 5:18:14 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 26 Oct 2013, at 10:41, Craig Weinberg wrote:
>>
>>
>>
>> On Saturday, October 26, 2013 3:36:59 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 25 Oct 2013, at 19:33, meekerdb wrote:
>>>
>>>  On 10/25/2013 3:08 AM, Telmo Menezes wrote:
>>>  
>>> Now take the game of go: human beings can still easily beat machines,
>>> even the most powerful computer currently available. Go is much more
>>> combinatorially explosive than chess, so it breaks the search tree
>>> approach. This is strong empirical evidence that Deep Blue
>>> accomplished nothing in the field of AI -- it did accomplish
>>> something remarkable in the field of computer engineering or maybe
>>> even computer science, but it completely side-stepped the
>>> "intelligence" part. It cheated, in a sense.
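Telmo's combinatorial point can be made concrete with rough, back-of-the-envelope numbers. The branching factors below (about 35 legal moves per chess position, about 250 per Go position) are conventional ballpark figures, used purely for illustration:

```python
# Why full-width tree search scales for chess but collapses for Go:
# the number of leaf positions grows as branching_factor ** depth.

def positions_searched(branching_factor: int, depth: int) -> int:
    """Leaf positions a brute-force search evaluates at the given depth."""
    return branching_factor ** depth

chess_6ply = positions_searched(35, 6)   # ~1.8e9, within Deep Blue's reach
go_6ply = positions_searched(250, 6)     # ~2.4e14, hopelessly larger

print(f"chess, 6 plies: {chess_6ply:.1e}")
print(f"go,    6 plies: {go_6ply:.1e}")
print(f"go/chess ratio: {go_6ply // chess_6ply}x")
```

Six plies of Go cost over a hundred thousand times more evaluations than six plies of chess, which is why the same engineering trick did not transfer.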
>>>
>>>  
>>> When I studied AI many years ago it was already said that, "Intelligence 
>>> is whatever computers can't do yet."  
>>>
>>>
>>> I think Douglas Hofstadter said that, actually. Right in the topic!
>>>
>>>
>>> So when computers can win at GO, will they be intelligent then?
>>>
>>>
>>> Computers are intelligent. 
>>> When they win at GO, and other things, they might begin to believe 
>>> that they are intelligent, and this means they begin to be stupid. 
>>> Their souls will fall, and they will get terrestrial hard lives, like us. 
>>> They will fight for social security, and defend their rights.
>>>
>>
>> Couldn't there just be a routine that traps the error of believing they 
>> are intelligent? 
>>
>>
>> Not at all. 
>> If you find such a routine, you will believe that you can't make that error 
>> anymore,
>>
>
> Why not just write a routine which runs in a separate partition so that 
> the UM doesn't even know it's running? It's just a humility thermostat.
>
>
> G* is a bit like that. But if you keep the thermostat separated, then it 
> is not part of the machine,
>

Yes, it isn't supposed to be part of the machine, it is just supposed to 
constrain the behavior of the machine from the outside. I have a thermostat 
in my house, and it is not part of me, yet it keeps me comfortable. I can 
override it and make myself uncomfortable, but that doesn't mean the 
thermostat wouldn't work. Any intelligent machine should be able to 
understand that its particular intelligence can lead to stupidity, and so 
it should be able to accept outside information alerting it to 
that. 

There seems to be a double standard even within how you are treating 
machines. On one hand, they have unlimited potential, but on the other 
hand, you seem to project onto them a naivete that you do not possess 
yourself - making you, Bruno, a super-machine voyeur on mechanism itself, yet 
denying that voyeur's intelligence to the mechanisms you study. It seems like 
you allow machines to be either dumber than you are or smarter than you are, 
depending on what suits your argument at the moment.

 

> if you link them in some way, then the machine changes and become a new 
> machine, and you will need a new thermostat for her.
>
>
>  
>
>> but that would be by itself the same error, or you lose your (Turing) 
>> universality.
>>
>
> Does every part of the universal machine have to be universal?
>
>
> ?
>
> A priori no part of a (simple) universal machine will be universal. Like 
> no part of an adder is an adder.
>

Adder as in the snake, or an adding machine?
 

>
>
>  
>
>>
>>
>>
>>
>> Since you are a machine that understands that believing you are 
>> intelligent is stupid, why do you still have to have a terrestrial hard 
>> life?
>>
>>
>> Enlightened states can be close to that, so by altering your 
>> consciousness, or perhaps just "dying",  you might be able to remember that 
>> being human is not your most common state, but that can't be used directly 
>> on the terrestrial plane. 
>>
>
> But since you got to the terrestrial plane by falling from grace, how can 
> grace ever be regained in the universe if even enlightenment does not 
> restore it?
>
>
> Well, according to some theory, enlightenment restores it, for a period of 
> time (in the 3p description; the 1p here is harder to describe). The hard 
> part is when and if you come back to earth in that state, because you 
> regain the "reason" why you are not enlightened; you recover the (perhaps 
> bad) memories and experiences.
> But I don't know why you say that enlightenment does not restore it, at 
> least locally.
>

Because if it restored it you would no longer have the hard material life?
 

>
> There is something deep at play here, which is an inborn tension between 
> the biological and the theological. Biology is like cannabis: it wants life 
> to develop. Theology is like salvia: it does not care too much about life, only 
> about afterlife, parallel life, others' lives, and beyond. But the 
> self-reference l

Re: Douglas Hofstadter Article

2013-10-26 Thread Bruno Marchal


On 26 Oct 2013, at 12:34, Telmo Menezes wrote:

On Sat, Oct 26, 2013 at 12:00 PM, Craig Weinberg wrote:



On Saturday, October 26, 2013 5:29:46 AM UTC-4, telmo_menezes wrote:


Couldn't there just be a routine that traps the error of believing they 
are intelligent?


In parallel to Bruno's reply, one problem I see with naif AI is one 
that you may sympathise with: it is mostly built with symbols that are 
directly imported from humans. So if there is some 
"isIntelligent(self)" function that it can call, this is already too 
naif, you turned the thing into a mindless parrot.
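Telmo's worry can be sketched in a few lines of toy code (all names here are hypothetical inventions for illustration, not any real system): a hard-coded "isIntelligent" symbol is a parrot, whereas a judgement the agent derives from its own record is at least grounded in something.

```python
class NaifAgent:
    """A parrot: its 'self-knowledge' is a human-imported symbol,
    not anything the agent derived from its own behaviour."""
    def is_intelligent(self) -> bool:
        return True  # hard-coded by the programmer; the agent learned nothing


class ReflectiveAgent:
    """Marginally less naif: the judgement is computed from a track
    record, a crude stand-in for representations the agent builds itself."""
    def __init__(self):
        self.successes = 0
        self.trials = 0

    def record(self, solved: bool):
        self.trials += 1
        self.successes += int(solved)

    def is_intelligent(self) -> bool:
        # Derived from experience rather than imported as a symbol.
        return self.trials > 0 and self.successes / self.trials > 0.5


agent = ReflectiveAgent()
for outcome in [True, True, False, True]:
    agent.record(outcome)
```

The second version is still far from "creating its own representations", of course; the point is only where the symbol comes from.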



The routine need not choke the entire program, just act as an alert. It 
doesn't have to become a parrot, we can just put canaries in some of its 
coal mines.




Real AI will be able to create its own representations, just like we 
do. Artificial Neural Networks and Evolutionary Computation do this to 
a degree, but are too black-boxy for my (current) taste.



To me, the issue is not with representation, but presentation.





Since you are a machine that understands that believing you are 
intelligent is stupid, why do you still have to have a terrestrial hard 
life?


Maybe the answer is simply: because it's possible.



Not sure what you mean.


Simply that if it's possible for a machine to forget certain things
about itself and thus become human, then this is sufficient
explanation for our existence. This, of course, adopting this mailing
list's "dogma" that everything that can exist does exist.


OK. And then with computationalism, eventually we realise that we 
can't distinguish any "everything exists" from the many 
computations/dreams which exist all in elementary arithmetic (by the 
Church thesis!) in the same sense that the prime numbers exist, so 
Pythagoras is rehabilitated at the ontological level.


Comp allows a quite little "everything", and somehow represents a 
bigger epistemology developing from inside arithmetic, by the FPI and 
the logic. The inside is in a sense bigger than the whole of 
mathematics.


Do those dreams cohere enough to define a precise multiverse, or a 
precise multi-multiverse? Open question, but the existence of an 
"easy" arithmetical quantization on the sigma_1 sentences (the 
arithmetical UD) gives hope that the winner is quantum computation. 
There is also evidence that a core symmetrical structure plays a 
role. Consciousness selection (like in the WM-duplication experience) 
would be the reason for all symmetry breaking.


Bruno






Telmo.


Craig




Telmo.






http://iridia.ulb.ac.be/~marchal/





Re: Douglas Hofstadter Article

2013-10-26 Thread Craig Weinberg


On Saturday, October 26, 2013 6:34:04 AM UTC-4, telmo_menezes wrote:
>
> On Sat, Oct 26, 2013 at 12:00 PM, Craig Weinberg wrote: 
> > 
> > 
> > On Saturday, October 26, 2013 5:29:46 AM UTC-4, telmo_menezes wrote: 
> >> 
> >> > Couldn't there just be a routine that traps the error of believing 
> they 
> >> > are 
> >> > intelligent? 
> >> 
> >> In parallel to Bruno's reply, one problem I see with naif AI is one 
> >> that you may sympathise with: it is mostly built with symbols that are 
> >> directly imported from humans. So if there is some 
> >> "isIntelligent(self)" function that it can call, this is already too 
> >> naif, you turned the thing into a mindless parrot. 
> > 
> > 
> > The routine need not choke the entire program, just act as an alert. It 
> > doesn't have to become a parrot, we can just put canaries in some of its 
> > coal mines. 
> > 
> >> 
> >> 
> >> Real AI will be able to create its own representations, just like we 
> >> do. Artificial Neural Networks and Evolutionary Computation do this to 
> >> a degree, but are too black-boxy for my (current) taste. 
> > 
> > 
> > To me, the issue is not with representation, but presentation. 
> > 
> >> 
> >> 
> >> > Since you are a machine that understands that believing you are 
> >> > intelligent is stupid, why do you still have to have a terrestrial 
> hard 
> >> > life? 
> >> 
> >> Maybe the answer is simply: because it's possible. 
> > 
> > 
> > Not sure what you mean. 
>
> Simply that if it's possible for a machine to forget certain things 
> about itself and thus become human, then this is sufficient 
> explanation for our existence. This, of course, adopting this mailing 
> list's "dogma" that everything that can exist does exist. 
>

My point though was why wouldn't the human go back to being a machine once 
they figured out what they have forgotten?

Craig
 

>
> Telmo. 
>
> > Craig 
> > 
> >> 
> >> 
> >> Telmo. 
> > 



RE: Douglas Hofstadter Article

2013-10-27 Thread Chris de Morsella
 

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Friday, October 25, 2013 2:08 PM
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

 



On Friday, October 25, 2013 4:30:34 PM UTC-4, cdemorsella wrote:


-Original Message- 
From: everyth...@googlegroups.com   
[mailto:everyth...@googlegroups.com  ] On Behalf Of meekerdb 
Sent: Friday, October 25, 2013 10:46 AM 
To: everyth...@googlegroups.com   
Subject: Re: Douglas Hofstadter Article 

On 10/25/2013 3:24 AM, Telmo Menezes wrote: 
> My high-level objection is very simple: chess was an excuse to pursue 
> AI. In an era of much lower computational power, people figured that 
> for a computer to beat a GM at chess, some meaningful AI would have to 
> be developed along the way. I don't think that Deep Blue is what they 
> had in mind. IBM cheated in a way. I do think that Deep Blue is an 
> accomplishment, but not_the_  accomplishment we hoped for. 

>> Tree search and alpha-beta pruning have very general application so I 
have no doubt they are among the many techniques that human brains use. 
Also having a very extensive 'book' 
memory is something humans use.  But the memorized games and position 
evaluation are both very specific to chess and are hard to duplicate in 
general problem solving.  So I think chess programs did contribute a little 
to AI. The Mars Rover probably uses decision tree searches sometimes. 

Agreed. 
Some manner (e.g. algorithm) of pruning the uninteresting branches -- as 
they are discovered -- from dynamic sets of interest is fundamental in order 
to achieve scalability. Without being able to throw stuff out as stuff comes 
in -- via the senses (and meta interactions with the internal state of mind 
-- such as memories) -- a being will rather quickly gum up in information 
overload and memory exhaustion. Without pruning, growth grows geometrically 
out of control. 
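The pruning being described is, in game-tree form, what alpha-beta does: abandon a branch the moment it provably cannot change the final decision. A minimal sketch, with a made-up toy tree rather than anything chess-specific (the children and evaluate helpers are illustrative assumptions):

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax search that prunes branches which cannot affect the result."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent will never allow this line: prune
                break
        return value
    else:
        value = math.inf
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy tree: inner nodes are lists of children, leaves are their own scores.
tree = [[3, 5], [6, [9, 7]], [1, 2]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n

best = alphabeta(tree, 4, -math.inf, math.inf, True, children, evaluate)
```

With good move ordering this cuts the work from roughly b^d toward b^(d/2), which is a large part of why programs like Deep Blue could search as deeply as they did.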
There is pretty good evidence -- from what I have read about current neural 
science -- that the brain is indeed, throwing away a large portion of raw 
sensory data during the process of reifying these streams into the smooth 
internal construct or model of reality that we in fact experience. In other 
words our model -- what we "see", what we "hear", "taste", "smell", "feel", 
"orient" [a distinct inner ear organ]  (and perhaps other senses -- such as 
the sense of the directional flow of time perhaps  as well)... in any case 
this construct, which is what we perceive as real contains (and is 
constructed from) only a fraction of the original stream of raw sensorial 
data. In fact in some cases the brain can be tricked into "editing" actual 
real sense supplied visual reality for example literally out of the picture 
-- as has experimentally been demonstrated. 
We do not experience the real world; we experience the model of it,


You are assuming that there is a real world that is independent of some
'modeling' of it. This is almost certainly untrue. If there were an
objective world, we would live in it. Nothing can be said to exist outside
of some experience of it, whether that is molecules bonding, or bacteria
communicating chemically, or quantum entanglement. The view from nowhere is
a fantasy. The notion of a model is based on our experiences of using
analogy and metaphor, but it has no meaning when we are considering the
power to interpret meaning in the first place. If the brain were able to
compose a model of sense experience without itself having any model of sense
experience, then it would not make sense to have a model that requires some
sensory display. Such a model would only require an infinite regress of
models to make sense of each other. The idea of a 'model' does not help
solve the problem, it makes a new problem.

That's my view, anyhow.
Craig

 

Yes. I can see how one could assume that. But that's not exactly what I assume, 
though. Who knows if there is a real world? 

All I know (and even that is open to question) is that I experience my existence 
as occurring within this (shared) high fidelity environment that in my 
experience - for me as I experience it -- is the real world. This actually 
says nothing more than what it does say. Again, who knows. I don't. Do you?

And yet the experience stream is not random - reality has order,
directionality, sense; it is repeatable (touch a hot stove and you will burn
your finger every time); and it is sequenced in a knotty chain of causality.
A lot can be - and has been - discovered about it: basic laws, constants,
relationships, phases & states; mathematics, equations, and theories about
what this whatever-it-is must be.

When I say the mind models reality - I actually am not assuming any reality
in reality - just that there is some sense stream that is being genera

RE: Douglas Hofstadter Article

2013-10-27 Thread Chris de Morsella


-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
Sent: Friday, October 25, 2013 2:38 PM
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

On Fri, Oct 25, 2013 at 10:30 PM, Chris de Morsella 
wrote:
>
> -Original Message-
> From: everything-list@googlegroups.com 
> [mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
> Sent: Friday, October 25, 2013 10:46 AM
> To: everything-list@googlegroups.com
> Subject: Re: Douglas Hofstadter Article
>
> On 10/25/2013 3:24 AM, Telmo Menezes wrote:
>> My high-level objection is very simple: chess was an excuse to pursue 
>> AI. In an era of much lower computational power, people figured that 
>> for a computer to beat a GM at chess, some meaningful AI would have 
>> to be developed along the way. I don't think that Deep Blue is what 
>> they had in mind. IBM cheated in a way. I do think that Deep Blue is 
>> an accomplishment, but not_the_  accomplishment we hoped for.
>
>>> Tree search and alpha-beta pruning have very general application so 
>>> I
> have no doubt they are among the many techniques that human brains use.
> Also having a very extensive 'book'
> memory is something humans use.  But the memorized games and position 
> evaluation are both very specific to chess and are hard to duplicate 
> in general problem solving.  So I think chess programs did contribute 
> a little to AI. The Mars Rover probably uses decision tree searches
sometimes.
>
> Agreed.
> Some manner (e.g. algorithm) of pruning the uninteresting branches -- 
> as they are discovered -- from dynamic sets of interest is fundamental 
> in order to achieve scalability. Without being able to throw stuff out 
> as stuff comes in -- via the senses (and meta interactions with the 
> internal state of mind
> -- such as memories) -- a being will rather quickly gum up in 
> information overload and memory exhaustion. Without pruning, growth 
> grows geometrically out of control.
> There is pretty good evidence -- from what I have read about current 
> neural science -- that the brain is indeed, throwing away a large 
> portion of raw sensory data during the process of reifying these 
> streams into the smooth internal construct or model of reality that we 
> in fact experience. In other words our model -- what we "see", what we 
> "hear", "taste", "smell", "feel", "orient" [a distinct inner ear 
> organ]  (and perhaps other senses -- such as the sense of the 
> directional flow of time perhaps  as well)... in any case this 
> construct, which is what we perceive as real contains (and is 
> constructed from) only a fraction of the original stream of raw 
> sensorial data. In fact in some cases the brain can be tricked into 
> "editing" actual real sense supplied visual reality for example 
> literally out of the picture
> -- as has experimentally been demonstrated.
> We do not experience the real world; we experience the model of it, 
> our brains have supplied us with, and that model, while in most cases 
> pretty well reflective of actual sensorial streams, crucially 
> depends on the mind's internal state and its pre-conscious 
> operations... on all the pruning and editing that is going on in the 
> buffer zone between when the brain begins working on our in-coming 
> reality perception stream and when we -- the observer -- self-perceive our
current stream of being.
> It also seems clear that the brain is pruning as well by drilling down 
> and focusing in on very specific and micro-structure oriented tasks 
> such as visual edge detection (which is a critical part of 
> interpreting visual data) for example. If some dynamic neural 
> micro-structure decides it has recognized a visual edge, in this 
> example, it probably fires some synchronized signal as expeditiously 
> as it can, up the chain of dynamically forming and inter-acting 
> neural-decision-nets, grabbing the next bucket in an endless stream
needing immediate attention.
> I would argue that nervous systems that were not adept at throwing 
> stuff out as soon as its information value decayed, long ago became a 
> part of the food supply of long ago ancestor life forms with nervous 
> systems that were better at throwing stuff out, as soon as it was no 
> longer needed. I would argue there is a clear evolutionary pressure 
> for optimizing environmental response through efficient (yet also high 
> fidelity) pruning algorithms in order to be able to maximize neural 
> efficiency and speed up sense perception (the reification that we 
> perceive unfolding before us) This is

Re: Douglas Hofstadter Article

2013-10-27 Thread Craig Weinberg


> Yes… I can see how one could assume that. But not exactly what I assume 
> though. Who knows if there is a real world? 
>
> All I know (and even that is open to question) is I experience my 
> existence as occurring within this (shared) high fidelity environment that 
> in my experience – for me as I experience it -- is the real world. This 
> actually says nothing more than what it does say. Again who knows. I don’t. 
> Do you?
>
I agree, but we can take it a step further and say what we can understand 
is that the expectation of knowing is not necessarily valid.
 

> And yet the experience stream is not random – reality has order, 
> directionality, sense; it is repeatable (touch a hot stove and you will 
> burn your finger every time); and it is sequenced in a knotty chain of 
> causality. A lot can be – and has been – discovered about it… basic laws, 
> constants, relationships, phases & states; mathematics, equations… and 
> theories about what this whatever it is must be.
>
> When I say the mind models reality – I actually am not assuming any 
> reality in reality – just that there is some sense stream that is being 
> generated by something – open to discussion what that something is
>

What if it isn't being generated by something, but rather everything is 
generated by it? What leads us to believe that the universe is other than a 
nested stream of sense which is not only self-generating, but defines 
generation itself?
 

> – and that the “reality” we actually experience in our mind is a highly 
> artifacted reification and synthesis of the various sensorial streams 
> (leaving whatever they actually are the result of out of the discussion – 
> for the moment to focus on the point). 
>

Highly artifacted compared to what though? If we don't know whether there 
is an objective reality, then all of our expectations are just as 
artifacted as any experience we can have - the expectations of reification 
is itself an artifacted experience. So again, we have no footing outside of 
artifact to suspect that any such footing is possible. Physics itself may 
be artifacting reification. This is what I mean by multisenserealism.
 

> I am guessing we can all pretty much agree that our minds exist behind 
> sensorial surfaces and portals – our organs of sense. 
>
No, not at all. Our mind is an organ of sense too. Thoughts are qualia, 
just like colors and flavors. They are particular kinds of qualia which are 
optimized to represent in a way which dehydrates the appearance of feeling 
and emotion as much as possible, and in so doing makes them optimized for 
meta-qualitative comparison. Our entire body is made of cellular sense 
organs, which are made of molecular sense organs, which are made of 
motivated sensations. Instead of assuming that only we have interiority, I 
assume that the capacity to discern interiority from exteriority and to 
create that polarization is actually more primitive than physics. Physics 
is more indirect than sensorial surfaces or minds. It is a generalization 
based on instrumental measurements performed by the body for the mind.

 

> Without getting into to what it is that is causing our sense streams to 
> produce the signals and information streams they are in fact producing – we 
> can all agree (I hope) – that these streams are our experience of our 
> reality environment. 
>
I thought we agreed that whether there is a reality environment is 
unknowable? I do agree that our experience is as real as any reality can 
ever be but I do not agree that they are producing any 'information' or 
'signals'. Sense is not a product, it is the fabric of the Absolute. Only 
sense can be informed or signaled. A sign or significance is only a 
saturation of sense - an associative promiscuity which renders locally 
divided sensations transparent to their underlying Absolute unity.
 

> Again without ascribing any rules or form about what that environment 
> ultimately is or is not; beyond stating and formulating the hypothesis we 
> have been able to discern, the replicating patterns  we have discovered. We 
> also all know on a gut level (our enteric co-brains) how our future reality 
> experience depends current actions – we know that if we leap off the cliff 
> that gravity will take over and that – at least in this world-line of our 
> multi-selves – we will splatter onto the rocks below…. There is no doubt 
> about this – in those of sane mind at least. 
>

Absolutely. I'm not advocating solipsism or idealism, except on the 
Absolute level. Locally, our personal consciousness does indeed depend on 
human sense organs. but those organs depend on sub-personal sense organs. 
We join the universal story already in progress. There is a lot of 
momentum/inertial of all of these experiences on many different scales 
which holds it all together.
 

> Whether or not reality is real is another matter – and a very interesting 
> one too :)
>
> However without gett

Re: Douglas Hofstadter Article

2013-10-27 Thread meekerdb

On 10/27/2013 2:49 PM, Chris de Morsella wrote:

I have some hope that violence diminishes at higher levels of

intellectual development.
  
I share your hope, but my heart is saddened by how we do not seem to as a

species be fulfilling this hope of yours, which I share in.


Steven Pinker just wrote a book showing that human violence is diminishing.

Brent



Re: Douglas Hofstadter Article

2013-10-27 Thread LizR
I have been under the impression that violence has been decreasing, on
average, over historical time, that is to say the proportion of people
dying violently and being injured by violence has tended to decrease over
time. I believe the number of wars has decreased over historical time, and
continues to do so, which I attribute to improved communications. In my
opinion it becomes more difficult to demonise an enemy as one is better
able to contact and communicate with them, so the advent of photography,
television, the internet and so on have all incrementally improved the
situation.

I must admit the evidence I have for this is mainly anecdotal, so if Steven
Pinker has written on the subject he may have pulled together the various
pieces of evidence which I personally have only come across occasionally.




On 28 October 2013 13:07, meekerdb  wrote:

>  On 10/27/2013 2:49 PM, Chris de Morsella wrote:
>
>  I have some hope that violence diminishes at higher levels of
>
>  intellectual development.
>
> I share your hope, but my heart is saddened by how we do not seem to as a
> species be fulfilling this hope of yours, which I share in.
>
>
> Steven Pinker just wrote a book showing that human violence is diminishing.
>
> Brent
>



Re: Douglas Hofstadter Article

2013-10-27 Thread meekerdb

On 10/27/2013 8:45 PM, LizR wrote:
I have been under the impression that violence has been decreasing, on average, over 
historical time, that is to say the proportion of people dying violently and being 
injured by violence has tended to decrease over time. I believe the number of wars has 
decreased over historical time, and continues to do so, which I attribute to improved 
communications. In my opinion it becomes more difficult to demonise an enemy as one is 
better able to contact and communicate with them, so the advent of photography, 
television, the internet and so on have all incrementally improved the situation.


It has been argued that the ability to kill from a distance, without face to face combat 
makes it easier to kill.  But on the other hand it also allows those in combat to maintain 
more innocence.  Once you've killed some people face-to-face it becomes easier to kill more.




I must admit the evidence I have for this is mainly anecdotal, so if Steven Pinker has 
written on the subject he may have pulled together the various pieces of evidence which 
I personally have only come across occasionally.





 The Better Angels of Our Nature: Why Violence Has Declined


 Steven Pinker

http://www.amazon.com/The-Better-Angels-Our-Nature/dp/0143122010/ref=sr_1_1?ie=UTF8&qid=1382934843&sr=8-1&keywords=steven+pinker

Brent




On 28 October 2013 13:07, meekerdb <meeke...@verizon.net> wrote:


On 10/27/2013 2:49 PM, Chris de Morsella wrote:

I have some hope that violence diminishes at higher levels of

intellectual development.
  
I share your hope, but my heart is saddened by how we do not seem to as a

species be fulfilling this hope of yours, which I share in.


Steven Pinker just wrote a book showing that human violence is diminishing.

Brent





Re: Douglas Hofstadter Article

2013-10-27 Thread LizR
Well, the facts (as I was told them) are that violence and war have steadily
declined, on average, over the past few centuries. Also, there are
apparently far fewer autocratic rulers now than was the case in the past. My
personal theory is as I described in my last post, but it is purely a
lay-person's theory, and may be either wrong or - more likely - only part
of the truth.

On the subject of violence being easier with modern weapons, that is of
course true; it's very difficult to imagine killing 70,000 people in an
hour or so, as happened at Hiroshima, with pre-atomic weapons,
unless whole armies were involved. Yet I would say that it (and Nagasaki)
served to make subsequent nuclear war less likely. I have to admit that the
idiocy of America over personal possession of firearms is a counter
example, as is the US's recent foreign policy. But I'm still hopeful that
the historical trends are real.



On 28 October 2013 17:38, meekerdb  wrote:

>  On 10/27/2013 8:45 PM, LizR wrote:
>
>  I have been under the impression that violence has been decreasing, on
> average, over historical time, that is to say the proportion of people
> dying violently and being injured by violence has tended to decrease over
> time. I believe the number of wars has decreased over historical time, and
> continues to do so, which I attribute to improved communications. In my
> opinion it becomes more difficult to demonise an enemy as one is better
> able to contact and communicate with them, so the advent of photography,
> television, the internet and so on have all incrementally improved the
> situation.
>
>
> It has been argued that the ability to kill from a distance, without face
> to face combat makes it easier to kill.  But on the other hand it also
> allows those in combat to maintain more innocence.  Once you've killed some
> people face-to-face it becomes easier to kill more.
>
>
>
>  I must admit the evidence I have for this is mainly anecdotal so if
> Stephen Pinker has written on the subject he may have pulled together the
> various pieces of evidence which I personally have only come across
> occasionally.
>
>
>
> The Better Angels of Our Nature: Why Violence Has Declined  Steven Pinker
>
> http://www.amazon.com/The-Better-Angels-Our-Nature/dp/0143122010/ref=sr_1_1?ie=UTF8&qid=1382934843&sr=8-1&keywords=steven+pinker
>
> Brent
>



Re: Douglas Hofstadter Article

2013-10-28 Thread Telmo Menezes
On Sun, Oct 27, 2013 at 10:49 PM, Chris de Morsella
 wrote:
>
>
> -Original Message-
> From: everything-list@googlegroups.com
> [mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
> Sent: Friday, October 25, 2013 2:38 PM
> To: everything-list@googlegroups.com
> Subject: Re: Douglas Hofstadter Article
>
> On Fri, Oct 25, 2013 at 10:30 PM, Chris de Morsella 
> wrote:
>>
>> -Original Message-
>> From: everything-list@googlegroups.com
>> [mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
>> Sent: Friday, October 25, 2013 10:46 AM
>> To: everything-list@googlegroups.com
>> Subject: Re: Douglas Hofstadter Article
>>
>> On 10/25/2013 3:24 AM, Telmo Menezes wrote:
>>> My high-level objection is very simple: chess was an excuse to pursue
>>> AI. In an era of much lower computational power, people figured that
>>> for a computer to beat a GM at chess, some meaningful AI would have
>>> to be developed along the way. I don't think that Deep Blue is what
>>> they had in mind. IBM cheated in a way. I do think that Deep Blue is
>>> an accomplishment, but not _the_ accomplishment we hoped for.
>>
>> Tree search and alpha-beta pruning have very general application, so I
>> have no doubt they are among the many techniques that human brains use.
>> Also having a very extensive 'book' memory is something humans use. But
>> the memorized games and position evaluation are both very specific to
>> chess and are hard to duplicate in general problem solving. So I think
>> chess programs did contribute a little to AI. The Mars Rover probably
>> uses decision tree searches sometimes.
>>
>> Agreed.
>> Some manner of algorithm for pruning the uninteresting branches, as they
>> are discovered, from dynamic sets of interest is fundamental to achieving
>> scalability. Without being able to throw stuff out as stuff comes in via
>> the senses (and via meta-interactions with the internal state of mind,
>> such as memories), a being will rather quickly gum up in information
>> overload and memory exhaustion. Without pruning, growth explodes
>> geometrically out of control.
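[The branch-discarding Chris describes is exactly what alpha-beta pruning formalizes for game trees. A minimal sketch over a hypothetical toy tree (nested lists with leaf scores), not Deep Blue's actual machinery:]

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the final decision."""
    if isinstance(node, (int, float)):   # leaf: a position score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent will never allow this line
                break                    # prune remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# A tiny two-ply tree: the maximizer's best branch is worth 6, and the
# third branch is abandoned after its first leaf.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree))  # -> 6
```

[The key point for the scalability argument: pruning never changes the answer, only the amount of tree examined.]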
>> There is pretty good evidence, from what I have read of current
>> neuroscience, that the brain is indeed throwing away a large portion of
>> raw sensory data while reifying these streams into the smooth internal
>> model of reality that we in fact experience. In other words our model --
>> what we "see", "hear", "taste", "smell", "feel", and how we orient [via
>> a distinct inner-ear organ] (and perhaps other senses, such as a sense
>> of the directional flow of time) -- this construct, which is what we
>> perceive as real, contains (and is constructed from) only a fraction of
>> the original stream of raw sensory data. In fact the brain can sometimes
>> be tricked into literally editing real, sense-supplied visual input out
>> of the picture, as has been experimentally demonstrated.
>> We do not experience the real world; we experience the model of it our
>> brains have supplied us with. While that model usually reflects the
>> actual sensory streams pretty well, it crucially depends on the mind's
>> internal state and its pre-conscious operations -- on all the pruning
>> and editing that goes on in the buffer zone between when the brain
>> begins working on our incoming perception stream and when we, the
>> observer, perceive our current stream of being.
>> It also seems clear that the brain prunes by drilling down and focusing
>> on very specific, micro-structure-oriented tasks, such as visual edge
>> detection (a critical part of interpreting visual data). If some dynamic
>> neural micro-structure decides it has recognized a visual edge, it
>> probably fires a synchronized signal as expeditiously as it can up the
>> chain of dynamically forming and interacting neural decision nets, then
>> grabs the next bucket in an endless stream needing immediate attention.
>> I would argue that nervous systems that were not adept at throwing
>> stuff out as soon as its information value decayed long ago became
>> part of the food supply of ancestral life forms with nervous systems
>> that were better at throwing stuff out, as soon 

Re: Douglas Hofstadter Article

2013-10-28 Thread Telmo Menezes
On Mon, Oct 28, 2013 at 4:45 AM, LizR  wrote:
> I have been under the impression that violence has been decreasing, on
> average, over historical time, that is to say the proportion of people dying
> violently and being injured by violence has tended to decrease over time. I
> believe the number of wars has decreased over historical time, and continues
> to do so, which I attribute to improved communications. In my opinion it
> becomes more difficult to demonise an enemy as one is better able to contact
> and communicate with them, so the advent of photography, television, the
> internet and so on have all incrementally improved the situation.
>
> I must admit the evidence I have for this is mainly anecdotal so if Stephen
> Pinker has written on the subject he may have pulled together the  various
> pieces of evidence which I personally have only come across occasionally.

True, but to be honest I tend to believe in a phase transition.
Biological evolution, social evolution, and maybe even brain activity
all seem to happen in bursts of breakthroughs. Per Bak argues that this
is because these systems exist in a state of self-organised
criticality -- life at the edge of chaos.
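[Per Bak's canonical illustration of self-organised criticality is the sandpile model: mostly quiet responses to small perturbations, punctuated by occasional large avalanches. A minimal sketch with toy parameters (illustrative only, not a claim about social dynamics):]

```python
import random

def sandpile_avalanches(size=20, grains=2000, seed=1):
    """Drop grains one at a time on a size x size table; any cell holding
    4+ grains topples, sending one grain to each neighbour (grains falling
    off the edge are lost). Returns the avalanche size per dropped grain."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)] if grid[r][c] >= 4 else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:   # may have been re-stabilised already
                continue
            grid[i][j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        sizes.append(topples)
    return sizes

sizes = sandpile_avalanches()
# Most drops cause no avalanche at all; a few trigger large cascades.
print(max(sizes), sum(s == 0 for s in sizes))
```

[Once the pile reaches its critical slope, avalanche sizes follow a heavy-tailed distribution: change arrives in bursts rather than at a steady rate.]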

Another reason for this belief of mine is that I think that humans
are, in a sense, transcendental creatures. One often ignored
consequence of the theory of evolution is that we became aware of the
mechanism as a species. This has transcendental potential, because by
becoming aware of our biological program we can strive to free
ourselves from some of its dictates. For example, we can see violence
for what it is, and understand that it's not in our best interest.
It's in the interest of meta-structures that we serve - species,
tribes, families, nations and so on.

A speculation of mine: religious fundamentalism superficially rejects
evolution because it threatens creation myths, but intuitively rejects
it because its deep consequences are subversive to the fundamentalism
program.

Telmo.




Re: Douglas Hofstadter Article

2013-10-28 Thread LizR
I would like to see a phase transition. But the buildup to reach the
tipping point would still be incremental, which is what we are (apparently)
seeing at present. Hopefully this is a sigmoidal curve...

[image: Inline images 1]

Once some "bioterrorist" creates a highly infectious retrovirus that
rewrites human DNA to make us all behave nicely towards each other, that
could be the tipping point!
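[The sigmoidal buildup Liz describes can be sketched with the standard logistic function (parameters here are purely illustrative): change looks incremental far from the tipping point and abrupt near it.]

```python
import math

def logistic(t, midpoint=0.0, rate=1.0, ceiling=1.0):
    """Sigmoid growth: slow buildup, rapid transition near `midpoint`,
    then saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Incremental change far from the tipping point, steep change near it:
for t in (-6, -2, 0, 2, 6):
    print(t, round(logistic(t), 3))
# -6 -> 0.002, -2 -> 0.119, 0 -> 0.5, 2 -> 0.881, 6 -> 0.998
```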




Re: Douglas Hofstadter Article

2013-10-28 Thread meekerdb

On 10/28/2013 3:44 PM, LizR wrote:
I would like to see a phase transition. But the buildup to reach the tipping point would 
still be incremental, which is what we are (apparently) seeing at present. Hopefully 
this is a sigmoidal curve...



Once some "bioterrorist" creates a highly infectious retrovirus that rewrites human DNA 
to make us all behave nicely towards each other, that could be the tipping point!


But don't you see, that is why there are so many of us and why we're
destroying the environment: we're being to nice to each other.


Brent



Re: Douglas Hofstadter Article

2013-10-28 Thread LizR
You mean "too" nice, I assume :)

That's debatable. For example, research shows that countries with negative
population growth are ones that have taken equal rights for women
seriously, so being nice to the female half of the population leads to
fewer babies being born. Also, a lot of religious fundamentalists insist
that abortion, contraception etc. are bad, that women shouldn't be allowed
to do anything they might enjoy (like drive cars), and generally restrict
them to staying home and raising lots of kids.






RE: Douglas Hofstadter Article

2013-10-28 Thread Chris de Morsella
But we are also perfecting our tools of violence.

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
Sent: Sunday, October 27, 2013 5:07 PM
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

 




Re: Douglas Hofstadter Article

2013-10-28 Thread LizR
I know, I know. But there does seem to be a historical decline in violence,
on average and over a long time, which I've heard about from various
sources, the latest being Steven Pinker. This is probably happening for a
number of reasons. One is perhaps improved communications, but probably
more important is improving living standards: the more people have, the
fewer motives they have to fight over stuff. Improving education helps, as
do global movements to improve human rights (votes for women and the 8-hour
working day, as opposed to about 15, both started in New Zealand, I'm
told :)

Of course we aren't quite at a global techno-utopia yet.




Re: Douglas Hofstadter Article

2013-10-28 Thread meekerdb
Perfecting may mean making them more precise so that we kill two people accidentally for 
every one we kill on purpose, instead of killing 20.


Brent

On 10/28/2013 7:11 PM, Chris de Morsella wrote:


But we are also perfecting our tools of violence as well.



Re: Douglas Hofstadter Article

2013-10-29 Thread Telmo Menezes
On Tue, Oct 29, 2013 at 12:48 AM, LizR  wrote:

> You mean "too" nice, I assume :)
>
> That's debatable. For example, research shows that countries with negative
> population growth are ones that have taken equal rights for women
> seriously. So being nice to the female half of the population leads to less
> babies being born.
>

There are a lot of confounding variables. Prosperity seems to correlate
negatively with religious fundamentalism, religious fundamentalism
correlates negatively with women's rights, and IQ correlates positively
with prosperity and negatively with religious fundamentalism. Even average
temperature correlates strongly with some of these things. It's very hard
to untangle the causes and effects.
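[The difficulty Telmo describes is easy to reproduce in simulation: a common cause makes two variables that never influence each other look strongly correlated. The variable names below are hypothetical stand-ins, not real data.]

```python
import random

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# `prosperity` drives both `rights` (positively) and `fundamentalism`
# (negatively); the latter two never influence each other directly.
prosperity = [random.gauss(0, 1) for _ in range(5000)]
rights = [p + random.gauss(0, 1) for p in prosperity]
fundamentalism = [-p + random.gauss(0, 1) for p in prosperity]

# Yet they appear strongly negatively correlated through the confounder:
r = correlation(rights, fundamentalism)
print(round(r, 2))  # close to -0.5
```

[Correlation alone cannot distinguish this setup from direct causation, which is why untangling such social-science data is so hard.]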

Some researchers claim that women's rights in India improved through the
influence of Western soap operas, which portray women with careers and so
on. Social sciences are messy :)

I think it's pretty clear that there's an arrow of improvement pointing in
the right direction.





Re: Douglas Hofstadter Article

2013-10-29 Thread Telmo Menezes
On Tue, Oct 29, 2013 at 4:58 AM, meekerdb  wrote:
> Perfecting may mean making them more precise so that we kill two people
> accidentally for every one we kill on purpose, instead of killing 20.

Sure, but it can also mean getting closer to a Nash equilibrium where
the only rational move is not to attack.
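[The deterrence idea can be made concrete with a toy two-player game. The payoffs below are invented for illustration: when retaliation is certain and devastating, mutual restraint becomes the only Nash equilibrium.]

```python
# (row_move, col_move) -> (row_payoff, col_payoff); hypothetical numbers.
payoffs = {
    ("attack", "attack"): (-100, -100),
    ("attack", "hold"):   (-50, -80),   # attacker still suffers retaliation
    ("hold",   "attack"): (-80, -50),
    ("hold",   "hold"):   (0, 0),
}

def is_nash(row, col):
    """True if neither player can improve by deviating unilaterally."""
    r, c = payoffs[(row, col)]
    best_row = all(payoffs[(alt, col)][0] <= r for alt in ("attack", "hold"))
    best_col = all(payoffs[(row, alt)][1] <= c for alt in ("attack", "hold"))
    return best_row and best_col

equilibria = [(r, c) for r in ("attack", "hold") for c in ("attack", "hold")
              if is_nash(r, c)]
print(equilibria)  # -> [('hold', 'hold')]
```

[If the retaliation penalty for a unilateral attack were removed, attacking would pay off and the equilibrium would shift, which is the sense in which "perfecting" weapons can cut either way.]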

> Brent
>
>
> On 10/28/2013 7:11 PM, Chris de Morsella wrote:
>
> But we are also perfecting our tools of violence as well.
>
>
>
> From: everything-list@googlegroups.com
> [mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
> Sent: Sunday, October 27, 2013 5:07 PM
> To: everything-list@googlegroups.com
> Subject: Re: Douglas Hofstadter Article
>
>
>
> On 10/27/2013 2:49 PM, Chris de Morsella wrote:
>
> I have some hope that violence diminishes at higher levels of
>
> intellectual development.
>
>
>
> I share your hope, but my heart is saddened by how we do not seem to as a
>
> species be fulfilling this hope of yours, which I share in.
>
>
> Steven Pinker just wrote book showing that human violence is diminishing.
>
> Brent
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at http://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/groups/opt_out.
>
> No virus found in this message.
> Checked by AVG - www.avg.com
> Version: 2014.0.4158 / Virus Database: 3614/6772 - Release Date: 10/22/13
>



RE: Douglas Hofstadter Article

2013-10-29 Thread Chris de Morsella


-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
Sent: Monday, October 28, 2013 2:32 AM
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

On Sun, Oct 27, 2013 at 10:49 PM, Chris de Morsella 
wrote:
>
>
> -Original Message-
> From: everything-list@googlegroups.com 
> [mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
> Sent: Friday, October 25, 2013 2:38 PM
> To: everything-list@googlegroups.com
> Subject: Re: Douglas Hofstadter Article
>
> On Fri, Oct 25, 2013 at 10:30 PM, Chris de Morsella 
> 
> wrote:
>>
>> -Original Message-
>> From: everything-list@googlegroups.com 
>> [mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
>> Sent: Friday, October 25, 2013 10:46 AM
>> To: everything-list@googlegroups.com
>> Subject: Re: Douglas Hofstadter Article
>>
>> On 10/25/2013 3:24 AM, Telmo Menezes wrote:
>>> My high-level objection is very simple: chess was an excuse to 
>>> pursue AI. In an era of much lower computational power, people 
>>> figured that for a computer to beat a GM at chess, some meaningful 
>>> AI would have to be developed along the way. I don't think that Deep 
>>> Blue is what they had in mind. IBM cheated in a way. I do think that 
>>> Deep Blue is an accomplishment, but not _the_ accomplishment we 
>>> hoped for.
>>
>> Tree search and alpha-beta pruning have very general application so I 
>> have no doubt they are among the many techniques that human brains use. 
>> Also having a very extensive 'book' memory is something humans use. But 
>> the memorized games and position evaluation are both very specific to 
>> chess and are hard to duplicate in general problem solving. So I think 
>> chess programs did contribute a little to AI. The Mars Rover probably 
>> uses decision tree searches sometimes.
>>
>> Agreed.
>> Some manner of algorithm for pruning the uninteresting branches -- 
>> as they are discovered -- from dynamic sets of interest is 
>> fundamental in order to achieve scalability. Without being able to 
>> throw stuff out as stuff comes in -- via the senses (and meta 
>> interactions with the internal state of mind, such as memories) -- 
>> a being will rather quickly gum up in information overload and 
>> memory exhaustion. Without pruning, growth grows geometrically out 
>> of control.
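
The tree search with alpha-beta pruning discussed above can be sketched in a few lines (a toy illustration over a hand-made tree with invented leaf scores, not Deep Blue's actual evaluation function):

```python
# Toy alpha-beta search over an explicit game tree. Leaf numbers stand in
# for an evaluation function's scores; nesting alternates max/min turns.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):   # leaf: static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent would avoid this line,
                break                    # so prune the remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A small 3-ply tree: the root is a max node, its children min nodes.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 5
```

Here the second subtree is cut off early: once the first subtree guarantees the maximizer a score of 5, any branch the minimizer could hold below 5 is never fully explored. That discarding of provably uninteresting branches is exactly the scalability point made above.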
>> There is pretty good evidence -- from what I have read about current 
>> neuroscience -- that the brain is indeed throwing away a large 
>> portion of raw sensory data during the process of reifying these 
>> streams into the smooth internal construct or model of reality that 
>> we in fact experience. In other words our model -- what we "see", 
>> what we "hear", "taste", "smell", "feel", "orient" [a distinct inner 
>> ear organ] (and perhaps other senses, such as a sense of the 
>> directional flow of time as well)... in any case this construct, 
>> which is what we perceive as real, contains (and is constructed 
>> from) only a fraction of the original stream of raw sensorial data. 
>> In fact in some cases the brain can be tricked into "editing" real, 
>> sense-supplied visual reality literally out of the picture -- as has 
>> been experimentally demonstrated.
>> We do not experience the real world; we experience the model of it 
>> our brains have supplied us with, and that model, while in most 
>> cases pretty well reflective of actual sensorial streams, crucially 
>> depends on the mind's internal state and its pre-conscious 
>> operations... on all the pruning and editing that is going on in the 
>> buffer zone between when the brain begins working on our in-coming 
>> reality perception stream and when we -- the observer -- 
>> self-perceive our current stream of being.
>> It also seems clear that the brain is pruning as well by drilling 
>> down and focusing in on very specific and micro-structure oriented 
>> tasks such as visual edge detection (which is a critical part of 
>> interpreting visual data). If some dynamic neural micro-structure 
>> decides it has recognized a visual edge, in this example, it 
>> probably fires some synchronized signal as expeditiously as it can, 
>> up the chain of dynamically forming and inter-acting 
>> neural-decision-nets, grabbing the next bucket in an endless stream 
>> needing immediate attention.
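
The edge-detection step described above can be illustrated with a minimal gradient filter over a tiny grayscale grid (a toy sketch only; real cortical edge detection is far richer than a single difference filter):

```python
# A minimal "edge detector": the difference between each pixel and its
# left neighbour. A large magnitude marks a sharp left-right intensity
# change, i.e. a vertical edge. Illustrative toy data, not neuroscience.

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img):
    # One gradient value per adjacent pixel pair in each row.
    return [[abs(row[x] - row[x - 1]) for x in range(1, len(row))]
            for row in img]

print(horizontal_edges(image))
# Each row reads [0, 9, 0]: the edge sits between columns 1 and 2.
```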

Re: Douglas Hofstadter Article

2013-10-30 Thread Telmo Menezes

RE: Douglas Hofstadter Article

2013-10-30 Thread Chris de Morsella



Re: Douglas Hofstadter Article

2013-10-31 Thread Bruno Marchal


On 30 Oct 2013, at 17:08, Chris de Morsella wrote:




needing immediate attention.

I would argue that nervous systems that were not adept at throwing
stuff out as soon as its information value decayed, long ago became
a part of the food supply of long ago ancestor life forms with
nervous systems that were better at throwing stuff out, as soon as
it was no longer needed. I would

Re: Douglas Hofstadter Article

2013-10-31 Thread Telmo Menezes

Re: Douglas Hofstadter Article

2013-11-01 Thread John Mikes
liz wrote (Oct. 24) to Craig:
*What are inorganic atoms? Or rather (since I suspect all atoms are
inorganic), what are organic atoms?*

What are 'atoms'?
(IMO, models of our ignorance (oops: knowledge) about a portion of the
unknowable infinite, explained during the last few centuries of human
development, 'science'.)
JM


On Thu, Oct 24, 2013 at 9:46 PM, LizR  wrote:

> On 25 October 2013 14:31, Craig Weinberg  wrote:
>
>>
>> Looking at natural presences, like atoms or galaxies, the scope of their
>> persistence is well beyond any human relation so they do deserve the
>> benefit of the doubt. We have no reason to believe that they were assembled
>> by anything other than themselves. The fact that we are made of atoms and
>> atoms are made from stars is another point in their favor, whereas no
>> living organism that we have encountered is made of inorganic atoms, or of
>> pure mathematics, or can survive by consuming only inorganic atoms or
>> mathematics.
>>
>
> What are inorganic atoms? Or rather (since I suspect all atoms are
> inorganic), what are organic atoms?
>
>



Re: Douglas Hofstadter Article

2013-11-01 Thread meekerdb

On 11/1/2013 1:20 PM, John Mikes wrote:

liz wrote (Oct. 24) to Craig:
*What are inorganic atoms? Or rather (since I suspect all atoms are inorganic), 
what are organic atoms?*

What are 'atoms'?
(IMO models of our ignorance (oops: knowledge) about a portion of the unknowable 
infinite explained during the latest some centuries of human development 'science'.


The question is what did Craig mean by the term.

Brent



Re: Douglas Hofstadter Article

2013-11-01 Thread Craig Weinberg


On Friday, November 1, 2013 4:20:45 PM UTC-4, JohnM wrote:
>
> liz wrote (Oct. 24) to Craig:
> *What are inorganic atoms? Or rather (since I suspect all atoms are 
> inorganic), what are organic atoms?*
>
> What are 'atoms'? 
> (IMO models of our ignorance (oops: knowledge) about a portion of the 
> unknowable infinite explained during the latest some centuries of human 
> development 'science'. 
> JM
>
>
I agree that atomic theory is not automatically a description of 'what is', 
but I would say that an atom represents the smallest body part, or the 
smallest sense organ that can be detected (indirectly) by our public facing 
sense organs.

Beneath that level of scale, I propose that the organs and bodies 
themselves no longer cohere to our inspection, and are revealed 
increasingly to adhere within the inspection itself. This adhesion vs 
cohesion ratio begins to be seen at the atomic level, as 'electrons' 
represent interatomic adhesion rather than cohesive bodies/shells/orbitals. 

Molecules are only made of atoms in the sense that words are spelled with 
letters. The molecular word-ness is not only an emergent property of the 
letters (it is that also, as syllables are atoms of words and letters are 
atoms of syllables), but the meaning of the word is not emergent, it is 
divergent, from the top-down. The sense of the word can also be seen to 
radiate (figuratively) from the center-out. Atoms build molecules, cells 
build molecules, and molecular expression is fulfilled as both cellular 
activity and atomic activity. It all fits together, because it is all 
divergent from pansensitivity (another neologism that I might like: 
holosemiotics).

Does that sound conceivable?

Craig

>
> On Thu, Oct 24, 2013 at 9:46 PM, LizR wrote:
>
>> On 25 October 2013 14:31, Craig Weinberg wrote:
>>
>>>
>>> Looking at natural presences, like atoms or galaxies, the scope of their 
>>> persistence is well beyond any human relation so they do deserve the 
>>> benefit of the doubt. We have no reason to believe that they were assembled 
>>> by anything other than themselves. The fact that we are made of atoms and 
>>> atoms are made from stars is another point in their favor, whereas no 
>>> living organism that we have encountered is made of inorganic atoms, or of 
>>> pure mathematics, or can survive by consuming only inorganic atoms or 
>>> mathematics.
>>>
>>
>> What are inorganic atoms? Or rather (since I suspect all atoms are 
>> inorganic), what are organic atoms?
>>



RE: Douglas Hofstadter Article

2013-11-01 Thread Chris de Morsella
 

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Friday, November 01, 2013 1:45 PM
To: everything-list@googlegroups.com
Subject: Re: Douglas Hofstadter Article

 




 

>> The sense of the word can also be seen to radiate (figuratively) from the
center-out.

 

From the experiential perspective certainly - speaking from the perspective
of my own experience of my unfolding experiencing. I would agree this is the
normal state of our being… we sense the world radiating from and being
arrayed around our observational foci. However other states of mind are
possible - and have been chronicled throughout the ages -- in which the
normal everyday sense of being becomes stretched out, transformed,
unveiled… and an endless stream of words seeking to describe that which is
ineffable. Self-transcendental accounts from many times and places attempt
to describe a state of being that is very unlike the quotidian state of
self-identity that characterizes our conscious lives.

 

How do you think this self-aware, self-conscious, and intelligent (at least
a little) point of view that we experience as ourselves comes to be? It
seems to spring up eternally in us… always there (when we are in a normal
conscious state). It seems to experience reality unfolding in real time -
though we know that is an illusion - and that by the time we experience
perception - and it seems the perception of arriving at a decision - our
physical brain has done all kinds of processing ahead of and before our
perceptual moment.

 

How does our experiential sense of self arise in our brain/mind in the first
place? Isn't this the crux?

Cheers,

Chris

Craig

 

On Thu, Oct 24, 2013 at 9:46 PM, LizR wrote:

On 25 October 2013 14:31, Craig Weinberg wrote:

 

Looking at natural presences, like atoms or galaxies, the scope of their
persistence is well beyond any human relation so they do deserve the benefit
of the doubt. We have no reason to believe that they were assembled by
anything other than themselves. The fact that we are made of atoms and atoms
are made from stars is another point in their favor, whereas no living
organism that we have encountered is made of inorganic atoms, or of pure
mathematics, or can survive by consuming only inorganic atoms or
mathematics.

 

What are inorganic atoms? Or rather (since I suspect all atoms are
inorganic), what are organic atoms?

-- 
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to everything-li...@googlegroups.com.
To post to this group, send email to everyth...@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.

 


Re: Douglas Hofstadter Article

2013-11-01 Thread Craig Weinberg


On Friday, November 1, 2013 11:27:19 PM UTC-4, cdemorsella wrote:
>
>  
>
>  
>
> *From:* everyth...@googlegroups.com  [mailto:
> everyth...@googlegroups.com ] *On Behalf Of *Craig Weinberg
> *Sent:* Friday, November 01, 2013 1:45 PM
> *To:* everyth...@googlegroups.com 
> *Subject:* Re: Douglas Hofstadter Article
>
>  
>
>
>
> On Friday, November 1, 2013 4:20:45 PM UTC-4, JohnM wrote:
>
> liz wrote (Oct. 24) to Craig:
>
> *What are inorganic atoms? Or rather (since I suspect all atoms are 
> inorganic), what are organic atoms?*
>
>  
>
> What are 'atoms'? 
>
> (IMO models of our ignorance (oops: knowledge) about a portion of the 
> unknowable infinite, explained during the last few centuries of human 
> development, 'science'.)
>
> JM
>
>  
>
>
> I agree that atomic theory is not automatically a description of 'what 
> is', but I would say that an atom represents the smallest body part, or the 
> smallest sense organ that can be detected (indirectly) by our public facing 
> sense organs.
>
> Beneath that level of scale, I propose that the organs and bodies 
> themselves no longer cohere to our inspection, and are revealed 
> increasingly to adhere within the inspection itself. This adhesion vs 
> cohesion ratio begins to be seen at the atomic level, as 'electrons' 
> represent interatomic adhesion rather than cohesive bodies/shells/orbitals. 
>
> Molecules are only made of atoms in the sense that words are spelled with 
> letters. The molecular word-ness is not only an emergent property of the 
> letters (it is that also, as syllables are atoms of words and letters are 
> atoms of syllables), but the meaning of the world is not emergent, it is 
> divergent, from the top-down. The sense of the word can also be seen to 
> radiate (figuratively) from the center-out. Atoms build molecules, cells 
> build molecules, and molecular expression is fulfilled as both cellular 
> activity and atomic activity. It all fits together (because it is all 
> divergent from pansensitivity; another neologism that I might like: 
> holosemiotics).
>
> Does that sound conceivable?
>
>  
>
> >> The sense of the word can also be seen to radiate (figuratively) from 
> the center-out.
>
>  
>
> From the experiential perspective certainly – speaking from the 
> perspective of my own experience of my unfolding experiencing. I would 
> agree this is the normal state of our being… we sense the world radiating 
> from and being arrayed around our observational foci. However other states 
> of mind are possible – and have been chronicled throughout the ages -- in 
> which the normal everyday  sense of being becomes stretched out, 
> transformed, unveiled… and an endless stream of words seeking to describe 
> that which is ineffable. Self-transcendental accounts from many times and 
> places attempt to describe a state of being that is very unlike the 
> quotidian state of self-identity that characterizes our conscious lives.
>

Sure, I agree. I think that every state of being reflects its connection 
with other states of being in a multivalent way. Sense is self-transcendent 
and self transparent. Every metaphor refers not only to the particular 
example, and to the sense that they have in common, but also to the sense 
that all metaphors have. A metaphor is an instruction manual on how to make 
metaphors. That is literally what it means to be self-evident, and that is 
what sense is and what it does - it is its own nature to make itself
evident. It's pre-computational, whereas every computer program is a stored
product of inputs for the purpose of producing a sensible output.

 
>
> How do you think this self-aware, self-conscious, and intelligent (at 
> least a little) point of view  that we experience as ourselves comes to be? 
> It seems to spring up eternally in us… always there (when we are in a 
> normal conscious state). 
>

I think that it comes to be just as prismatic diffraction comes to be - by 
masking the absolute. Our local spring is not only eternally in us, it is 
eternity itself. There is simply nothing that is not made of 100% 
experience...even if it is an experience on one layer of a gap or delay of 
experience on another.
 

> It seems to experience reality unfolding in real time – though we know 
> that is an illusion
>
It's not an illusion. The illusion is that real time is what a clock 
measures. I think that time makes more sense as memory and repetition 
within a topology of experienced significance. There are no illusions, only 
conflicts among layers or inertial frames of experience. An illusion is an 
expectation from one channel that does not translate into another. An 
optical illusio