Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ed Porter wrote:


  Richard,

/(the second half of this post, the part starting with the all-capitalized 
heading, is the most important)/


I agree with your extreme cognitive semantics discussion. 

I agree with your statement that one criterion for “realness” is the 
directness and immediateness of something’s phenomenology.


I agree with your statement that, based on this criterion for 
“realness,” many conscious phenomena, such as qualia, which have 
traditionally fallen under the hard problem of consciousness seem to be 
“real.”


But I have problems with some of the conclusions you draw from these 
things, particularly in your “Implications” section at the top of the 
second column on Page 5 of your paper.


There you state

“…the correct explanation for consciousness is that all of its various 
phenomenological facets deserve to be called as “real” as any other 
concept we have, because there are no meaningful /objective /standards 
that we could apply to judge them otherwise.”


That aspects of consciousness seem real does not provide much of an 
“explanation for consciousness.”  It says something, but not much.  It 
adds little to Descartes’ “I think therefore I am.”  I don’t think it 
provides much of an answer to any of the multiple questions Wikipedia 
associates with Chalmers' hard problem of consciousness.


I would respond as follows.  When I make statements about consciousness 
deserving to be called "real", I am only saying this as a summary of a 
long argument that has gone before.  So it would not really be fair to 
declare that this statement of mine "says something, but not much" 
without taking account of the reasons that have been building up toward 
that statement earlier in the paper.  I am arguing that when we probe 
the meaning of "real" we find that the best criterion of realness is the 
way that the system builds a population of concept-atoms that are (a) 
mutually consistent with one another, and (b) strongly supported by 
sensory evidence (there are other criteria, but those are the main 
ones).  If you think hard enough about these criteria, you notice that 
the qualia-atoms (those concept-atoms that cause the analysis mechanism 
to bottom out) score very high indeed.  This is in dramatic contrast to 
other concept-atoms like hallucinations, which we consider 'artifacts' 
precisely because they score so low.  The difference between these two 
is so dramatic that I think we need to allow the qualia-atoms to be 
called "real" by all our usual criteria, BUT with the added feature that 
they cannot be understood in any more basic terms.


Now, all of that (and more) lies behind the simple statement that they 
should be called real.  It wouldn't make much sense to judge that 
statement by itself.  Only judge the argument behind it.
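
To make the "bottoming out" idea concrete, here is a minimal toy sketch in
Python (not taken from the paper; the concept graph, the atom names, and the
analyze() function are all invented for illustration) of an analysis
mechanism that unpacks concept-atoms and returns no further answer once it
reaches a peripheral qualia-atom:

# Toy sketch only: a hypothetical "analysis mechanism" that unpacks
# concept-atoms into constituents and bottoms out at peripheral atoms.
# Nothing here is taken from the paper; all names are invented.

CONCEPT_GRAPH = {
    "apple": ["red-quale", "round-shape", "sweet-taste"],
    "round-shape": ["edge-curvature", "symmetry"],
    # Peripheral (qualia) atoms have no further constituents:
    "red-quale": [],
    "sweet-taste": [],
    "edge-curvature": [],
    "symmetry": [],
}

def analyze(atom, depth=0, max_depth=10):
    """Recursively unpack an atom, reporting where analysis bottoms out."""
    parts = CONCEPT_GRAPH.get(atom, [])
    if not parts or depth >= max_depth:
        # The mechanism returns with no further answer: from the system's
        # point of view the atom is vivid and "real" but unanalyzable.
        return {atom: "bottoms out"}
    return {atom: [analyze(p, depth + 1, max_depth) for p in parts]}

if __name__ == "__main__":
    from pprint import pprint
    pprint(analyze("apple"))      # partly analyzable, then bottoms out
    pprint(analyze("red-quale"))  # bottoms out immediately

The only point of the sketch is that "bottoms out" is a property of the
mechanism doing the asking, not of the atom itself.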



You further state that some aspects of consciousness have a unique 
status of being beyond the reach of scientific inquiry and give a 
purported reason why they are beyond such a reach. Similarly you say:


”…although we can never say exactly what the phenomena of consciousness 
are, in the way that we give scientific explanations for other things, 
we can nevertheless say exactly why we cannot say anything: so in the 
end, we can explain it.”


First, I would point out as I have in my prior papers that, given the 
advances that are expected to be made in AGI, brain scanning and brain 
science in the next fifty years, it is not clear that consciousness is 
necessarily any less explainable than are many other aspects of physical 
reality.  You admit there are easy problems of consciousness that can be 
explained, just as there are easy parts of physical reality that can be 
explained. But it is not clear that the percent of consciousness that 
will remain a mystery in fifty years is any larger than the percent of 
basic physical reality that will remain a mystery in that time frame.



The paper gives a clear argument for *why* it cannot be explained.

So to contradict that argument (to say "it is not clear that consciousness 
is necessarily any less explainable than are many other aspects of 
physical reality"), you have to say why the argument does not work.  It 
would make no sense for a person to simply assert the opposite of the 
argument's conclusion, without justification.


The argument goes into plenty of specific details, so there are many 
kinds of attack that you could make.



But even if we accept as true your statement that certain phenomena of 
consciousness are beyond analysis, that does little to explain 
consciousness.  In fact, it does not appear to answer any of the hard 
problems of consciousness.  For example, just because (a) we are 
conscious of the distinction used in our own mind’s internal 
representation between sensation of the colors red and blue, (b) we 
allegedly cannot analyze that difference further, and (c) that 
distinction seems subjectively real to us --- that does not shed much 
light on whether or not a p-zombie would be capable of acting just like a 
human without having consciousness of red and blue color qualia.

[agi] PhD study opportunity at Temple University

2008-11-19 Thread Pei Wang
Hi,

I may accept a few PhD students in 2009. Interested people please
visit http://www.cis.temple.edu/~pwang/students.html

Pei Wang
http://www.cis.temple.edu/~pwang/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Ok, well I read part 2 three times and I seem not to be getting the
importance or the crux of it.

I hate to ask this, but could you possibly summarize it in some
different way, in the hopes of getting through to me??

I agree that the standard scientific approach to explanation breaks
when presented with consciousness.

I do not (yet) understand your proposed alternative approach to explanation.

If anyone on this list *does* understand it, feel free to chip in with
your own attempted summary...

thx
ben

On Wed, Nov 19, 2008 at 5:47 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ben Goertzel wrote:
>>
>> Richard,
>>
>> So are you saying that: "According to the ordinary scientific standards of
>> 'explanation', the subjective experience of consciousness cannot be
>> explained ... and as a consequence, the relationship between subjective
>> consciousness and physical data (as required to be elucidated by any
>> solution to Chalmers' "hard problem" as normally conceived) also cannot be
>> explained."
>>
>> If so, then: according to the ordinary scientific standards of
>> explanation, you are not explaining consciousness, nor explaining the
>> relation btw consciousness and the physical ... but are rather **explaining
>> why, due to the particular nature of consciousness and its relationship to
>> the ordinary scientific standards of explanation, this kind of explanation
>> is not possible**
>>
>> ??
>
> No!
>
> If you write the above, then you are summarizing the question that I pose at
> the half-way point of the paper, just before the second part gets underway.
>
> The "ordinary scientific standards of explanation" are undermined by
> questions about consciousness.  They break.  You cannot use them.  They
> become internally inconsistent.  You cannot say "I hereby apply the standard
> mechanism of 'explanation' to Problem X", but then admit that Problem X IS
> the very mechanism that is responsible for determining the  'explanation'
> method you are using, AND the one thing you know about that mechanism is
> that you can see a gaping hole in the mechanism!
>
> You have to find a way to mend that broken standard of explanation.
>
> I do that in part 2.
>
> So far we have not discussed the whole paper, only part 1.
>
>
>
> Richard Loosemore
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects."  -- Robert
Heinlein




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ben Goertzel wrote:


Richard,

So are you saying that: "According to the ordinary scientific standards 
of 'explanation', the subjective experience of consciousness cannot be 
explained ... and as a consequence, the relationship between subjective 
consciousness and physical data (as required to be elucidated by any 
solution to Chalmers' "hard problem" as normally conceived) also cannot 
be explained."


If so, then: according to the ordinary scientific standards of 
explanation, you are not explaining consciousness, nor explaining the 
relation btw consciousness and the physical ... but are rather 
**explaining why, due to the particular nature of consciousness and its 
relationship to the ordinary scientific standards of explanation, this 
kind of explanation is not possible**


??


No!

If you write the above, then you are summarizing the question that I 
pose at the half-way point of the paper, just before the second part 
gets underway.


The "ordinary scientific standards of explanation" are undermined by 
questions about consciousness.  They break.  You cannot use them.  They 
become internally inconsistent.  You cannot say "I hereby apply the 
standard mechanism of 'explanation' to Problem X", but then admit that 
Problem X IS the very mechanism that is responsible for determining the 
 'explanation' method you are using, AND the one thing you know about 
that mechanism is that you can see a gaping hole in the mechanism!


You have to find a way to mend that broken standard of explanation.

I do that in part 2.

So far we have not discussed the whole paper, only part 1.



Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Ed,

I'd be curious for your reaction to

http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html

which explores the limits of scientific and linguistic explanation, in
a different but possibly related way to Richard's argument.

Science and language are powerful tools for explanation but there is
no reason to assume they are all-powerful.  We should push them as far
as we can, but no further...

I agree with Richard that according to standard scientific notions of
explanation, consciousness and its relation to the physical world are
inexplicable.  My intuition and reasoning are probably not exactly the
same as his, but there seems some similarity btw our views...

-- Ben G


On Wed, Nov 19, 2008 at 5:27 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Richard,
>
>
>
> (the second half of this post, the part starting with the all-capitalized
> heading, is the most important)
>
>
>
> I agree with your extreme cognitive semantics discussion.
>
>
>
> I agree with your statement that one criterion for "realness" is the
> directness and immediateness of something's phenomenology.
>
>
>
> I agree with your statement that, based on this criterion for "realness,"
> many conscious phenomena, such as qualia, which have traditionally fallen
> under the hard problem of consciousness seem to be "real."
>
>
>
> But I have problems with some of the conclusions you draw from these things,
> particularly in your "Implications" section at the top of the second column
> on Page 5 of your paper.
>
>
>
> There you state
>
>
>
> "…the correct explanation for consciousness is that all of its various
> phenomenological facets deserve to be called as "real" as any other concept
> we have, because there are no meaningful objective standards that we could
> apply to judge them otherwise."
>
>
>
> That aspects of consciousness seem real does not provide much of an
> "explanation for consciousness."  It says something, but not much.  It adds
> little to Descartes' "I think therefore I am."  I don't think it provides
> much of an answer to any of the multiple questions Wikipedia associates with
> Chalmers' hard problem of consciousness.
>
>
>
> You further state that some aspects of consciousness have a unique status of
> being beyond the reach of scientific inquiry and give a purported reason why
> they are beyond such a reach. Similarly you say:
>
>
>
> "…although we can never say exactly what the phenomena of consciousness are,
> in the way that we give scientific explanations for other things, we can
> nevertheless say exactly why we cannot say anything: so in the end, we can
> explain it."
>
>
>
> First, I would point out as I have in my prior papers that, given the
> advances that are expected to be made in AGI, brain scanning and brain
> science in the next fifty years, it is not clear that consciousness is
> necessarily any less explainable than are many other aspects of physical
> reality.  You admit there are easy problems of consciousness that can be
> explained, just as there are easy parts of physical reality that can be
> explained. But it is not clear that the percent of consciousness that will
> remain a mystery in fifty years is any larger than the percent of basic
> physical reality that will remain a mystery in that time frame.
>
>
>
> But even if we accept as true your statement that certain phenomena of
> consciousness are beyond analysis, that does little to explain
> consciousness.  In fact, it does not appear to answer any of the hard
> problems of consciousness.  For example, just because (a) we are conscious
> of the distinction used in our own mind's internal representation between
> sensation of the colors red and blue, (b) we allegedly cannot analyze that
> difference further, and (c) that distinction seems subjectively real to us
> --- that does not shed much light on whether or not a p-zombie would be
> capable of acting just like a human without having consciousness of red and
> blue color qualia.
>
>
>
> It is not even clear to me that your paper shows consciousness is not an
> "artifact, " as your abstract implies.  Just because something is "real"
> does not mean it is not an "artifact", in many senses of the word, such as
> an unintended, secondary, or unessential aspect of something.
>
>
>
>
>
> THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH EMPHASIS ON THE
> PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT ENOUGH
> ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE
> SUCH BOTTOMING OUT  -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO
> CONSCIOUSNESS.
>
>
>
> It is my belief that if you want to understand consciousness in the context
> of the types of things discussed in your paper, you should focus on the part
> of the molecular framework, which you imply is largely in the foreground,
> that prevents the system from returning with no answer, even when trying to
> analyze a node such as a lowest level input n

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Richard,

So are you saying that: "According to the ordinary scientific standards of
'explanation', the subjective experience of consciousness cannot be
explained ... and as a consequence, the relationship between subjective
consciousness and physical data (as required to be elucidated by any
solution to Chalmers' "hard problem" as normally conceived) also cannot be
explained."

If so, then: according to the ordinary scientific standards of explanation,
you are not explaining consciousness, nor explaining the relation btw
consciousness and the physical ... but are rather **explaining why, due to
the particular nature of consciousness and its relationship to the ordinary
scientific standards of explanation, this kind of explanation is not
possible**

??

ben g




On Wed, Nov 19, 2008 at 4:05 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Ben Goertzel wrote:
>
>> Richard,
>>
>>My first response to this is that you still don't seem to have taken
>>account of what was said in the second part of the paper  -  and, at
>>the same time, I can find many places where you make statements that
>>are undermined by that second part.
>>
>>To take the most significant example:  when you say:
>>
>>
>> > But, I don't see how the hypothesis
>> >
>> > "Conscious experience is **identified with** unanalyzable
>> mind-atoms"
>> >
>> > could be distinguished empirically from
>> >
>> > "Conscious experience is **correlated with** unanalyzable
>> mind-atoms"
>>
>>... there are several concepts buried in there, like [identified
>>with], [distinguished empirically from] and [correlated with] that
>>are theory-laden.  In other words, when you use those terms you are
>>implicitly applying some standards that have to do with semantics and
>>ontology, and it is precisely those standards that I attacked in
>>part 2 of the paper.
>>
>>However, there is also another thing I can say about this statement,
>>based on the argument in part one of the paper.
>>
>>It looks like you are also falling victim to the argument in part 1,
>>at the same time that you are questioning its validity:  one of the
>>consequences of that initial argument was that *because* those
>>concept-atoms are unanalyzable, you can never do any such thing as
>>talk about their being "only correlated with a particular cognitive
>>event" versus "actually being identified with that cognitive event"!
>>
>>So when you point out that the above distinction seems impossible to
>>make, I say:  "Yes, of course:  the theory itself just *said* that!".
>>
>>So far, all of the serious questions that people have placed at the
>>door of this theory have proved susceptible to that argument.
>>
>>
>>
>> Well, suppose I am studying your brain with a super-advanced
>> brain-monitoring device ...
>>
>> Then, suppose that I, using the brain-monitoring device, identify the
>> brain response pattern that uniquely occurs when you look at something red
>> ...
>>
>> I can then pose the question: Is your experience of red *identical* to
>> this brain-response pattern ... or is it correlated with this brain-response
>> pattern?
>>
>> I can pose this question even though the "cognitive atoms" corresponding
>> to this brain-response pattern are unanalyzable from your perspective...
>>
>> Next, note that I can also turn the same brain-monitoring device on
>> myself...
>>
>> So I don't see why the question is unaskable ... it seems askable, because
>> these concept-atoms in question are experience-able even if not
>> analyzable... that is, they still form mental content even though they
>> aren't susceptible to explanation as you describe it...
>>
>> I agree that, subjectively or empirically, there is no way to distinguish
>>
>> "Conscious experience is **identified with** unanalyzable mind-atoms"
>>
>> from
>>
>> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>>
>> and it seems to me that this indicates you have NOT solved the hard
>> problem, but only restated it in a different (possibly useful) way
>>
>
> There are several different approaches and comments that I could take with
> what you just wrote, but let me focus on just one;  the last one.
>
> When you make a statement such as "... it seems to me that .. you have NOT
> solved the hard problem, but only restated it", you are implicitly bringing
> to the table a set of ideas about what it means to "solve" this problem, or
> "explain" consciousness.
>
> Fine so far:  everyone uses the rules of explanation that they have
> acquired over a lifetime - and of course in science we all roughly agree on
> a set of ideas about what it means to explain things.
>
> But what I am trying to point out in this paper is that because of the
> nature of intelligent systems and how they must do their job, the very
> concept of *explanation* is undermined by the topic that in this case we are
> trying to explain.  You cannot just go right ahead and apply a standard of
> explanation right out of the box (so to speak) because unlike explaining
> atoms and explaining stars, in this case you are trying to explain something
> that interferes with the notion of "explanation".

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

My first response to this is that you still don't seem to have taken
account of what was said in the second part of the paper  -  and, at
the same time, I can find many places where you make statements that
are undermined by that second part.

To take the most significant example:  when you say:


 > But, I don't see how the hypothesis
 >
 > "Conscious experience is **identified with** unanalyzable mind-atoms"
 >
 > could be distinguished empirically from
 >
 > "Conscious experience is **correlated with** unanalyzable mind-atoms"

... there are several concepts buried in there, like [identified
with], [distinguished empirically from] and [correlated with] that
are theory-laden.  In other words, when you use those terms you are
implicitly applying some standards that have to do with semantics and 
ontology, and it is precisely those standards that I attacked in
part 2 of the paper.

However, there is also another thing I can say about this statement,
based on the argument in part one of the paper.

It looks like you are also falling victim to the argument in part 1,
at the same time that you are questioning its validity:  one of the
consequences of that initial argument was that *because* those
concept-atoms are unanalyzable, you can never do any such thing as
talk about their being "only correlated with a particular cognitive
event" versus "actually being identified with that cognitive event"!

So when you point out that the above distinction seems impossible to
make, I say:  "Yes, of course:  the theory itself just *said* that!".

So far, all of the serious questions that people have placed at the
door of this theory have proved susceptible to that argument.



Well, suppose I am studying your brain with a super-advanced 
brain-monitoring device ...


Then, suppose that I, using the brain-monitoring device, identify the 
brain response pattern that uniquely occurs when you look at something 
red ...


I can then pose the question: Is your experience of red *identical* to 
this brain-response pattern ... or is it correlated with this 
brain-response pattern?


I can pose this question even though the "cognitive atoms" corresponding 
to this brain-response pattern are unanalyzable from your perspective...


Next, note that I can also turn the same brain-monitoring device on 
myself...


So I don't see why the question is unaskable ... it seems askable, 
because these concept-atoms in question are experience-able even if not 
analyzable... that is, they still form mental content even though they 
aren't susceptible to explanation as you describe it...


I agree that, subjectively or empirically, there is no way to distinguish

"Conscious experience is **identified with** unanalyzable mind-atoms"

from

"Conscious experience is **correlated with** unanalyzable mind-atoms"

and it seems to me that this indicates you have NOT solved the hard 
problem, but only restated it in a different (possibly useful) way


There are several different approaches and comments that I could take 
with what you just wrote, but let me focus on just one;  the last one.


When you make a statement such as "... it seems to me that .. you have 
NOT solved the hard problem, but only restated it", you are implicitly 
bringing to the table a set of ideas about what it means to "solve" this 
problem, or "explain" consciousness.


Fine so far:  everyone uses the rules of explanation that they have 
acquired over a lifetime - and of course in science we all roughly agree 
on a set of ideas about what it means to explain things.


But what I am trying to point out in this paper is that because of the 
nature of intelligent systems and how they must do their job, the very 
concept of *explanation* is undermined by the topic that in this case we 
are trying to explain.  You cannot just go right ahead and apply a 
standard of explanation right out of the box (so to speak) because 
unlike explaining atoms and explaining stars, in this case you are 
trying to explain something that interferes with the notion of 
"explanation".


So when you imply that the theory I propose is weak *because* it 
provides no way to distinguish:


"Conscious experience is **identified with** unanalyzable mind-atoms"

from

"Conscious experience is **correlated with** unanalyzable mind-atoms"

You are missing the main claim that the theory tries to make:  that such 
distinctions are broken precisely *because* of what is going on with the 
explanandum.


You have got to get this point to be able to understand the paper.

I mean, it is okay to disagree with the point and say why (to talk about 
what it means to explain things; to talk about the connection between 
the explanandum and the methods and basic terms of the thing that we 
call "explaining things").  That would be fine.


But at the moment it seems to me that y

Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:

> I just want to be clear, you agree that an agent is able to create a
> better version of itself, not just in terms of a badly defined measure
> such as IQ but also in terms of resource utilization.

Yes, even bacteria can do this.

> Do you agree with the statement: "the global economy in which we live
> is a result of actions of human beings"? How would it be different for
> AGIs? Do you disagree that better agents would be able to build an
> equivalent global economy much faster than the time it took humans
> (assuming all the centuries it took since the last big ice age)?

You cannot separate AGI from the human dominated economy. AGI cannot produce 
smarter AGI without help from the 10^10 humans that are already here until 
machines have completely replaced the humans.

> I'm asking for your comments on the technical issues regardind seed AI
> and RSI, regardless of environment. Is there any technical
> impossibilities for an AGI to improve its own code in all possible
> environments? Also it's not clear to me which types of environments
> (if it's the boxing that makes it impossible, if it's an open
> environment with access to the internet, if it's both or neither) you
> see problems with RSI, could you ellaborate it further?

My paper on RSI refutes one proposed approach to AGI, which would be a self 
improving system developed in isolation. I think that is good because such a 
system would be very dangerous if it were possible. However, I am not aware of 
any serious proposals to do it this way, simply because cutting yourself off 
from the internet just makes the problem harder.

To me, RSI in an open environment is not pure RSI. It is a combination of self 
improvement and learning. My position on this approach is not that it won't 
work but that the problem is not as easy as it seems. I believe that if you do 
manage to create an AGI that is n times smarter than a human, then the result 
would be the same as if you hired O(n log n) people. (The factor of log n 
allows for communication overhead and overlapping knowledge). We don't really 
know what it means to be n times smarter, since we have no way to test it. But 
we would expect that such an AGI could work n times faster, learn n times 
faster, know n times as much, make n times as much money, and make predictions 
as accurately as a vote by n people. I am not sure what other measures we could 
apply that would distinguish greater intelligence from just more people.

So to make real progress, you need to make AGI cheaper than human labor for n = 
about 10^9. And that is expensive. The global economy has a complexity of 10^17 
to 10^18 bits. Most of that knowledge is not written down. It is in human 
brains. Unless we develop new technology like brain scanning, the only way to 
extract it is by communication at the rate of 2 bits per second per person.
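
As a back-of-the-envelope check on those figures (the 10^17 to 10^18 bits, the
10^10 people, and the 2 bits per second are the numbers quoted above; the
arithmetic and the choice of log base are added here, and are only a sketch):

# Rough arithmetic using only the figures quoted in this post.
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

economy_bits = 1e18   # upper estimate of the economy's knowledge complexity
people = 1e10         # rough number of humans
rate_bps = 2.0        # stated communication rate, bits per second per person

# Time to extract all of that knowledge if everyone transmitted in parallel:
seconds = economy_bits / (people * rate_bps)
print(f"parallel extraction time ~ {seconds / SECONDS_PER_YEAR:.1f} years")

# The O(n log n) labor-equivalence claim, for the n of about 10^9 given above:
n = 1e9
print(f"equivalent workers for n = 1e9 ~ {n * math.log(n, 2):.2e}")

So even with every person transmitting in parallel, extraction takes on the
order of a year or two, and for n = 10^9 the log factor changes the
head-count by only about thirty-fold.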

> I want to keep this discussion focused on the technical
> impossibilities of RSI, so I'm going to ignore for now this side
> discussion about the global economy but later we can go
> back to it.

My AGI proposal does not require any technical breakthroughs. But for something 
this expensive, you can't ignore the economic model. It has to be 
decentralized, and there has to be economic incentives for people to transfer 
their knowledge to it, and it has to be paid for. That is the obstacle you need 
to think about.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Richard,

> My first response to this is that you still don't seem to have taken account
> of what was said in the second part of the paper  -  and, at the same time,
> I can find many places where you make statements that are undermined by that
> second part.
>
> To take the most significant example:  when you say:
>
> > But, I don't see how the hypothesis
> >
> > "Conscious experience is **identified with** unanalyzable mind-atoms"
> >
> > could be distinguished empirically from
> >
> > "Conscious experience is **correlated with** unanalyzable mind-atoms"
>
> ... there are several concepts buried in there, like [identified with],
> [distinguished empirically from] and [correlated with] that are
> theory-laden.  In other words, when you use those terms you are implicitly
> applying some standards that have to do with semantics and ontology, and it
> is precisely those standards that I attacked in part 2 of the paper.
>
> However, there is also another thing I can say about this statement, based
> on the argument in part one of the paper.
>
> It looks like you are also falling victim to the argument in part 1, at the
> same time that you are questioning its validity:  one of the consequences of
> that initial argument was that *because* those concept-atoms are
> unanalyzable, you can never do any such thing as talk about their being
> "only correlated with a particular cognitive event" versus "actually being
> identified with that cognitive event"!
>
> So when you point out that the above distinction seems impossible to make,
> I say:  "Yes, of course:  the theory itself just *said* that!".
>
> So far, all of the serious questions that people have placed at the door of
> this theory have proved susceptible to that argument.



Well, suppose I am studying your brain with a super-advanced
brain-monitoring device ...

Then, suppose that I, using the brain-monitoring device, identify the brain
response pattern that uniquely occurs when you look at something red ...

I can then pose the question: Is your experience of red *identical* to this
brain-response pattern ... or is it correlated with this brain-response
pattern?

I can pose this question even though the "cognitive atoms" corresponding to
this brain-response pattern are unanalyzable from your perspective...

Next, note that I can also turn the same brain-monitoring device on
myself...

So I don't see why the question is unaskable ... it seems askable, because
these concept-atoms in question are experience-able even if not
analyzable... that is, they still form mental content even though they
aren't susceptible to explanation as you describe it...

I agree that, subjectively or empirically, there is no way to distinguish

"Conscious experience is **identified with** unanalyzable mind-atoms"

from

"Conscious experience is **correlated with** unanalyzable mind-atoms"

and it seems to me that this indicates you have NOT solved the hard problem,
but only restated it in a different (possibly useful) way

-- Ben G





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
>
> Lastly, about your question re. consciousness of extended objects that are
> not concept-atoms.
>
> I think there is some confusion here about what I was trying to say (my
> fault perhaps).  It is not just the fact of those concept-atoms being at the
> end of the line, it is actually about what happens to the analysis
> mechanism.  So, what I did was point to the clearest cases where people feel
> that a subjective experience is in need of explanation - the qualia - and I
> showed that in that case the explanation is a failure of the analysis
> mechanism because it bottoms out.
>
> However, just because I picked that example for the sake of clarity, that
> does not mean that the *only* place where the analysis mechanism can get
> into trouble must be just when it bumps into those peripheral atoms.  I
> tried to explain this in a previous reply to someone (perhaps it was you):
>  it would be entirely possible that higher level atoms could get built to
> represent [a sum of all the qualia-atoms that are part of one object], and
> if that happened we might find that this higher level atom was partly
> analyzable (it is composed of lower level qualia) and partly not (any
> analysis hits the brick wall after one successful unpacking step).
>


OK, I think I get that... I think that's the easy part ;-)

Indeed, the analysis  mechanism can get into trouble just due to its limited
capacity

Other aspects of the mind can pack together complex mental structures, which
the analysis mechanism perceives as tokens with some evocative power, but
which the analysis mechanism lacks the capacity to decompose into parts.
So, these can appear to it as indecomposable too, in a related but slightly
different sense from peripheral atoms...

ben





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Ben Goertzel
I wasn't trying to be pejorative, just pointing out an apparent
correspondence...

I have nothing against Hegel; I think he was a great philosopher.  His
"Logic" is really fantastic reading.  And, having grown up surrounded by
Marxist wannabe-revolutionaries (most of whom backed away from strict
Marxism in the mid-70s when the truth about the Soviet Union came out in
America), I am also aware there is a lot of deep truth in Marx's thought, in
spite of the evil that others wrought with it after his death...

I just think that Hegel's dialectical philosophy is clearer than your
"reverse reductio ad absurdum", and so I'm curious to know what you think
your formulation *adds* to the classic Hegelian one...

From what I understand, your RRA heuristic says that, sometimes, when both X
and ~X are appealing to rational people, there is some common assumption
underlying the two, which when properly questioned and modified can yield a
new Y that transcends and in some measure synthesizes aspects of X and ~X

I suppose Hegel would have called Y the dialectical synthesis of X and ~X,
right?

BTW, we are certainly not seeing the fall of capitalism now.  Marx's
dialectics-based predictions made a lot of errors; for instance, both he and
Hegel failed to see the emergence of the middle class as a sort of
dialectical synthesis of the ruling class and the proletariat ;-) ... but, I
digress!!

So, how would you apply your species of dialectics to solve the problem of
consciousness?  This is a case where, clearly, rational intelligent and
educated people hold wildly contradictory opinions, e.g.

X1 = consciousness does not exist

X2 = consciousness is a special extra-physical entity that correlates with
certain physical systems at certain times

X3 = consciousness is a kind of physical entity

X4 = consciousness is a property immanent in everything, that gets
focused/structured differently via interaction with different physical
systems

All these positions contradict each other.  How do you suggest to
dialectically synthesize them?  ;-)

ben g

-- Ben

On Wed, Nov 19, 2008 at 1:26 PM, Steve Richfield
<[EMAIL PROTECTED]> wrote:

> Ben:
>
> On 11/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>>
>> This sounds an awful lot like the Hegelian dialectical method...
>
>
> Your point being?
>
> We are all stuck in Hegel's Hell whether we like it or not. Reverse
> Reductio ad Absurdum is just a tool to help guide us through it.
>
> There seems to be a human tendency to say that something "sounds an awful
> lot like (something bad)" to dismiss it, but the crucial thing is often the
> details rather than the broad strokes. For example, the Communist Manifesto
> detailed the coming fall of Capitalism, which we may now be seeing in the
> current financial crisis. Sure, the "solution" proved to be worse than the
> problem, but that doesn't mean that the identification of the problems was
> in error.
>
> From what I can see, ~100% of the (mis?)perceived threat from AGI comes
> from a lack of understanding of RRAA (Reverse Reductio ad Absurdum), both by
> those working in AGI and those by the rest of the world. This clearly has
> the potential of affecting your own future success, so it is probably worth
> the extra 10 minutes or so to dig down to the very bottom of it, understand
> it, discuss it, and then take your reasoned position regarding it. After
> all, your coming super-intelligent AGI will probably have to master RRAA to
> be able to resolve intractable disputes, so you will have to be on top of
> RRAA if you are to have any chance of debugging your AGI.
>
> Steve Richfield
> ==
>
>>  On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield <
>> [EMAIL PROTECTED]> wrote:
>>
>>> Martin,
>>>
>>> On 11/18/08, martin biehl <[EMAIL PROTECTED]> wrote:
>>>
 I don't know what reverse reductio ad absurdum is, so it may not be a
 precise counterexample, but I think you get my point.
>>>
>>>
>>> HERE is the crux of my argument, as other forms of logic fall short of
>>> being adequate to run a world with. Reverse Reductio ad Absurdum is the
>>> first logical tool with the promise to resolve most intractable disputes,
>>> ranging from the abortion debate to the middle east problem.
>>>
>>> Some people get it easily, and some require long discussions, so I'll
>>> post the "Cliff Notes" version here, and if you want it in smaller doses,
>>> just send me an off-line email and we can talk on the phone.
>>>
>>> Reductio ad absurdum has worked unerringly for centuries to test bad
>>> assumptions. This constitutes a proof by lack of counterexample that the
>>> ONLY way to reach an absurd result is by a bad assumption, as otherwise,
>>> reductio ad absurdum would sometimes fail.
>>>
>>> Hence, when two intelligent people reach conflicting conclusions, but
>>> neither can see any errors in the other's logic, it would seem that they
>>> absolutely MUST have at least one bad assumption. Starting from the
>>> absurdity and searching for the assumption is where the reverse in reverse
>>> reductio ad absurdum comes in.

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
Trent,

Feynman's page on wikipedia has it as: "If you can't explain something
to a first year student, then you haven't really understood it." but
Feynman reportedly said it in a number of ways, including the
grandmother variant. I learned about it when taking physics classes a
while ago, so I don't have very useful source info, but I remember
one of my professors saying that Feynman also says it in his books.
But yes, I did a quick search and noticed that many attribute the
grandmother variant to Einstein (which I didn't know - sorry). Some
attribute it to Ernest Rutherford, some talk about Kurt Vonnegut, and
yes, some about the Bible... Well, I guess it's not that important. But
one of my related thoughts is that when teaching AGIs, we should start
with very high-level basic concepts/explanations/world_model and not
dive into great granularity before the high-level concepts are
relatively well understood [/correctly used when generating
solutions]. I oppose the idea of throwing tons of raw data (from very
different granularity levels [and possibly different contexts]) at the
AGI and expecting that it will somehow sort everything [or most of it]
out correctly.
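
The coarse-to-fine ordering described above is basically a curriculum
schedule. A minimal illustrative sketch in Python (the levels, the examples,
and the stubbed trainer are all invented; this is not a proposal for any
particular system):

# Illustrative sketch of the coarse-to-fine teaching order described above.
# The levels, examples, and training stub are invented for illustration only.

curriculum = [
    ("high-level world model", ["objects persist", "agents have goals"]),
    ("mid-level concepts",     ["liquids pour", "tools extend reach"]),
    ("fine-grained detail",    ["viscosity of honey", "torque on a lever"]),
]

def understood_well_enough(level_name):
    """Placeholder: a real system would check whether the concepts at this
    level are being used correctly when generating solutions."""
    return True

def teach(train_step):
    for level_name, examples in curriculum:
        for example in examples:
            train_step(example)
        if not understood_well_enough(level_name):
            # Stay at this granularity rather than diving deeper.
            raise RuntimeError(f"revisit level: {level_name}")

teach(lambda example: None)  # no-op trainer, just to show the control flow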

Jiri

On Wed, Nov 19, 2008 at 3:39 AM, Trent Waddington
<[EMAIL PROTECTED]> wrote:
> On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>>>Trent Waddington wrote:
>>>Apparently, it was Einstein who said that if you can't explain it to
>>>your grandmother then you don't understand it.
>>
>> That was Richard Feynman
>
> When?  I don't really know who said it.. but everyone else on teh
> internets seems to attribute it to Einstein.  I've seen at least one
> site attribute it to the bible (but of course they give no reference).
>
> As such, I think there are two nuggets of wisdom here:  If you can't
> provide references, then your opinion is just as good as mine, and if
> you can provide references, that doesn't excuse you from explaining
> what you're talking about so that everyone can understand.
>
> Two points that many members of this list would do well to heed now and then.
>
> Trent
>
>
>




Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Steve Richfield
Back to reality for a moment...

I have greatly increased the IQs of some pretty bright people since I
started doing this in 2001 (the details are way off topic here, so contact
me off-line for more if you are interested), and now, others are also doing
this. I think that these people give us a tiny glimpse into what directions
an AGI might take. Here are my impressions:

1. They come up with some really bright stuff, like Mike's FQ theory of how
like-minded groups of people tend to make technology stagnate, which few people
can grasp in the minute or so that is available to interest other people.
Hence, their ideas do NOT spread widely, except among others who are bright
enough to get it fairly quickly. From what I have seen, their enhanced IQs
haven't done much for their life success as measured in dollars, but they
have gone in very different directions than they were previously headed, now
that they have some abilities that they didn't previously have.

2.  Enhancing their IQs did NOT seem to alter their underlying belief
system. For example, Dan was and still remains a Baptist minister. However,
he now reads more passages as being metaphorical. We have no problem
carrying on lively political and religious discussions from our VERY
different points of view, with each of us translating our thoughts into the
other's paradigm.

3.  Blind ambition seemed to disappear, being replaced with a "long view" of
things. They seem to be nicer people for the experience. However, given
their long view, I wouldn't ever recommend becoming an adversary, as they
have no problem with gambits - losing a skirmish to facilitate winning a
greater battle. If you think you are winning, then you had best stop and
look where this might all end up.

4.  They view most people a little like honey bees - useful but stupid. They
often attempt to help others by pointing them in better directions, but
after little/no success for months/years, they eventually give up and just
let everyone destroy their lives and kill themselves. This results in what
might at first appear to be a callous disregard for human life, but which in
reality is just a realistic view of the world. I suspect that future AGIs
would encounter the same effect.

Hence, unless/until someone displays some reason why an AGI might want to
take over the world, I remain unconcerned. What DOES concern me is stupid
people who think that the population can be controlled without allowing for
the few bright people who can figure out how to be the butterfly that starts
the hurricane; chaos theory presumes the non-computability of things that, if
computable, will be computed. The resulting hurricane might be blamed on the
butterfly, when in reality, there would have been a hurricane anyway - it
just would have been somewhat different. In short, don't blame the AGI for
the fallen bodies of those who would exert unreasonable control.

I see the hope for the future being in the hands of these cognitively
enhanced people. It shouldn't be too much longer until these people start
rising to the top of the AI (and other) ranks. Imagine Loosemore with dozens
more IQ points and the energy to go along with it. Hence, it will be these
people who will make the decisions as to whether we have AGIs and what their
place in the future is.

Then, "modern" science will be reformed enough to avoid having unfortunate
kids have their metabolic control systems trashed by general anesthetics,
etc. (now already being done at many hospitals, including U of W and
Evergreen here in the Seattle area), and we will stop making people who can
be cognitively enhanced. Note that for every such candidate person, there
are dozens of low-IQ gas station attendants, etc., who were subjected to the
same stress, but didn't do so well. Then, either we will have our AGIs in
place, or with no next generation of cognitively enhanced people, we will be
back to the stone age of stupid people. Society has ~50 years to make their
AGI work before this generation of cognitively enhanced people is gone.

Alternatively, some society might intentionally trash kids' metabolisms just
to induce this phenomenon, as a means to secure control when things crash.
At that point, either there is an AGI to take over, or that society will
take over.

In short, this is a complex area that is really worth understanding if you
are interested in where things are going.

Steve Richfield





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

I re-read your paper and I'm afraid I really don't grok why you think it 
solves Chalmers' hard problem of consciousness...


It really seems to me like what you're suggesting is a "cognitive 
correlate of consciousness", to morph the common phrase "neural 
correlate of consciousness" ...


You seem to be stating that when X is an unanalyzable, pure atomic 
sensation from the perspective of cognitive system C, then C will 
perceive X as a raw quale ... unanalyzable and not explicable by 
ordinary methods of explication, yet, still subjectively real...


But, I don't see how the hypothesis

"Conscious experience is **identified with** unanalyzable mind-atoms"

could be distinguished empirically from

"Conscious experience is **correlated with** unanalyzable mind-atoms"

I think finding cognitive correlates of consciousness is interesting, 
but I don't think it constitutes solving the hard problem in Chalmers' 
sense...


I grok that you're saying "consciousness feels inexplicable because it 
has to do with atoms that the system can't explain, due to their role as 
its primitive atoms" ... and this is a good idea, but, I don't see how 
it bridges the gap btw subjective experience and empirical data ...


What it does is explain why, even if there *were* no hard problem, 
cognitive systems might feel like there is one, in regard to their 
unanalyzable atoms


Another worry I have is: I feel like I can be conscious of my son, even 
though he is not an unanalyzable atom.  I feel like I can be conscious 
of the unique impression he makes ... in the same way that I'm conscious 
of redness ... and, yeah, I feel like I can't fully explain the 
conscious impression he makes on me, even though I can explain a lot of 
things about him...


So I'm not convinced that atomic sensor input is the only source of raw, 
unanalyzable consciousness...


My first response to this is that you still don't seem to have taken 
account of what was said in the second part of the paper  -  and, at the 
same time, I can find many places where you make statements that are 
undermined by that second part.


To take the most significant example:  when you say:

> But, I don't see how the hypothesis
>
> "Conscious experience is **identified with** unanalyzable mind-atoms"
>
> could be distinguished empirically from
>
> "Conscious experience is **correlated with** unanalyzable mind-atoms"

... there are several concepts buried in there, like [identified with], 
[distinguished empirically from] and [correlated with] that are 
theory-laden.  In other words, when you use those terms you are 
implicitly applying some standards that have to do with semantics and 
ontology, and it is precisely those standards that I attacked in part 2 
of the paper.


However, there is also another thing I can say about this statement, 
based on the argument in part one of the paper.


It looks like you are also falling victim to the argument in part 1, at 
the same time that you are questioning its validity:  one of the 
consequences of that initial argument was that *because* those 
concept-atoms are unanalyzable, you can never do any such thing as talk 
about their being "only correlated with a particular cognitive event" 
versus "actually being identified with that cognitive event"!


So when you point out that the above distinction seems impossible to 
make, I say:  "Yes, of course:  the theory itself just *said* that!".


So far, all of the serious questions that people have placed at the door 
of this theory have proved susceptible to that argument.


That was essentially what I did when talking to Chalmers.  He came up 
with an objection very like the one you gave above, so I said: "Okay, 
the answer is that the theory itself predicts that you *must* find that 
question to be a stumbling block. AND, more importantly, you should 
be able to see that the strategy I am using here is a strategy that I 
can flexibly deploy to wipe out a whole class of objections, so the only 
way around that strategy (if you want to bring down this theory) is to 
come up with a counter-strategy that demonstrably has the structure to 
undermine my strategy, and I don't believe you can do that."


His only response, IIRC, was "Huh!  This looks like it might be new. 
Send me a copy."


To make further progress in this discussion it is important, I think, to 
understand both the fact that I have that strategy, and also to 
appreciate that the second part of the paper went far beyond that.



Lastly, about your question re. consciousness of extended objects that 
are not concept-atoms.


I think there is some confusion here about what I was trying to say (my 
fault perhaps).  It is not just the fact of those concept-atoms being at 
the end of the line, it is actually about what happens to the analysis 
mechanism.  So, what I did was point to the clearest cases where people 
feel that a subjective experience is in need of explanation - the qualia - 
and I showed that in that case the explanation is a failure of the analysis 
mechanism because it bottoms out.

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Steve Richfield
Ben:

On 11/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> This sounds an awful lot like the Hegelian dialectical method...


Your point being?

We are all stuck in Hegel's Hell whether we like it or not. Reverse Reductio
ad Absurdum is just a tool to help guide us through it.

There seems to be a human tendency to say that something "sounds an awful
lot like (something bad)" to dismiss it, but the crucial thing is often the
details rather than the broad strokes. For example, the Communist Manifesto
detailed the coming fall of Capitalism, which we may now be seeing in the
current financial crisis. Sure, the "solution" proved to be worse than the
problem, but that doesn't mean that the identification of the problems was
in error.

From what I can see, ~100% of the (mis?)perceived threat from AGI comes from
a lack of understanding of RRAA (Reverse Reductio ad Absurdum), both by
those working in AGI and those by the rest of the world. This clearly has
the potential of affecting your own future success, so it is probably worth
the extra 10 minutes or so to dig down to the very bottom of it, understand
it, discuss it, and then take your reasoned position regarding it. After
all, your coming super-intelligent AGI will probably have to master RRAA to
be able to resolve intractable disputes, so you will have to be on top of
RRAA if you are to have any chance of debugging your AGI.

Steve Richfield
==

>  On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield <
> [EMAIL PROTECTED]> wrote:
>
>> Martin,
>>
>> On 11/18/08, martin biehl <[EMAIL PROTECTED]> wrote:
>>
>>> I don't know what reverse reductio ad absurdum is, so it may not be a
>>> precise counterexample, but I think you get my point.
>>
>>
>> HERE is the crux of my argument, as other forms of logic fall short of
>> being adequate to run a world with. Reverse Reductio ad Absurdum is the
>> first logical tool with the promise to resolve most intractable disputes,
>> ranging from the abortion debate to the middle east problem.
>>
>> Some people get it easily, and some require long discussions, so I'll post
>> the "Cliff Notes" version here, and if you want it in smaller doses, just
>> send me an off-line email and we can talk on the phone.
>>
>> Reductio ad absurdum has worked unerringly for centuries to test bad
>> assumptions. This constitutes a proof by lack of counterexample that the
>> ONLY way to reach an absurd result is by a bad assumption, as otherwise,
>> reductio ad absurdum would sometimes fail.
>>
>> Hence, when two intelligent people reach conflicting conclusions, but
>> neither can see any errors in the other's logic, it would seem that they
>> absolutely MUST have at least one bad assumption. Starting from the
>> absurdity and searching for the assumption is where the reverse in reverse
>> reductio ad absurdum comes in.
>>
>> If their false assumptions were different, then one or both parties would
>> quickly discover them in discussion. However, when the argument stays on the
>> surface, the ONLY place remaining to hide an invalid assumption is that they
>> absolutely MUST share the SAME invalid assumptions.
>>
>> Of course if our superintelligent AGI approaches them and points out their
>> shared invalid assumption, then they would probably BOTH attack the AGI, as
>> their invalid assumption may be their only point of connection. It appears
>> that breaking this deadlock absolutely must involve first teaching both
>> parties what reverse reductio ad absurdum is all about, as I am doing here.
>>
>> For example, take the abortion debate. It is obviously crazy to be making
>> and killing babies, and it is a proven social disaster to make this illegal
>> - an obvious reverse reductio ad absurdum situation.
>>
>> OK, so let's look at societies where abortion is no issue at all, e.g.
>> Muslim societies, where it is freely available but no one gets one. There,
>> children are treated as assets, whereas in all respects we treat them as
>> liabilities: mothers are stuck with unwanted children, fathers must pay
>> child support, children can't be bought or sold, and there is no expectation
>> that they will look after their parents in their old age, etc.
>>
>> In short, BOTH parties believe that children should be treated as
>> liabilities, but when you point this out, they dispute the claim. Why should
>> mothers be stuck with unwanted children? Why not allow sales to parties who
>> really want them? There are no answers to these and other similar questions
>> because the underlying assumption is clearly wrong.
>>
>> The middle east situation is more complex but constructed on similar
>> invalid assumptions.
>>
>> Are we on the same track now?
>>
>> Steve Richfield
>>  
>>
>>> 2008/11/18 Steve Richfield <[EMAIL PROTECTED]>
>>>
  To all,

 I am considering putting up a web site to "filter the crazies" as
 follows, and would appreciate all comments, suggestions, etc.

 Everyone visiting

[agi] Special Issue of Neurocomputing on Brain Building: Call for papers

2008-11-19 Thread Ben Goertzel
Hello all,

I'm helping my friends John Taylor and Hugo de Garis recruit authors for
this special issue of Neurocomputing

http://goertzel.org/neurocomp.htm

If anyone on this list has relevant papers to submit, the deadline is Jan.
15, and submission instructions are in the web page linked above.

thanks
Ben Goertzel





Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Daniel Yokomizo
On Wed, Nov 19, 2008 at 1:21 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:
>
>> On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney
>> <[EMAIL PROTECTED]> wrote:
>> > Seed AI is a myth.
>> > http://www.mattmahoney.net/agi2.html (section 2).
>>
>> (I'm assuming you meant the section "5.1.
>> Recursive Self Improvement")
>
> That too, but mainly in the argument for the singularity:
>
> "If humans can produce smarter than human AI, then so can they, and faster"
>
> I am questioning the antecedent, not the consequent.
>
> RSI is not a matter of an agent with IQ of 180 creating an agent with an IQ 
> of 190.

I just want to be clear: you agree that an agent is able to create a
better version of itself, not just in terms of a badly defined measure
such as IQ but also in terms of resource utilization.


> Individual humans can't produce much of anything beyond spears and clubs 
> without the global economy in which we live. To count as self improvement, 
> the global economy has to produce a smarter global economy. This is already 
> happening.


Do you agree with the statement "the global economy in which we live
is a result of the actions of human beings"? How would it be different for
AGIs? Do you disagree that better agents would be able to build an
equivalent global economy much faster than humans did (counting all the
centuries since the last big ice age)?


> My paper on RSI referenced in section 5.1 (and submitted to JAGI) only 
> applies to systems without external input. It would apply to the unlikely 
> scenario of a program that could understand its own source code and rewrite 
> itself until it achieved vast intelligence while being kept in isolation for 
> safety reasons. This scenario often came up on the SL4 list. It was referred 
> to as AI boxing. It was argued that a superhuman AI could easily trick its 
> relatively stupid human guards into releasing it, and there were some 
> experiments where people played the role of the AI and proved just that, even 
> without vastly superior intelligence.
>
> I think that the boxed AI approach has been discredited by now as being 
> impractical to develop for reasons independent of its inherent danger and my 
> proof that it is impossible. All of the serious projects in AI are taking 
> place in open environments, often with data collected from the internet, for 
> simple reasons of expediency. My argument against seed AI is in this type of 
> environment.


I'm asking for your comments on the technical issues regarding seed AI
and RSI, regardless of environment. Are there any technical
impossibilities for an AGI to improve its own code in all possible
environments? Also, it's not clear to me in which types of environments
you see problems with RSI (whether it's the boxing that makes it
impossible, an open environment with access to the internet, both, or
neither); could you elaborate further?


> It is extremely expensive to produce a better global economy. The current 
> economy is worth about US$ 1 quadrillion. No small group is going to control 
> any significant part of it.

I want to keep this discussion focused on the technical
impossibilities of RSI, so I'm going to ignore for now this side
discussion about the global economy but later we can go back to it.

> -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo




Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Ben Goertzel
BTW, for those who are newbies to this list, Matt's argument attempting to
refute RSI was extensively discussed on this list a few months ago.

In my view, I refuted his argument pretty clearly, although he does not
agree.

His mathematics is correct, but seemed to me irrelevant to real-life RSI for
two reasons:

a) assuming a system isolated from the environment, which won't actually be
the case

b) using an intelligence measure focused solely on description length rather
than incorporating runtime
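
For anyone unfamiliar with the distinction in (b): a description-length
measure charges a program only for its size, while a runtime-aware measure
also charges it for computation time. One standard example of the latter,
given purely as an illustration (not necessarily the measure at issue in
Matt's paper), is Levin's Kt:

\[
K(x) = \min_{U(p)=x} |p|
\qquad \text{vs.} \qquad
Kt(x) = \min_{U(p)=x \ \text{within}\ t\ \text{steps}} \bigl( |p| + \log_2 t \bigr)
\]

Under K alone, a tiny brute-force searcher that runs for astronomical time
can score as well as a genuinely clever program; charging for t closes that
loophole, which is why the runtime term matters for real-life RSI.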

ben g

On Wed, Nov 19, 2008 at 10:21 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:
>
> > On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney
> > <[EMAIL PROTECTED]> wrote:
> > > Seed AI is a myth.
> > > http://www.mattmahoney.net/agi2.html (section 2).
> >
> > (I'm assuming you meant the section "5.1.
> > Recursive Self Improvement")
>
> That too, but mainly in the argument for the singularity:
>
> "If humans can produce smarter than human AI, then so can they, and faster"
>
> I am questioning the antecedent, not the consequent.
>
> RSI is not a matter of an agent with IQ of 180 creating an agent with an IQ
> of 190. Individual humans can't produce much of anything beyond spears
> and clubs without the global economy in which we live. To count as self
> improvement, the global economy has to produce a smarter global economy.
> This is already happening.
>
> My paper on RSI referenced in section 5.1 (and submitted to JAGI) only
> applies to systems without external input. It would apply to the unlikely
> scenario of a program that could understand its own source code and rewrite
> itself until it achieved vast intelligence while being kept in isolation for
> safety reasons. This scenario often came up on the SL4 list. It was referred
> to as AI boxing. It was argued that a superhuman AI could easily trick its
> relatively stupid human guards into releasing it, and there were some
> experiments where people played the role of the AI and proved just that,
> even without vastly superior intelligence.
>
> I think that the boxed AI approach has been discredited by now as being
> impractical to develop for reasons independent of its inherent danger and my
> proof that it is impossible. All of the serious projects in AI are taking
> place in open environments, often with data collected from the internet, for
> simple reasons of expediency. My argument against seed AI is in this type of
> environment. It is extremely expensive to produce a better global economy.
> The current economy is worth about US$ 1 quadrillion. No small group is
> going to control any significant part of it.
>
> -- Matt Mahoney, [EMAIL PROTECTED]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein





Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:

> On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > Seed AI is a myth.
> > http://www.mattmahoney.net/agi2.html (section 2).
> 
> (I'm assuming you meant the section "5.1.
> Recursive Self Improvement")

That too, but mainly in the argument for the singularity:

"If humans can produce smarter than human AI, then so can they, and faster"

I am questioning the antecedent, not the consequent.

RSI is not a matter of an agent with IQ of 180 creating an agent with an IQ of 
190. Individual humans can't produce much of anything beyond spears and 
clubs without the global economy in which we live. To count as self 
improvement, the global economy has to produce a smarter global economy. This 
is already happening.

My paper on RSI referenced in section 5.1 (and submitted to JAGI) only applies 
to systems without external input. It would apply to the unlikely scenario of a 
program that could understand its own source code and rewrite itself until it 
achieved vast intelligence while being kept in isolation for safety reasons. 
This scenario often came up on the SL4 list. It was referred to as AI boxing. It 
was argued that a superhuman AI could easily trick its relatively stupid human 
guards into releasing it, and there were some experiments where people played 
the role of the AI and proved just that, even without vastly superior 
intelligence.

I think that the boxed AI approach has been discredited by now as being 
impractical to develop for reasons independent of its inherent danger and my 
proof that it is impossible. All of the serious projects in AI are taking place 
in open environments, often with data collected from the internet, for simple 
reasons of expediency. My argument against seed AI is in this type of 
environment. It is extremely expensive to produce a better global economy. The 
current economy is worth about US$ 1 quadrillion. No small group is going to 
control any significant part of it.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Daniel Yokomizo
On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Steve, what is the purpose of your political litmus test? If you are trying
> to assemble a team of seed-AI programmers with the "correct" ethics, forget
> it. Seed AI is a myth.
> http://www.mattmahoney.net/agi2.html (section 2).

(I'm assuming you meant the section "5.1. Recursive Self Improvement")

Why do you call it a myth? Assuming that an AI (not necessarily
general) that is capable of software programming is possible, and that
such an AI is itself created using software, it's entirely plausible that
it would be able to find places for improvement in its own source code, be
it in time or space usage, missed opportunities for concurrency and
parallelism, improved caching, more efficient data structures, etc. In
such a scenario the AI would be able to create a better version of itself;
how many times this process can be repeated depends heavily on the
cognitive capabilities of the AI and its performance.
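
To make the micro-scale version of that concrete, here is a toy sketch
(hypothetical code written for this email, not taken from any existing AI
system) of a program that benchmarks two of its own candidate
implementations and adopts the faster one for its workload. An AI capable
of software programming would be doing this kind of measurement and
rewriting across its whole codebase rather than over one hand-picked
function:

  // Toy illustration only: measure two candidate membership tests
  // and keep whichever is faster for the workload at hand.
  #include <algorithm>
  #include <chrono>
  #include <cstdio>
  #include <unordered_set>
  #include <vector>

  static bool member_vec(const std::vector<int>& v, int x) {
    return std::find(v.begin(), v.end(), x) != v.end();   // linear scan
  }

  static bool member_set(const std::unordered_set<int>& s, int x) {
    return s.count(x) != 0;                                // hash lookup
  }

  template <typename F>
  static double time_ms(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
  }

  int main() {
    const int n = 20000, queries = 20000;
    std::vector<int> v;
    std::unordered_set<int> s;
    for (int i = 0; i < n; ++i) { v.push_back(i); s.insert(i); }

    volatile int hits = 0;   // keeps the compiler from discarding the work
    double tv = time_ms([&] {
      for (int q = 0; q < queries; ++q) hits = hits + member_vec(v, q % (2 * n));
    });
    double ts = time_ms([&] {
      for (int q = 0; q < queries; ++q) hits = hits + member_set(s, q % (2 * n));
    });

    // The "self-improvement" step: adopt whichever implementation measured faster.
    std::printf("vector scan %.1f ms, hash set %.1f ms, adopting the %s\n",
                tv, ts, ts < tv ? "hash set" : "vector scan");
    return 0;
  }

None of this requires new theory; it is just measurement plus substitution,
applied to the program's own parts.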

If we move to an AGI, it would be able to come up with better tools
(e.g. compilers, type systems, programming languages), improve its
substrate (e.g. write a better OS, reimplement its performance-critical
parts in an FPGA), come up with better chips, etc., without even
needing to come up with new theories (i.e. there's sufficient
information already out there that, if synthesized, can lead to better
tools). This would result in another version of the AGI with better
software and hardware, reduced space/time usage, and greater concurrency.

One could argue that it'll only ever be a faster/leaner
AGI and that it will very quickly get stuck, producing only bad ideas. But
if it's truly general it would at least be able to come up with all the
science/tech human beings are eventually capable of, and if the AGI can't
progress further, that means humans can't progress further either. If
humans are able to progress, then an AGI would be able to progress at
least as quickly as humans, but probably much faster (due to its own
performance enhancements).

I am really interested to see your comments on this line of reasoning.

> -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo




Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> >My definition of pain is negative reinforcement in a system that learns.
> 
> IMO, pain is more like data with the potential to cause disorder in
> hard-wired algorithms. I'm not saying this fully covers it but it's
> IMO already out of the Autobliss scope.

You might be thinking of continuous or uncontrollable pain, as when one rat is 
shocked but can stop the shock by turning a paddle wheel, while a second rat 
receives identical shocks and its paddle wheel has no effect. Only the second 
rat develops stomach ulcers.

When autobliss is run with two negative arguments so that it is punished no 
matter what it does, the neural network weights take on random values and it 
never learns a function. It also dies, but only because I programmed it that 
way.

-- Matt Mahoney, [EMAIL PROTECTED]







Definition of pain (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:
> add-rule kill-file "Matt Mahoney"

Mark, whatever happened to that friendliness-religion you caught a few months 
ago?

Anyway, with regard to grounding, internal feedback, and volition, autobliss 
already has two of these three properties, and the third could be added with an 
insignificant effect.

With respect to grounding, I assume you mean association of language symbols 
with nonverbal input. For example, a text-based AI could associate the symbols 
"red" with "rose" and "stop sign", but if it lacked vision then these symbols 
would not be grounded. To ground "red" it would need to be associated with red 
sensing pixels.

In this sense, autobliss has grounded the symbols "aah" and "ouch" which make 
up its limited language by associating them with the reinforcement signal. 
Thus, it adjusts its behavior to say "ouch" less often, which is just what a 
human would do if the negative reinforcement signal were pain. (Also, to 
address Jiri Jelinek's question, it makes no conceptual difference if we swap 
the symbols so that "aah" represents pain. I did it this way just to make it 
more clear what autobliss is doing. The essential property is reinforcement 
learning).

Also, autobliss has volition, meaning it has free will and makes decisions that 
increase its expected reward. Free will is implemented by the rand() function. 
Behaviorally, there is no distinction between free choice and random behavior. 
Belief in free will, which is a separate question, is implemented in humans by 
making random choices and then making up reasons that seem rational for making 
the choice we did. Monkeys do this too.
http://www.world-science.net/othernews/071106_rationalize.htm
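
For concreteness, here is a toy sketch of the kind of learner under
discussion. To be clear, this is not the autobliss source; the task, the
constants, and the update rule below are invented purely for illustration.
It shows the two properties at issue: the words "aah" and "ouch" are
grounded in nothing but the reinforcement signal, and the choices are made
with rand().

  #include <cstdio>
  #include <cstdlib>

  int main() {
    double mem[4] = {0, 0, 0, 0};        // four input weights, one neuron
    int ouches = 0;
    for (int step = 0; step < 4000; ++step) {
      int input = std::rand() % 4;       // one of four stimuli
      // "Volition": act on expected reward, with occasional random exploration.
      bool act = (mem[input] > 0) || (std::rand() % 50 == 0);
      // Hypothetical teacher: acting is correct for stimuli 0 and 1 only.
      double r = (act == (input < 2)) ? 1.0 : -1.0;
      std::printf(r > 0 ? "aah\n" : "ouch\n");      // its entire vocabulary
      if (r < 0) ++ouches;
      for (int i = 0; i < 4; ++i) mem[i] *= 0.99;   // slow forgetting
      mem[input] += 0.1 * (act ? r : -r);  // reinforce the action actually taken
    }
    std::printf("total ouch count: %d; the ouches thin out as it learns\n", ouches);
    return 0;
  }

The essential property, as above, is nothing more than reinforcement
learning.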

Autobliss lacks internal feedback, although I don't see why this matters much. 
Neural networks often use lateral inhibition, activation fatigue, and weight 
decay as negative feedback loops to make them more stable. Autobliss has only 
one neuron (with 4 inputs) so lateral inhibition is not possible. However I 
could add weight decay by adding the following code inside the main loop:

  // decay each of the four input weights a little on every pass
  for (int i = 0; i < 4; ++i)
    mem[i] *= 0.99;

This would keep the input weights from getting too large, but also cause 
autobliss to slowly forget its lessons. It would require occasional 
reinforcement to correct its mistakes. However, this effect could be made 
arbitrarily small by using a decay factor arbitrarily close to 1.
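
To put a number on "arbitrarily small": with decay factor d applied once per
pass through the loop, an unreinforced weight shrinks geometrically, and its
half-life in passes is

\[
w_n = d^{\,n} w_0 , \qquad
n_{1/2} = \frac{\ln 2}{\ln(1/d)} \approx 69 \ \text{at}\ d = 0.99 , \quad
\approx 693 \ \text{at}\ d = 0.999 .
\]

So the forgetting can be pushed out as far as desired by moving d toward 1.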

Anyway, I don't expect this to resolve Mark's disagreement. Intuitively, 
everyone "knows" that autobliss doesn't really experience pain, so Mark will 
just keep adding conditions until nothing less than a human brain meets his 
requirements, all the time denying that he is making choices about what feels 
pain and what doesn't.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Eric Baum

>> I completed the first draft of a technical paper on consciousness
>> the other day.  It is intended for the AGI-09 conference, and it
>> can be found at:


Ben> Hi Richard,

Ben> I don't have any comments yet about what you have written,
Ben> because I'm not sure I fully understand what you're trying to
Ben> say... I hope your answers to these questions will help clarify
Ben> things.

Ben> It seems to me that your core argument goes something like this:

Ben> That there are many concepts for which an introspective analysis
Ben> can only return the concept itself.  That this recursion blocks
Ben> any possible explanation.  That consciousness is one of these
Ben> concepts because "self" is inherently recursive.  Therefore,
Ben> consciousness is explicitly blocked from having any kind of
Ben> explanation.

Haven't read the paper yet, but the situation with introspection 
is the following:

Introspection accesses a meaning level, at which you can summon and
use concepts (subroutines) by name, but you are protected essentially 
by information hiding from looking at the code that implements them.

Consider for example summoning Microsoft Word to perform some task.
You know what you are doing, why you are doing it, how you intend to
use it, but you have no idea of the code within Microsoft Word. The
same is true for internal concepts within your mind.

Your mind is no more built to be able to look inside its subroutines than
my laptop is built to output its internal transistor values. Partial
results within subroutines are not meaningful; your conscious
processing is in terms of meaningful quantities.
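
The software analogy can be made literal. A minimal sketch (an invented
class, just to show the shape of the point):

  #include <string>

  class Speller {
   public:
    // The "meaning level": a capability you can summon and use by name.
    bool looks_misspelled(const std::string& word) const {
      return !in_dictionary(word);
    }
   private:
    // The "code inside the subroutine": private, invisible to every caller,
    // and free to change without the caller ever noticing.
    bool in_dictionary(const std::string& word) const {
      return word == "the" || word == "cat";   // stand-in for the real lookup
    }
  };

  int main() {
    Speller s;
    return s.looks_misspelled("cat") ? 1 : 0;  // uses the name, never the internals
  }

Introspection gives you access of exactly this kind: the names and their
results, never the partial values inside.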

What is Thought? (MIT Press, 2004) discusses this, in Chap 14 which 
answers most questions about consciousness IMO.




Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Trent Waddington
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>>Trent Waddington wrote:
>>Apparently, it was Einstein who said that if you can't explain it to
>>your grandmother then you don't understand it.
>
> That was Richard Feynman

When?  I don't really know who said it.. but everyone else on teh
internets seems to attribute it to Einstein.  I've seen at least one
site attribute it to the bible (but of course they give no reference).

As such, I think there are two nuggets of wisdom here: if you can't
provide references, then your opinion is just as good as mine, and if
you can provide references, that doesn't excuse you from explaining
what you're talking about so that everyone can understand.

Two points that many members of this list would do well to heed now and then.

Trent




Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
>Matt Mahoney wrote:
>Autobliss...

Imagine that there is another human language which is the same as
English, except that the pain/pleasure-related words have the opposite
meaning. Then consider what that would mean for your Autobliss.

>My definition of pain is negative reinforcement in a system that learns.

IMO, pain is more like data with the potential to cause disorder in
hard-wired algorithms. I'm not saying this fully covers it but it's
IMO already out of the Autobliss scope.

>Trent Waddington wrote:
>Apparently, it was Einstein who said that if you can't explain it to
>your grandmother then you don't understand it.

That was Richard Feynman

Regards,
Jiri Jelinek

PS: Sorry if I'm missing anything. Being busy, I don't read all posts.

