Re: [agi] (video)The Future of Cognitive Computing

2007-01-23 Thread Eugen Leitl
On Mon, Jan 22, 2007 at 05:26:43PM -0800, Matt Mahoney wrote:

> The issues of consciousness have been discussed on the singularity list.  
> These are hard questions.

I'm not sure questions about anything as ill-defined as consciousness
are meaningful.
 
> - If your brain was scanned and backed up to disk, would you still be 
> conscious after you die?

Thought is a process; a static image has only the potential for a process,
and then only if it is resumed at some point.

> - Does a thermostat want to keep the room at a constant temperature, or does 
> it only behave as if that is what it wants?  (Ask this question about human 
> behavior).

I don't understand your question. It depends on your definition
of "want".

> - Do you control your own thoughts, or is your brain a computer whose outputs 
> are predictable given its inputs and internal state?

Even a deterministic process is not necessarily predictable if you lack
the information and the means to predict it. You seem to be rewording the
"free will vs. determinism" pseudodichotomy. A behaving robot can't
predict anything about its internal state: it lacks information about its
future input (assuming a nontrivial environment, the only kind worth
building for), it lacks full information about its internal state (a
system cannot represent a complete description of itself), and by trying
to predict itself it necessarily changes its own state, making the
attempt futile. People are not even deterministic systems, being
nonlinear, noisy physical systems, so the question needs to be unasked.
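To make that concrete, here is a toy sketch (mine, purely illustrative,
nothing from Matt's post) of a fully deterministic system that still
defeats an imperfectly informed predictor: the chaotic logistic map.

# Deterministic but practically unpredictable: the logistic map.
# A predictor whose state estimate is off by one part in 10^12
# loses essentially all predictive power within a few dozen steps.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

true_state = 0.3
estimate = 0.3 + 1e-12   # the predictor's slightly imperfect measurement

for step in range(60):
    true_state = logistic(true_state)
    estimate = logistic(estimate)

print(abs(true_state - estimate))   # the error has grown to order 1

Determinism buys the predictor nothing without exact state information
and an exact model.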
 
> The questions are hard because humans (and other animals) are programmed 
> through evolution to fear death and to believe in free will (ability to 
> control ones thoughts and the environment).  Those animals that behaved 
> differently did not propagate their DNA.  I assume you would want to program 
> an AGI to behave as if it believed in its own consciousness.

If you just use survival as a fitness trait, that's implicit.
 
> I don't expect to convince anyone, even myself, that consciousness does not 
> exist.  Such a belief could be fatal, if it were even possible.

What is consciousness? That word probably means something different
to me than it does to you, or to anyone else in the room.

> That said, I think it does not matter whether an AGI is created by copying 
> someone's brain or by modeling the child developmental process and training.  
> Either way the result is an approximation of a human.  But the moral issue 
> does not go away.  The question of whether such a thing should have human 
> rights puts us in deep conflict with our own biological programming.

Rights are behavioural algorithms, co-evolved over many interaction
rounds among roughly equal players. By the time artificial systems
need rights, the rights will not be granted; the systems will assert
them themselves, rather forcefully if needed.

-- 
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




Re: [agi] About the brain-emulation route to AGI

2007-01-23 Thread Eugen Leitl
On Mon, Jan 22, 2007 at 06:43:08PM -0800, Matt Mahoney wrote:

> I think AGI will be solved when computer scientists, psychologists, and 
> neurologists work together to solve the problem with a combination of 
> computer, human, and animal experiments.

I agree. (Though I would put only computational neuroscientists and
neuroscientists on that list; psychology is too high-level to be
a useful source of constraints.)

-- 
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




Re: [agi] Project proposal: MindPixel 2

2007-01-23 Thread Stephen Reed
Given my experience while employed at Cycorp, I would say that there are two 
ways to work with them.  The first way is to collaborate with Cycorp on a 
sponsored project.  Collaborators are mainly universities (e.g. CMU & Stanford) 
and established research companies (e.g. SRI & SAIC) who have a track record of 
receiving government grants, and whose technologies are complementary to Cyc.   
I would not suggest this approach for MindPixel 2 yet.

The second approach involves no exchange of money.  Cycorp wants to promote its 
ontology (its commonsense vocabulary) and has released its definitions under a 
very permissive license as OpenCyc.  One can also obtain nearly the entire Cyc 
knowledge base with a Research Cyc license, free of charge for research 
purposes, but the RCyc license does not allow you to extract facts and rules 
for MindPixel 2.  

You could contact the Cyc Foundation, which is an independent organization run 
by a friend of mine and former Cycorp employee.  They are seeking to add 
knowledge to Cyc using volunteers, and I believe that they would be very 
receptive to MindPixel 2 provided it uses a form of the OpenCyc vocabulary for 
knowledge representation.

I suggest obtaining an RCyc license to see how the Cyc inference engine handles 
large rule and fact sets, and to see if the Cyc vocabulary fits your idea of a 
commonsense representation language.


- Original Message 
From: Benjamin Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Friday, January 19, 2007 5:35:51 PM
Subject: Re: [agi] Project proposal: MindPixel 2

Hi,

> Do you think Cyc has a rule/fact like "wet things can usually conduct
> electricity" (or "if X is wet then X may conduct electricity")?

Yes, it does...

> I'll also contact some Cyc folks to see if they're interested in
> collaborating...

IMO, to have any chance of interesting them, you will need to be able
to explain to them VERY CLEARLY why your current proposed approach is
superior to theirs -- given that it seems so philosophically similar
to theirs, and given that they have already encoded millions of
knowledge items and built an inference engine and language-processing
front end!

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303





 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Project proposal: MindPixel 2

2007-01-23 Thread Bob Mottram

I'm no expert on automated reasoning, but wasn't the original Mindpixel
based fundamentally on probabilistic representations (coherence values),
whereas Cyc, from what I understand, doesn't represent facts or rules
probabilistically?
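
As a toy contrast (hypothetical schemas of my own invention, not
MindPixel's or Cyc's actual formats), the difference might look like this:

# MindPixel-style: a proposition carries a graded coherence value
# aggregated from yes/no votes. Cyc-style: an assertion is simply
# held true, with no weight attached.

votes = {"Water is wet.": {"yes": 912, "no": 34}}

def coherence(proposition):
    v = votes[proposition]
    return v["yes"] / (v["yes"] + v["no"])

cyc_assertions = {("isa", "Water", "LiquidTangibleThing"): True}

print(coherence("Water is wet."))   # ~0.96, graded truth
print(("isa", "Water", "LiquidTangibleThing") in cyc_assertions)   # crisp truth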

- Bob



On 23/01/07, Stephen Reed <[EMAIL PROTECTED]> wrote:


> [snip: Stephen Reed's message, quoted in full above]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] About the brain-emulation route to AGI

2007-01-23 Thread Richard Loosemore


Eugen,

> So you're engaging in a critique of a field you know very
> little about.

If you can't express yourself without gratuitous sarcasm and allegations 
like the above, you'll just be ignored.


In fact, you misunderstood pretty much everything I tried to say, so it 
would have been a huge amount of work for me to sort out the mess.  I 
suppose I should thank you for being so rude that I don't need to 
bother.  ;-)


Richard Loosemore.

Eugen Leitl wrote:

On Mon, Jan 22, 2007 at 01:11:57PM -0500, Richard Loosemore wrote:

> This debate about the relative merits of the AGI and the Brain Emulation
> methods of building an intelligence seems confused to me.

What is the Brain Emulation method? Are you talking about computational
neuroscience, or something?

> What exactly is meant by a "brain emulation" route anyway?

I'm not entirely sure (I haven't read it all yet), but the very
beginning of this post strikes me as a desperate search for a strawman
to demolish.

> Is it:
>
> A) Copy the exact structure and functioning of the brain's hardware, and
> along the way get a precise understanding of the functional architecture
> of the human brain, at all the various levels at which such an
> architecture needs to be understood.
>
> or
>
> B) Copy the exact structure and functioning of the brain's hardware, but
> ignore the architecture.
>
> ?

Why do you think these are mutually exclusive alternatives? What makes
you think there is such a thing as architecture in the human sense sitting
in there for you to copy a blueprint from?

> An illustration of the difference: You know nothing about electronics,
> but you get hold of an extremely complex radio, and want to build one by
> exactly "emulating" your example. Do you try to do your emulation

Um, wrong comparison. The CNS doesn't require any new physics. Some
approaches start with atomically accurate models of compartments, which
allows you to reach down to an arbitrarily low level of theory in order
to fetch missing parameters. That's bottom-up. Simultaneously, you have
top-down empirical data from neuron and tissue activity. You can use both
to eliminate the large but shrinking unknown in the middle.

> without ever trying to understand the functions of transistors?

Do you think that an atomically accurate copy of a radio wouldn't work?

> The functions of all the various hardware components? The general idea
> of transmission of radio signals? The modular structure of the radio set,

But the brain is not a radio set. Specifically, it is not a human-designed
artifact, and it has different signatures.

> with its tuning, frequency multiplexing, amplitude demodulation and
> other circuits? Do you ignore the functioning of the radio with respect
> to the humans who use it? The existence and distribution of radio
> signal sources?

I don't understand your last two sentences. (In fact, I was going "huh?"
at a rate of about twice per sentence so far, but deconstructing your
post at this level would do no good, so I won't.)

> You could decide to care about all that stuff - that would be Route A -
> or you could ignore it and just emulate the thing by brute force, cubic
> micrometer by cubic micrometer - that would be Route B.

Of course some people do A, and some do B, and several others go for C
and D.

> I presume that the brain emulation community is not being so daft as to
> try B ... but honestly, when I read people talking about this, they

Actually, it is not at all daft to model a cubic micron or so of biology
from first principles, if you can extract nonobservable parameters (such
as the switching behaviour of a particular ion channel type) from an
MD-level simulation. Have you ever considered how to write a learning
simulation that ascends by incrementally building upper abstraction
layers, co-evolving the hardware representation as it goes along?
It's certainly demanding, but not nearly as demanding as a full-blown
AGI by explicit coding.
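
For concreteness, a minimal sketch (mine, purely illustrative) of what a
fetched parameter looks like one level up: a single voltage-gated
channel's open probability, integrated with rate functions that in such a
pipeline would be fitted from MD-level simulation. The textbook
Hodgkin-Huxley potassium-channel forms below are placeholders, not fitted
values.

import math

# One Hodgkin-Huxley-style gating variable for a K+ channel, held at a
# fixed voltage clamp. In a bottom-up pipeline the rate functions
# alpha_n/beta_n (the "nonobservable parameters") would come from
# MD-level simulation; here the classic textbook forms stand in.

def alpha_n(v):   # opening rate (1/ms), v in mV
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

def beta_n(v):    # closing rate (1/ms)
    return 0.125 * math.exp(-(v + 65.0) / 80.0)

def open_probability(v_clamp=-30.0, n=0.05, dt=0.01, steps=5000):
    # Forward-Euler integration of dn/dt = alpha*(1 - n) - beta*n.
    for _ in range(steps):
        n += dt * (alpha_n(v_clamp) * (1.0 - n) - beta_n(v_clamp) * n)
    return n ** 4   # four independent gates must all be open

print(open_probability())   # steady-state open probability at -30 mV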


> often seem to be assuming a black and white division between A and B,
> and more often than not they ARE assuming that what "brain emulation"
> means is B - the dumb brute force method.

Maybe you're reading the wrong people. Or misunderstanding what they say.

> I have to say that if B is what is meant, the idea seems insane. You
> only need to get one little transistor junction out of place in your
> simulation of the radio, and the entire thing might not work ... and if
> you know nothing about the functionality, you are up the proverbial
> creek. Ditto for the brain.

The brain is not a radio. It's designed to work in a noisy environment,
so it's autohomeostating. You don't have to tune the oscillator precision
down to ppb levels in order for it to work, or have it break down
horribly.

> How many errors can you afford to make before the brain simulation
> becomes just as useless as a broken radio? The point is WHO KNOWS?! It

Of course injecting errors into the simulation and looking at trajectory

Re: [agi] Project proposal: MindPixel 2

2007-01-23 Thread Stephen Reed
Right, Cyc's deductive inference engine does not support probabilistic 
reasoning.  But there is no obstacle to extending Cyc's vocabulary with the 
probabilistic representation you want and then using an inference engine of 
your own design.

For my AGI project I use the OpenCyc vocabulary and content, but with my own 
object store (a relational database) and simple inference (look-up and 
subsumption within contexts).  The Java dialog application that I am building 
does not require any more sophisticated deduction, so I am postponing any 
complex inference until I can teach those algorithms to the system using 
English.
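
To make "look-up and subsumption within contexts" concrete, here is a toy
reconstruction (illustrative only; this is not texai's actual schema or
code):

# An assertion about a general term also answers queries about its
# specializations, within a context. The genls table mirrors an
# OpenCyc-style generalization hierarchy.

genls = {            # term -> direct generalization
    "Dog": "Mammal",
    "Mammal": "Animal",
}

assertions = {       # (context, term, predicate) -> asserted true
    ("CommonSenseCtx", "Animal", "canMove"): True,
}

def ancestors(term):
    while term is not None:
        yield term
        term = genls.get(term)

def holds(context, term, predicate):
    # Look-up on the term itself, then on each generalization in turn.
    return any((context, t, predicate) in assertions for t in ancestors(term))

print(holds("CommonSenseCtx", "Dog", "canMove"))   # True, via Dog -> Mammal -> Animal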
-Steve
http://sf.net/projects/texai

- Original Message 
From: Bob Mottram <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, January 23, 2007 1:13:14 PM
Subject: Re: [agi] Project proposal: MindPixel 2


> [snip: Bob Mottram's message and the rest of the quoted thread, given in
> full above]





 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] (video)The Future of Cognitive Computing

2007-01-23 Thread Matt Mahoney

--- Eugen Leitl <[EMAIL PROTECTED]> wrote:

> On Mon, Jan 22, 2007 at 05:26:43PM -0800, Matt Mahoney wrote:
> 
> > The issues of consciousness have been discussed on the singularity list. 
> These are hard questions.
> 
> I'm not sure questions about anything as ill-defined as consciousness
> are meaningful.

The question arises when we need to make moral decisions, such as: is it moral
to upload a human brain into software and then manipulate that data in
arbitrary ways, e.g. to simulate pain?

I think consciousness is poorly defined because any attempt to define it leads
to the conclusion that it does not exist.  You know what consciousness is, but
try to define it.

1. Consciousness is the little person in your head that observes everything
you sense and decides everything you do.

2. Consciousness (or self awareness) is what makes you different from everyone
else.

3. Consciousness is what makes the world today different from the world before
you were born.

4. If an exact copy of you was made, atom for atom, replicating all of your
memories and behavior, then the only distinction between you and your copy
would be that you have a consciousness.

But with any of these definitions, it becomes clear that there is no physical
justification for consciousness.  You believe that other people have
consciousnesses because you know that you do, and others are like you.  But
there is no way to know for sure.  How do you distinguish between a person who
has self awareness and one who only behaves as if he or she does?

Perhaps we can drop the insistence that consciousness exists.  Then a possible
definition would be any behavior consistent with a belief in self awareness or
free will.  But this has problems too.

> > - Does a thermostat want to keep the room at a constant temperature, or
> does it only behave as if that is what it wants?  (Ask this question about
> human behavior).
> 
> I don't understand your question. It depends on your definition
> of "want".

I mean that if an agent has goal-directed behavior, then it behaves as if it
"wants" to satisfy its goals.

I use this example to show that goal-directed behavior is not a criterion for
consciousness.
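
The thermostat's entire "wanting", written out (an illustrative sketch):

# The complete "goal-directed behavior" of a thermostat: a comparison
# and a switch. It behaves exactly as if it wants the room at the
# setpoint; that behavior is the whole content of the "want".

def thermostat_step(temperature, setpoint=20.0, hysteresis=0.5):
    if temperature < setpoint - hysteresis:
        return "heat_on"
    if temperature > setpoint + hysteresis:
        return "heat_off"
    return "no_change"

print(thermostat_step(18.2))   # "heat_on"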

Do animals have consciousness?  Does an embryo?  These questions are
controversial.  AGI will raise new controversies.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


[agi] "10 Questions for György Buzsáki "

2007-01-23 Thread Ben Goertzel



Begin forwarded message:


From: Damien Broderick <[EMAIL PROTECTED]>
Date: January 23, 2007 3:37:32 PM EST
To: "'ExI chat list'" <[EMAIL PROTECTED]>,  
[EMAIL PROTECTED]

Subject: [extropy-chat] "10 Questions for György Buzsáki"
Reply-To: ExI chat list <[EMAIL PROTECTED]>

http://www.gnxp.com/blog/2007/01/10-questions-for-gyki.php

interesting interview e.g.:


6. Your discussion of the brain's first rhythm could make one feel
that we are close to understanding when meaningful cognition begins.
Does your knowledge of EEG patterns and their underpinnings influence
your thinking about beginning-of-life, end-of-life, or even animal
rights debates?

I believe that cognition begins once the 1/f features of cortical
rhythms emerge, because these dynamics represent global (i.e.,
distributed) computation, and only structures with these features
appear to generate conscious experience. The ontogenetic appearance
of 1/f dynamics coincides with the emergence of long-range
cortico-cortical projections. In the newborn human the 1/f global
feature of the EEG is already present. On the other hand, in preterm
babies, depending on the gestational age, long seconds of neuronal
silence alternate with short, spatially localized oscillatory bursts
(known as "delta brush"), as in sharks and lizards. These localized,
intermittent cortical patterns in the premature brain, and similar
ones in the strictly locally organized adult cerebellum, cannot give
rise to conscious awareness, no matter their size. From this
perspective, the structure-function relations between the small-world
network-like features of the cerebral cortex and the resultant global
rhythms appear to be necessary conditions for awareness. Earlier
developmental stages without these properties simply do not have the
necessary ingredients of the product we call cognition.
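
(A quick way to see what the 1/f claim means operationally: estimate the
log-log slope of a signal's power spectrum. The sketch below is
illustrative, assumes numpy is available, and uses integrated white noise
as a stand-in for an EEG trace; true 1/f dynamics would show a slope near
-1, while the stand-in gives roughly -2.)

import numpy as np

# Fit the spectral slope of a signal in log-log coordinates.

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(2 ** 14))      # brown-noise stand-in

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1.0 / 256.0)   # assume 256 Hz sampling

band = (freqs > 1.0) & (freqs < 40.0)                 # typical EEG band
slope = np.polyfit(np.log(freqs[band]), np.log(power[band]), 1)[0]
print(slope)   # near -1 would be the 1/f signature; this toy gives ~ -2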

___
extropy-chat mailing list
[EMAIL PROTECTED]
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303