Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Colin Hales

Try this one ...
http://www.bentham.org/open/toaij/openaccess2.htm
If the test subject can be a scientist, it is an AGI.
cheers
colin


Steve Richfield wrote:

Deepak,

An intermediate step is the reverse Turing test (RTT), wherein 
people or teams of people attempt to emulate an AGI. I suspect such a 
competition would yield a better idea of what to expect from an AGI.


I have attempted in the past to drum up interest in an RTT, but so far, 
no one seems interested.


Do you want to play a game?!

Steve

On Sun, Jul 18, 2010 at 5:15 AM, deepakjnath deepakjn...@gmail.com wrote:


I wanted to know if there is any benchmark test that can really
convince the majority of today's AGIers that a system is a true AGI?

Is there some real prize like the XPrize for AGI or AI in general?

thanks,
Deepak










Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Colin Hales

Ed,
Comments interspersed below:

Ed Porter wrote:


Colin,

 


Here are my comments on the following parts of your post below:

 


===Colin said==

I merely point out that there are fundamental limits as to how 
computer science (CS) can inform/validate basic/physical science - (in 
an AGI context, brain science). Take the Baars/Franklin IDA 
project... It predicts nothing neuroscience can poke a stick at...


 


===ED's reply===

Different AGI models can have different degrees of correspondence to, 
and different explanatory relevance to, what is believed to take place 
in the brain.  For example Thomas Serre's PhD thesis "Learning a 
Dictionary of Shape-Components in Visual Cortex: Comparison with 
Neurons, Humans and Machines", at 
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf 
, is a computer simulation which is rather similar to my concept of 
how a Novamente-like AGI could perform certain tasks in visual 
perception, and yet it is designed to model the human visual system to 
a considerable degree.  It shows that a certain model of how Serre and 
Poggio think a certain aspect of the human brain works does in fact 
work surprisingly well when simulated in a computer.


 

A surprisingly large number of brain science papers are based on 
computer simulations, many of which are substantially simplified 
models, but they do give neuroscientists a way to poke a stick at 
various theories they might have for how the brain operates at various 
levels of organization.  Some of these papers are directly relevant to 
AGI.  And some AGI papers are directly relevant to providing answers 
to certain brain science questions.


You are quite right! Realistic models can be quite informative and feed 
back - suggesting new empirical approaches. There can be great 
cross-fertilisation.


However, the point is irrelevant to the discussion at hand.

The phrase "does in fact work surprisingly well when simulated in a 
computer" illustrates the confusion. 'Work'? According to whom? 
'Surprisingly well'? By what criteria? The tacit assumption is that the 
models thus implemented on a computer will/can 'behave' 
indistinguishably from the real thing, when what you are observing is a 
model of the real thing, not the real thing.


*HERE* - If you are targeting AGI with a benchmark/target of human intellect 
or problem solving skills, then the claim made for any/all models is that 
the models can attain that goal. A computer implements a model. To make a 
claim that a model completely captures the reality upon which it was 
based, you need to have a solid theory of the relationships between 
models and reality that is not wishful thinking or assumption, but solid 
science. Here's where you run into the problematic issue that basic 
physical sciences have with models. 

There's a boundary to cross - when you claim to have access to human 
level intellect - then you are demanding an equivalence with a real 
human, not a model of a human.


 


===Colin said==

I agree with your:

/At the other end of things, physicists are increasingly viewing 
physical reality as a computation, and thus the science of computation 
(and communication which is a part of it), such as information theory, 
have begun to play an increasingly important role in the most basic of 
all sciences./



===ED's reply===

We are largely on the same page here

 


===Colin said==

I disagree with:

But the brain is not part of an eternal verity.  It is the result of 
the engineering of evolution. 


Unless I've missed something ... The natural evolutionary 
'engineering' that has been going on has /not/ been the creation of a 
MODEL (aboutness) of things - the 'engineering' has evolved the 
construction of the /actual/ things. The two are not the same. The 
brain is indeed 'part of an eternal verity' - it is made of natural 
components operating in a fashion we attempt to model as 'laws of 
nature'...


 


===ED's reply===

If you define engineering as a process that involves designing 
something in the abstract --- i.e., in your "a MODEL (aboutness) of 
things" --- before physically building it, you could claim evolution 
is not engineering. 

 

But if you define engineering as the designing of things (by a process 
of whatever method, intelligent or not) to solve a set of problems or 
constraints, evolution does perform engineering, and the brain was 
formed by such engineering.


 

How can you claim the human brain is an eternal verity, when it is 
believed to have existed in anything close to its current form for only 
the last 30 to 100 thousand years, and there is no guarantee 
how much longer it will continue to exist.  Compared to much of what 
the natural sciences study, its existence appears quite fleeting.


I think this is just a terminology issue. The 'laws of nature' are the 
eternal verity, to me. The dynamical output they represent - of course 
that does whatever it does. The 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Colin Hales

Ed,
I wasn't trying to justify or promote a 'divide'. The two worlds must be 
better off in collaboration, surely? I merely point out that there are 
fundamental limits as to how computer science (CS) can inform/validate 
basic/physical science - (in an AGI context, brain science). Take the 
Baars/Franklin IDA project. Baars invents 'Global Workspace' = a 
metaphor of apparent brain operation. Franklin writes one. Afterwards, 
you're standing next to it, wondering about its performance. What 
part of its behaviour has any direct bearing on how a brain works? It 
predicts nothing neuroscience can poke a stick at. All you can say is 
that the computer is manipulating abstractions according to a model of 
brain material. At best you get to be quite right and prove nothing. If 
the beastie also underperforms then you have seeds for doubt that also 
prove nothing.


CS as 'science' has always had this problem. AGI merely inherits its 
implications in a particular context/speciality. There's nothing bad or 
good - merely justified limits as to how CS and AGI may interact via 
brain science.


I agree with your:

/At the other end of things, physicists are increasingly viewing 
physical reality as a computation, and thus the science of computation 
(and communication which is a part of it), such as information theory, 
have begun to play an increasingly important role in the most basic of 
all sciences./


I would advocate physical reality (all of it) as /literally /computation 
in the sense of information processing. Hold a pencil up in front of 
your face and take a look at it... realise that the universe is 
'computing a pencil'. Take a look at the computer in front of you: the 
universe is 'computing a computer'. The universe is literally computing 
YOU, too. The computation is not 'about' a pencil, a computer, a human. 
The computation IS those things. In exactly this same sense I want the 
universe to 'compute' an AGI (inorganic general intelligence). To me, 
then, this is /not/ manipulating abstractions ('aboutnesses') - which is 
the sense meant by CS generally and what actually happens in reality in CS.


So despite some agreement as to words - it is in the details we are 
likely to differ. The information processing in the natural world is not 
that which is going on in a model of it. As Edelman said (1), /A theory 
to account for a hurricane is not a hurricane/. In exactly this way a 
computational-algorithmic process about intelligence cannot a-priori 
be claimed to be the intelligence of that which was modelled. Or - put 
yet another way: That {THING behaves 'abstract-RULE-ly'} does not 
entail that {anything manipulated according to abstract-RULE will become 
THING}. The only perfect algorithmic (100% complete information content) 
description of a thing is the actual thing, which includes all 
'information' at all hierarchical descriptive levels, simultaneously.


I disagree with:

But the brain is not part of an eternal verity.  It is the result of 
the engineering of evolution. 


Unless I've missed something ... The natural evolutionary 'engineering' 
that has been going on has /not/ been the creation of a MODEL 
(aboutness) of things - the 'engineering' has evolved the construction 
of the /actual/ things. The two are not the same. The brain is indeed 
'part of an eternal verity' - it is made of natural components operating 
in a fashion we attempt to model as 'laws of nature'. Those models, 
abstracted and shoehorned into a computer - are not the same as the 
original. To believe that they are is one of those Occam's Razor 
violations I pointed out before my xmas shopping spree (see previous-1 
post).

---

Anyway, for these reasons, folks who use computer models to study human 
brains/consciousness will encounter some difficulty justifying, to the 
basic physical sciences, claims made as to the equivalence of the model 
and reality. That difficulty is fundamental and cannot be 'believed 
away'. At the same time it's not a show-stopper; merely something to be 
aware of as we go about our duties. This will remain an issue - the only 
real, certain, known example of a general intelligence is the human. The 
intelligence originates in the brain. AGI and brain science must be 
literally joined at the hip or the AGI enterprise is arguably 
scientifically impoverished wishful thinking. Which is pretty much what 
Ben said...although as usual I have used too many damned words!


I expect we'll just have to agree to disagree... but there you have it :-)

colin hales
(1) Edelman, G. (2003). Naturalizing consciousness: A theoretical 
framework. Proc Natl Acad Sci U S A, 100(9), 5520-5524.



Ed Porter wrote:


Colin,

 

From a quick read, the gist of what you are saying seems to be that 
AGI is just engineering, i.e., the study of what man can make and 
the properties thereof, whereas science relates to the eternal 
verities of reality

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales

J. Andrew Rogers wrote:


On Dec 18, 2008, at 10:09 PM, Colin Hales wrote:
I think I covered this in a post a while back but FYI... I am a 
little 'left-field' in the AGI circuit in that my approach involves 
literal replication of the electromagnetic field structure of brain 
material. This is in contrast to a computational model of the 
electromagnetic field structure.



Here is a silly question:

If you can specify it well enough to implement the desired result in 
hardware, why can't you implement it in software?  It is equivalent, 
after all.


And if you can't specify the dynamic well enough to implement it 
virtually, why would there be any reason at all to believe that it 
will do anything interesting?



The hallmark of a viable AGI theory/design is that you can explain why 
it *must* work in sufficient detail to be implementable in any medium.



J. Andrew Rogers



/If you can specify it well enough to implement the desired result in 
hardware, why can't you implement it in software?  It is equivalent, 
after all./ 


The answer to this is that you /can/ implement it in software. But you 
won't do that because the result is not an AGI, but an actor with a 
script. I actually started AGI believing that software would do it. When 
I got into the details of the issue of qualia (their role and origins) I 
found that software alone would not do the trick.


If an AGI is to be human equivalent, it must be able to do what humans 
do. One of those behaviours is science. Getting the 'logical dynamics' 
of software to cohere with a 'law of nature' is, I believe, impossible 
for software alone, because the software model of the dynamics cannot 
converge on externally located and intrinsically unknown (there is no 
model!) knowledge. How a software model of a modeller of the 
intrinsically unknown (a scientist) can work is something I have had to 
grapple with. In the end I had to admit that software seemed less 
plausible than actually implementing the full physics of brain material. 
Hence my EM approach.


The simplest way to get to the position I inhabit is to consider that 
the electromagnetic field has access to more information (about the 
world outside the agent) than that available through peripheral nerve 
signaling. It's the additional information that is thrown away with a 
/model/ of the electromagnetic field. It's replaced with the arbitrary 
and logically irrelevant electromagnetic fields of the computer 
substrate (basically noise). The spatially expressed EM field inherits 
(its dynamics is altered by) information from the external world directly 
via space.


The EM fields play a very important role in the dynamics, adaptation and 
regulation of neural activity, and none of this is captured by existing 
neural models - as it acts /orthogonally/, coupling neurons spatially 
via EM events, not dendrite/axon routes. It's outside the neurons in the 
spaces in between.  It's the reason cortex is layered and columnar. 
Cortex is 50% astrocytes by volume. They are all charged up to -80 mV 
and are intimately involved in brain dynamics. Because the boundary of 
the cells and space is as much an information source as all the 
peripheral nerve 'boundaries' (the surface of your body), and the 
boundary is literally electromagnetism (there's nothing else there!), 
you can't model it for the same reason you can't model the peripheral 
nerve signals (you have to have the EM fields for the same reason that 
you need to have a retina or camera)... by extrapolation everything else 
follows.


The EM coupling effects are the subject of my PhD and will be out in 
detail ASAP. Bits of it will be published - It's been a real trial to 
get the work into print. I tried to get a publication into the AGI 
conference but ran out of time.


The original Hodgkin-Huxley model (upon which all modern neural 
modelling is based) threw out (or at least modelled out) the EM field. 
If you look in the original 1952 papers you'll see there are batteries, 
non-linear, time-varying resistors, and - ignored and off to one side 
all by itself, waiting patiently - a little capacitor. That little 
capacitor hides the entire EM field spatial behaviour. If you drew the 
model properly, all the components in the model would actually span the 
dielectric of the capacitor between its little plates. The capacitor is 
actually linked to lots of other capacitors in a large 3-dimensional 
mesh. You can't delete (via a model) the capacitors because their 
dynamics is controlled (very, very lightly but significantly) by the 
external world.
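
For reference, a minimal single-compartment sketch of that standard model 
(in Python, with textbook-style 1952 parameters) is below. It is only meant 
to show where that 'little capacitor' sits: the C_m dV/dt term is the sole 
trace of the membrane EM field, and no spatial field structure appears 
anywhere. It illustrates the conventional model, not the EM-field extension 
argued for here.

# Minimal single-compartment Hodgkin-Huxley sketch (textbook-style parameters).
# Illustrative only: the "little capacitor" is the C_m * dV/dt term below --
# the membrane EM field enters the standard model only through this lumped
# capacitance, with no spatial field structure represented at all.
import numpy as np

C_m = 1.0                             # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

def rates(V):
    # Voltage-dependent rate constants for the gating variables m, h, n.
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(I_ext=10.0, dt=0.01, T=50.0):
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(T / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        # The nonlinear, time-varying "resistors": the ionic currents.
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        # The capacitor: C_m dV/dt = I_ext - (sum of ionic currents).
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        trace.append(V)
    return np.array(trace)

print(simulate().max())   # peak membrane potential (mV) of the resulting spikes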


So...My approach puts the fully spatially detailed EM field back into 
the model. The little HH capacitor turns into an entire new complex 
model operating orthogonally to the rest of the circuit. That capacitor 
radically changes the real model of brain material. There is spatial 
coupling to other neurons that happens using the field that has been 
averaged out and confined inside the dielectric

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales
). The laws 
in T' are the rules of the 'natural' CA.


For the physics buffs amongst us: DAS explains why there are 2 kinds of 
'theory of everything': MATHS (uber-group-maths-theories-of-appearances) 
and 'STUFF' (branes, strings, loops, preons etc. - theories of 
structure). The reason for the 2 types is that the former is the 
T-aspect version of the latter, which is the T' aspect equivalent. The 
poor buggers doing the T' aspect don't know they're actually fighting over 
the same thing!


Only from a DAS perspective does EM-field (the U(1) 
group)-as-consciousness acquire any explanatory authority. /So to 
explain my chips I have to change the whole of science!/ That's 1 
impossible thing before breakfast. Only 5 to go.


cheers
colin hales










Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales



J. Andrew Rogers wrote:

 On Dec 19, 2008, at 12:13 PM, Colin Hales wrote:
 The answer to this is that you can implement it in software. But you 
won't do that because the result is not an AGI,  but an actor with a 
script. I actually started AGI believing that software would do it. When 
I got into the details of the issue of qualia (their role and origins) I 
found that software alone would not do the trick.



 Nonsense, an algorithmic system is describable entirely based on 
input and output without any regard for its internal structure.  If two 
blackbox systems produce identical output based on identical input, then 
they are mathematically equivalent in every meaningful sense even if 
they have radically different internal construction.
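
As a toy illustration of the equivalence being claimed here (a hypothetical 
example, not drawn from the thread): two routines with radically different 
internal construction but identical input/output behaviour, and hence 
equivalent in this extensional, black-box sense.

# Two "black boxes" with different internals and identical I/O behaviour.
# Hypothetical example for illustration only.
def sum_loop(n: int) -> int:
    # Sum 1..n by explicit iteration.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n: int) -> int:
    # Sum 1..n by Gauss's closed form.
    return n * (n + 1) // 2

# Extensionally indistinguishable over the tested inputs.
assert all(sum_loop(n) == sum_formula(n) for n in range(1000))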



*I'm not clear how you came to the conclusion that I was discussing an 
'algorithmic system'. *


You say "actor with a script" as if that means something important, 
ignoring that every process in our universe is necessarily equivalent to 
an actor with a script.  Your magical EM chip is, in fact, an actor 
with a script.



*If you mean that the laws of nature persist and govern the regularities 
around us and the nature of ourselves - fine! We are all 'actors' in that 
sense.

What I meant specifically is that we know the outcome already. It's the 
script. WE wrote it and gave it to our AGI. Consider my benchmark 
behaviour: the artificial scientist. What 'algorithmic script' dictates 
the behaviour that will result in scientific encounter with the 
intrinsically a-priori unknown? If you can write that script you must 
know the outcome already or be willing to predefine the nature of all 
novelty!*

 The simplest way to get to the position I inhabit is to consider 
that the electromagnetic field has access to more information (about the 
world outside the agent)  than that available through peripheral nerve 
signaling. It's the additional information that is thrown away with a 
model of the electromagnetic field.



 This does not even make sense.  Either the software model captures 
the measurable properties of the EM field or it does not, but either way 
it does not support your proposal.  In the former case, the external 
input and dynamic *must* be measurable and therefore can be reflected in 
the software model, and in the latter case it is nothing more than 
handwaving about something you are asserting exists in the complete 
absence of material evidence. I'm having a hard time accepting that 
there is something you can specify and measure that magically has no 
useful software description.  That is not even supposed to be possible 
as a kind of basic mathematical theorem thing.



 I mean, you are asserting that some very specific inputs to the 
system are not being modeled, and if you know this then you can very 
easily add them to the software-modeled system.  You have not explained 
why this is not possible, merely asserted it.


 J. Andrew Rogers




*My position is that the EM field as it exists, literally, in situ in 
the brain, is enacting exactly a process of 'input'. To us it appears 
as, for example, 'visual perception' (that you are using to read this). 
For the same reason you can't model your inputs - because you don't 
know them - you can't model the EM field.


Yes this looks like an assertion BUT:

a) I have preliminary physiological evidence that the EM fields are 
directly coupled to the distant natural world. It comes from studies of 
visual acuity in humans. I can detail it if you like.
b) The mapping from DISTANT WORLD to RETINAL IMPACT is MANY:ONE. This is 
an inverse problem and unsolvable in principle. Yet humans solve it. 
Therefore vision, to a human, cannot be an inverse problem - it can't 
be! So we must have extra data. There are only 2 places to get it: (i) 
SENSORY I/O and (ii) SPACE. We already use (i), so SPACE it is. A 
Sherlock Holmes outcome. (A toy sketch of the many-to-one point follows 
this list.)
c) The human brain goes to extravagant lengths (and lots of energy 
expenditure) to micro-manipulate an incredibly complex field pattern in 
space. That field pattern is gone in any model of it.
d) 75 years of computer-based AGI failure has sent me a message that 
no amount of hubris on my part can overcome. As a scientist I must be 
informed by empirical outcomes, not dogma or wishful thinking.
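
The toy sketch referred to in (b): a minimal numerical illustration, with 
hypothetical dimensions chosen purely for the example (a 100-dimensional 
'world' projecting onto a 16-dimensional 'retina'), of how many distinct 
distant-world states yield exactly the same retinal data, which is what 
makes the inversion ill-posed.

# Many-to-one projection: distinct "world" states, identical "retinal" data.
# The dimensions (100 -> 16) are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((16, 100))      # world (100-dim) -> retina (16-dim)

world_a = rng.standard_normal(100)
# Add any vector from the null space of P: the retinal image cannot change.
null_basis = np.linalg.svd(P)[2][16:]   # rows spanning the null space of P
world_b = world_a + null_basis.T @ rng.standard_normal(null_basis.shape[0])

print(np.allclose(P @ world_a, P @ world_b))   # True: same retinal data
print(np.allclose(world_a, world_b))           # False: different worlds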


Apart from that: Over XMAS I am finishing the proposal for an 
experiment to verify the EM field spatial link. I will be getting money 
from NICTA. I hope! If anyone out there has US$300K to spare and wants 
some science excitement let me know! I'll have the results in a couple 
of years.


A human scientist (my benchmark, testable behaviour) is a 'black box' 
with internal states somehow able to resonate (cohere) accurately with 
the distant and novel natural world, /not the inputs/. The inputs are 
redundantly related to the distant natural world on multiple levels.


On balance - I go with the brain as my exemplar. There seems to be 
sufficient available evidence to doubt

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales



Ben Goertzel wrote:
 
Goodness.  I have to tell you, Colin, your style of discourse just 
SOUNDS so insane and off-base, it requires constant self-control on my 
part to look past that and focus on any interesting ideas that may 
exist amidst all the peculiarity!!
 
And if **I** react that way, others must react that way more strongly, 
because I'm rather tolerant of wackiness of most sorts...
 
So, I must suggest that if you want folks to take your ideas 
seriously, you should try to find different ways of expressing them...
 
ben


See the recent post and current neuroscience experimentation.

e.g. Milstein, J. N. and Koch, C. (2008). Dynamic moment analysis of the 
extracellular electric field of a biologically realistic spiking neuron. 
Neural Computation, 20(8), 2070-84.


There's a long way to go and of course dual aspect science has to be 
established to make any sense of it at all.


I can't help that it's new, confronting and awkward. I didn't make it 
up. I just looked. This area has been problematic for a reason. That 
reason is /us/ (scientists, I mean).


Be very careful in using the words 'extra-sensory' and 'wacky'. They 
imply some kind of imagined space-cadet quackery and fringe science.


If a quite clear (but merely relatively unexplored) physical phenomenon 
enacted at the boundary of the atoms in brain material and space can be 
held accountable for a physically testable outcome - then the word 
'extra' will be invalid. There will be 6 basic 'senses'. No magic, just 
physics. I am as dry as an empirically informed scientist can get. You 
can't deliver any evidence at all that the processes I am investigating 
are invalid. Until they are properly investigated I'd prefer to leave 
these words out of the discourse - and I'll refrain from the use of any 
similar words in reference to the heartfelt beliefs of others on this list.


cheers
colin













Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Colin Hales



J. Andrew Rogers wrote:


On Dec 19, 2008, at 5:35 PM, Ben Goertzel wrote:
The problem is that **there is no way for science to ever establish 
the existence of a nonalgorithmic process**, because science deals 
only with finite sets of finite-precision measurements.
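
One way to see the force of that claim (a toy aside, not from the original 
exchange): any finite set of finite-precision measurements is reproduced 
exactly by a trivially algorithmic process, a lookup table, so no such data 
set can by itself certify a nonalgorithmic process.

# A lookup table reproduces any finite, finite-precision data set exactly,
# so the data alone cannot establish that the generating process was
# nonalgorithmic. Hypothetical readings, for illustration only.
measurements = {0.0: 1.42, 0.1: 1.37, 0.2: 1.51}   # (time, reading) pairs

def reproduce(t: float) -> float:
    # An entirely algorithmic "model" that matches the observations exactly.
    return measurements[t]

assert all(reproduce(t) == v for t, v in measurements.items())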



I suppose it would be more accurate to state that every process we can 
detect is algorithmic within the scope of our ability to measure it.  
Like with belief in god(s) and similar, the point can then be raised 
as to why we need to invent non-algorithmic processes when ordinary 
algorithmic processes are sufficient to explain everything we see.  
Non-algorithmic processes very conveniently have properties identical 
to the supernatural, and so I treat them similarly.  This is just 
another incarnation of the old unpredictable versus random discussions.


Sure, non-algorithmic processes could be running the mind machinery, 
but then so could elves, unicorns, the Flying Spaghetti Monster, and 
many other things that it is not necessary to invoke at this time.  
Absent the ability to ever detect such things and lacking the 
necessity of such explanations, I file non-algorithmic processes with the 
vast number of other explanatory memes of woo-ness of which humans are 
fond.


Like the old man once said, entia non sunt multiplicanda praeter 
necessitatem - entities are not to be multiplied beyond necessity.


Cheers,

J. Andrew Rogers

One of the tricks with Occam's razor is knowing when you've fallen foul 
of it. Consider the logic:


(a) Human scientists are intelligent (a useful AGI benchmark) and made 
of the natural world.

(b) Humans (scientists) model the natural world.
(c) The scientific models of the natural world are amenable to 
algorithmic representation.

does NOT entail that
(d) systems that involve algorithmic representation will necessarily be 
as intelligent as a human scientist (capable of human science).


Humans are not algorithmic representations or models of a scientist. We 
are actual scientists. There's some kind of serious confusion going on 
here between:
(i) intelligence capable of construction of algorithmic regularities 
(humans)

and
(ii) intelligence made of the algorithmic regularities thus constructed 
(a computationalist AGI)


(ii) requires subscription to a belief in something extra: that the 
universe is constructed of abstractions or is a computer running the 
laws of nature as a program or that magical emergentism is a law of 
nature or any one of 100 other oddities.


Furthermore (i) does NOT entail that the universe is non-algorithmic! I 
would say that the universe is an absolutely brilliant and wonderful 
huge and exquisite algorithm. *But it's NOT a model of anything.* It IS 
the thing itself. I choose to do (i) by using the same actual processes 
that humans are made of. That /is/ capable of human intelligence (scientists). 
/That I know for sure./ That is the only thing we know for sure. I also 
know that I have to invent (believe) something unproven and extra to 
make (ii) a route to AGI. Occam's razor prevents me from taking that 
position.


So the argument cuts both ways!

1+1=FROG.
On the planet Blortlpoose the Prolog language does nothing but construct 
cakes. :-)
This algorithmic nonsense was brought to you by the natural brain 
electrodynamics of Colin Hales' brain.


and ALL of it (humans, Blortlpoose and cakes) is made of whatever the 
universe is actually made of, which is NOT abstract representations 
produced by itself of itself.


This is just going to go round and round as usual so I'll jump off 
the merry-go-round for now.


*/Xmas is afoot! There's meaningless consumerism and pointless rituals 
to be enacted! Massive numbers of turkeys and chickens must die!  :-)/*


cheers
colin hales





Re: [agi] Building a machine that can learn from experience

2008-12-18 Thread Colin Hales


Steve Richfield wrote:

Richard,

On 12/18/08, *Richard Loosemore* r...@lightlink.com wrote:


Rafael C.P. wrote:

Cognitive computing: Building a machine that can learn from
experience
http://www.physorg.com/news148754667.html


Neuroscience vaporware.

 
It isn't neuroscience yet, because they haven't done any science yet.
 
It isn't vaporware yet because they have made no claims of functionality.
 
In short, it has a LONG way to go before it can be considered to be 
neuroscience vaporware.
 
Indeed, this article failed to make any case for any rational hope for 
success.
 
Steve Richfield
DARPA buys G. Tononi for $4.9 million!  For what amounts to little 
more than vague hopes that any of us here could have dreamed up. Here I 
am, up to my armpits in an actual working proposition with a real 
science basis... scrounging for pennies. Hmmm... maybe if I sidle up and 
adopt an aging Nobel prizewinner... maybe that'll do it.


nah. too cynical for the festive season. There's always 2009! You never 
know


merry xmas, all

Colin






Re: [agi] Building a machine that can learn from experience

2008-12-18 Thread Colin Hales

YKY (Yan King Yin) wrote:

DARPA buys G. Tononi for $4.9 million!  For what amounts to little more
than vague hopes that any of us here could have dreamed up. Here I am, up to
my armpits in an actual working proposition with a real science basis...
scrounging for pennies. Hmmm... maybe if I sidle up and adopt an aging Nobel
prizewinner... maybe that'll do it.

nah. too cynical for the festive season. There's always 2009! You never
know



You talked about building your 'chips'.  Just curious, what are you
working on?  Is it hardware-related?

YKY


  

Hi,
I think I covered this in a post a while back but FYI... I am a little 
'left-field' in the AGI circuit in that my approach involves literal 
replication of the electromagnetic field structure of brain material. 
This is in contrast to a computational model of the electromagnetic 
field structure. The process involves a completely new chip design which 
looks nothing like what we're used to. I have a crucial experiment to 
run over the next 2 years. The results should be (I hope) the basic 
parameters for an early miniaturised prototype.


The part of my idea that freaks everyone out is that there is no 
programming involved. You can adjust the firmware settings for certain 
intrinsic properties of the dynamics of the EM fields. But none of these 
things correspond in any direct way to 'knowledge' or intelligence. The 
chips (will) do what brain material does, but without all the bio-overheads.


The thing that caught my eye in the thread subject "Building a machine 
that can learn from experience"... is that if you asked Tononi or anyone 
else exactly where the 'experience' is, they wouldn't be able to tell you. 
The EM field approach deals with this very question /first/. The net EM 
field structure expressed in space literally /is/ the experiences. All 
learning is grounded in it. (Not input/output signals.)


I wonder how anyone can claim to have a machine that "learns from 
experience" when you haven't really got a cogent, physical, 
biologically plausible, neuroscience-informed view of what 'experience' 
actually is. But there you go... guys like Tononi get listened to. And 
good luck to them!


So I guess my approach is likely to remain a bit of an oddity here until 
I get my results into the literature. The machines I plan to build will 
be very small and act like biology... I call them artificial fauna. My 
fave is the 'cane toad killer' that gets its kicks by killing nothing 
but cane toads (these are a major eco-disaster in northern Australia). 
They can't reproduce and their learning capability (used to create them) 
is switched off. It's a bit like the twenty-something neural dieback in 
humans... after that you're set in your ways.


Initially I want to build something 'ant-like' to enter into RoboCup as 
a proof of concept... anyway, that's the plan.


So the basics are: all hardware. No programming. The chips don't exist 
yet, only their design concept (in a provisional patent application just 
now).


I think you get the idea. Thanks for your interest.

cheers
colin






Re: [agi] Should I get a PhD?

2008-12-17 Thread Colin Hales

Hi,
I went through this exact process of vacillation in 2003.

I have a purely entrepreneurial outcome in mind, but found I needed to 
have folk listen to me. In order that some comfort be taken (by those 
with $$$) in my ideas, I found, to my chagrin...that having a 'license 
to think = PhD' (as opposed to actually being able to think), seemed to 
be necessary. Unfortunately the testicular fortitude of the folks 
needs careful attention - which is more than merely your assurances that 
you can think and have a clue.


So since 2004 I have been on a weird journey where my ideas have not 
changed, but my ability to convey the concepts to academics and 
$$$buckets has improved. Throughout this whole process my ability to 
think never altered a bit!


Having decided to do a PhD... in being in NICTA and Melbourne Uni I also 
encountered a whole pile of other course opportunities: 
commercialisation, intellectual property, media handling, presentation 
skills, interpersonal communications, multicultural and globalised 
research project management, grant applications... all of which have 
rounded out my skills to cope with the post-doctoral process of getting 
my chips built.


I have also met/communicated with a raft of luminaries that I would 
otherwise not have encountered.


So - as a vehicle to connect to established $flows and get $creds - it 
seems that for me, anyway, a PhD is a necessary step.


Mind you... if I had already had the $$ back in 2003 I would be building 
my chips by now... not doing this weird PhD Dance and being forced to 
kowtow to the spurious, precious and fashion-ridden gods of the academic 
publishing circuit... in that sense I regard the PhD as necessary 
collateral damage.


like Ben: SHOW ME THE MONEY! :-)

cheers
the not quite yet Dr Col


Ben Goertzel wrote:



On Wed, Dec 17, 2008 at 6:12 PM, YKY (Yan King Yin) 
generic.intellige...@gmail.com wrote:


 If...you want a non-research career, a Ph.D. is definitely not
for you.

I want to be either an entrepreneur or a researcher... it's hard to
decide.  What does AGI need most?  Further research, or a sound
business framework?  It seems that both are needed...



To venture an explicitly self-centered comment:
Since I already have a workable design for human-level AGI ...
what AGI most needs is someone to give the OpenCog or Novamente
projects a large sum of money 8-D

ben










Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-09 Thread Colin Hales

And 'Deep Blue' knows nothing about chess.

These machines are manipulating abstract symbols at the speed of light. 
The appearance of 'knowledge' of the natural world, in the sense that 
humans know things, must be absent and merely projected by us as 
observers, because we are really, really good at that kind of projection. 
The reason? Put any sufficient novelty in front of the machine and 
you get:


a) nonsense/fragility/breakdown.
or
b) the response which results from a human model of what 'novelty' looks 
like followed by the output of a human model of what to do with the novelty.


Surely the 'knowledge' in these creatures must ultimately be grounded in 
human-style cognition and perception ... which is not perfect ... but 
it's a novelty-handler light-years ahead of the models of novelty 
handling we give these critters. Think about it... In order that we 
bestow on a machine a perfect (human level will do until a better one 
turns up!) novelty handler, we have to have an abstract model of 
everything already. If we already know everything, then why build the 
machine?


The usefulness of these machines is their behaviour in the face of 
ignorance. We get to be clever by being serendipitously 'not wrong', by 
being allowed to be wrong in a non-fatal way. We get to choose to 'know' 
something very novel... e.g. invent a concept which may or may not have 
anything to do with reality... we can then test it to see if it makes 
sense as a model of reality (out there in the natural world). We then 
get to be 'not wrong', as opposed to being 'right'. Our intelligence 
operates completely backwards to the 'knowledge' models of the critters 
under discussion.


Or, put slightly more technically: the 'dynamics' of a human mind (that 
represents the gold standard of 'knowledge' and 'knowledge change') and 
the dynamics of a /model/ of the human mind can part company in 
significant ways ... In what sense has that departure been modelled or 
otherwise accounted for in the model?


There's an oxymoron lurking in these kinds of expectations of our 
machines... or is it just me projecting?


cheers,

Colin


Matt Mahoney wrote:
No, I don't believe that Dr. Eliza knows nothing about normal health, 
or that Cyc knows nothing about illness.
 
-- Matt Mahoney, [EMAIL PROTECTED]




*From:* Steve Richfield [EMAIL PROTECTED]
*To:* agi@v2.listbox.com
*Sent:* Tuesday, December 9, 2008 3:21:18 PM
*Subject:* Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

Matt,
 
It appears that you completely missed the point in my earlier 
post, that
 
Knowledge + Inverse Knowledge ~= Understanding (hopefully)
 
There are few things in the world that are known SO well that from 
direct knowledge thereof you can directly infer all potential 
modes of failure. Especially with things that have been engineered (or 
divinely created), or evolved (vs accidental creations like 
mountains), the failures tend to come from the FLAWS in the 
understanding of their creators.
 
Alternatively, it is possible to encode just the flaws, which tend to 
spread via cause and effect chains and easily step out of the apparent 
structure. A really good example is where a designer with a particular 
misunderstanding of something produces a design that is prone to 
certain sorts of failures in many subsystems. Of course, these 
failures are the next step in the cause and effect chain that started 
with his flawed education and have nothing at all to do with the 
interrelationships of the systems that are failing.
 
Continuing...
 
On 12/9/08, *Matt Mahoney* [EMAIL PROTECTED] wrote:


Steve, the difference between Cyc and Dr. Eliza is that Cyc has
much more knowledge. Cyc has millions of rules. The OpenCyc
download is hundreds of MB compressed. Several months ago you
posted the database file for Dr. Eliza. I recall it was a few
hundred rules and I think under 1 MB.

 
You have inadvertently made my point, that in areas of inverse 
knowledge OpenCyc with its hundreds of MBs of data still falls 
short of Dr. Eliza with 1% of that knowledge. Similarly, Dr. Eliza's 
structure would prohibit it from being able to answer even simple 
questions regardless of the size of its KB. This is because OpenCyc is 
generally concerned with how things work, rather than how they fail, 
while Dr. Eliza comes at this from the other end.


Both of these databases are far too small for AGI because neither
has solved the learning problem.

 
... Which was exactly my point when I referenced the quadrillion 
dollars you mentioned. If you want to be able to do interesting things 
for only ~$1M or so, no problem IF you stick to an appropriate corner 
of the knowledge (as Dr. Eliza does). However, if you come out of the 
corners, then be prepared to throw your $1Q at it.
 
Note here that I am NOT disputing your ~$1Q, but rather I am 

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Colin Hales

Trent Waddington wrote:

On Tue, Nov 18, 2008 at 4:07 PM, Colin Hales
[EMAIL PROTECTED] wrote:
  

I'd like to dispel all such delusion in this place so that neurally inspired
AGI gets discussed accurately, even if your intent is to explain
P-consciousness away... know exactly what you are explaining away and
exactly where it is.



Could you be any more arrogant?  Could you try for me, cause I think
you're almost there, and with a little training, you could get some
kind of award.

Trent

  

It's a gift. :-) However I think I might have max'ed out.

Some people would call it saying it the way it is. As I get 
older/grumpier I find I have less time for treading preciously around 
in the garden of the mental darlings to get at the weeds. I also like to 
be told bluntly, like you did. Time is short. You'll be free of my 
swathe for a while... work is piling up again.


cheers

colin





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales



Richard Loosemore wrote:

Colin Hales wrote:

Dear Richard,
I have an issue with the 'falsifiable predictions' being used as 
evidence of your theory.


The problem is that right or wrong...I have a working physical model 
for consciousness. Predictions 1-3 are something that my hardware can 
do easily. In fact that kind of experimentation is in my downstream 
implementation plan. These predictions have nothing whatsoever to do 
with your theory or mine or anyones. I'm not sure about prediction 4. 
It's not something I have thought about, so I'll leave it aside for 
now. In my case, in the second stage of testing of my chips, one of 
the things I want to do is literally 'Mind Meld', forming a bridge of 
4 sets of compared, independently generated qualia. Ultimately the 
chips may be implantable, which means a human could experience what 
they generate in the first person...but I digress


Your statement "This theory of consciousness can be used to make some 
falsifiable predictions" could be replaced by "ANY theory of 
consciousness can be used to make falsifiable predictions 1..4 as 
follows...". Which basically says they are not predictions that falsify 
anything at all. In which case the predictions cannot be claimed to 
support your theory. The problem is that the evidence of predictions 
1-4 acts merely as a correlate. It does not test any particular 
critical dependency (causality origins). The predictions are merely 
correlates of any theory of consciousness. They do not test the 
causal necessities. In any empirical science paper the evidence could 
not be held in support of the claim and it would be 
discounted as evidence of your mechanism. I could cite 10 different 
computationalist AGI knowledge metaphors in the sections preceding 
the 'predictions' and the result would be the same.


So... If I were a reviewer I'd be unable to accept the claim that your 
'predictions' actually said anything about the theory preceding them. 
This would seem to be the problematic issue of the paper. You might 
want to take a deeper look at this issue and try to isolate something 
unique to your particular solution - which has a real critical 
dependency in it. Then you'll have an evidence base of your own that 
people can use independently. In this way your proposal could be 
seen to be scientific in the dry empirical sense.


By way of example... a computer program is  not scientific evidence 
of anything. The computer materials, as configured by the program, 
actually causally necessitate the behaviour. The program is a 
correlate. A correlate has the formal evidentiary status of 
'hearsay'. This is the sense in which I invoke the term 'correlate' 
above.


BTW I have fallen foul of this problem myself...I had to look 
elsewhere for real critical dependency, like I suggested above. You 
never know, you might find one in there someplace! I found one after 
a lot of investigation. You might, too.


Regards,

Colin Hales


Okay, let me phrase it like this:  I specifically say (or rather I 
should have done... this is another thing I need to make more 
explicit!) that the predictions are about making alterations at 
EXACTLY the boundary of the analysis mechanisms.


So, when we test the predictions, we must first understand the 
mechanics of human (or AGI) cognition well enough to be able to locate 
the exact scope of the analysis mechanisms.


Then, we make the tests by changing things around just outside the 
reach of those mechanisms.


Then we ask subjects (human or AGI) what happened to their subjective 
experiences.  If the subjects are ourselves - which I strongly suggest 
must be the case - then we can ask ourselves what happened to our 
subjective experiences.


My prediction is that if the swaps are made at that boundary, then 
things will be as I state.  But if changes are made within the scope 
of the analysis mechanisms, then we will not see those changes in the 
qualia.


So the theory could be falsified if changes in the qualia are NOT 
consistent with the theory, when changes are made at different points 
in the system.  The theory is all about the analysis mechanisms being 
the culprit, so in that sense it is extremely falsifiable.


Now, correct me if I am wrong, but is there anywhere else in the 
literature where you have seen anyone make a prediction that the 
qualia will be changed by the alteration of a specific mechanism, but 
not by other, fairly similar alterations?





Richard Loosemore

At the risk of lecturing the already-informed --- Qualia generation has 
been highly localised into specific regions in *cranial* brain material 
already. Qualia are not in the periphery. Qualia are not in the spinal 
CNS. Qualia are not in the cranial periphery, e.g. eyes or lips. Qualia are 
generated in specific CNS cortex and basal regions. So anyone who thinks 
they have a mechanism consistent with physiological knowledge could 
conceive of alterations reconnecting periphery

[agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-17 Thread Colin Hales

Mike Tintner wrote:
Colin: Qualia generation has been highly localised into specific 
regions in *cranial *brain material already. Qualia are not in the 
periphery. Qualia are not in the spinal CNS, Qualia are not in the 
cranial periphery eg eyes or lips
 
Colin,
 
This is to a great extent nonsense. Which sensation/emotion - (qualia 
is a word strictly for philosophers not scientists, I suggest) - is 
not located in the body? When you are angry, you never frown or bite 
or tense your lips? The brain helps to generate the emotion - (and 
note helps). But emotions are bodily events - and *felt* bodily.
 
This whole discussion ignores the primary paradox about consciousness, 
(which is first and foremost sentience) :  *the brain doesn't feel a 
thing* - sentience/feeling is located in the body outside the brain. 
When a surgeon cuts your brain, you feel nothing. You feel and are 
conscious of your emotions in and with your whole body.
I am talking about the known, real, actual origins of *all* phenomenal 
fields. This has been anatomical/physiological fact for 150 years. You don't 
see with your eyes. You don't feel with your skin. Vision is in the 
occipital cortex. The eyes provide data. The skin provides data; the CNS 
somatosensory field delivers the experience of touch and projects it to 
the skin region. ALL perceptions, BAR NONE, including all emotions, 
imagination, everything - ALL of it is actually generated in the cranial 
CNS.  Perceptual fields are projected from the CNS to appear AS IF they 
originate in the periphery. The sensory measurements themselves convey 
no sensations at all. 

I could give you libraries of data. Ask all doctors. They specifically 
call NOCICEPTION the peripheral sensor and PAIN the CNS 
(basal... inferior colliculus or was it cingulate... can't remember 
exactly) percept. Pain in your back? NOPE. Pain is in the CNS and 
projected (badly) to the location of your back, like a periscope view. 
Pain in your gut? NOPE. You have nociceptors in the myenteric/submucosal 
plexuses that convey data to the CNS, which generates PAIN and projects 
it at the gut. Feel sad? Your laterally offset amygdalae create an 
omnidirectional percept centered on your medial cranium region. etc etc 
etc etc


YES... Brains don't have their own sensors or self-represent with a 
perceptual field. So what? That's got nothing whatever to do with the 
matter at hand. CUT cortex and you can kill off 'what it is like' 
percepts out there in the body (although in confusing ways). Touch 
appropriate exposed cortex with a non-invasive probe and you can create 
percepts apparently, but not actually, elsewhere in the body.


The entire neural correlates of consciousness (NCC) paradigm is 
dedicated to exploring CNS neurons for correlates of qualia. NOT 
peripheral neurons. Nobody anywhere else in the world thinks that 
sensation is generated in the periphery.


The *CNS* paints your world with qualia-paint in a projected picture 
constructed in the CNS using sensationless data from the periphery. 
Please internalise this brute fact. I didn't invent it or simply choose 
to believe it because it was convenient. I read the literature. It told 
me. It's there to be learned. Lots of people have been doing conclusive, 
real physiology for a very long time. Be empirically informed: Believe 
them. Or, if you are still convinced it's nonsense then tell them, not 
me.  They'd love to hear your evidence and you'll get a Nobel Prize for 
an amazing about-turn in medical knowledge. :-)


This has been known, apparently by everybody but computer 
scientists, for 150 years. Can I consider this a general broadcast once 
and for all? I don't ever want to have to pump this out again. Life is 
too short.


regards,

Colin Hales








Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales



Richard Loosemore wrote:

Colin Hales wrote:



Richard Loosemore wrote:

Colin Hales wrote:

Dear Richard,
I have an issue with the 'falsifiable predictions' being used as 
evidence of your theory.


The problem is that right or wrong...I have a working physical 
model for consciousness. Predictions 1-3 are something that my 
hardware can do easily. In fact that kind of experimentation is in 
my downstream implementation plan. These predictions have nothing 
whatsoever to do with your theory or mine or anyones. I'm not sure 
about prediction 4. It's not something I have thought about, so 
I'll leave it aside for now. In my case, in the second stage of 
testing of my chips, one of the things I want to do is literally 
'Mind Meld', forming a bridge of 4 sets of compared, independently 
generated qualia. Ultimately the chips may be implantable, which 
means a human could experience what they generate in the first 
person...but I digress


Your statement "This theory of consciousness can be used to make 
some falsifiable predictions" could be replaced by "ANY theory of 
consciousness can be used to make falsifiable predictions 1..4 as 
follows...". Which basically says they are not predictions that 
falsify anything at all. In which case the predictions cannot be 
claimed to support your theory. The problem is that the evidence of 
predictions 1-4 acts merely as a correlate. It does not test any 
particular critical dependency (causality origins). The predictions 
are merely correlates of any theory of consciousness. They do not 
test the causal necessities. In any empirical science paper the 
evidence could not be held in support of the claim and it would 
be discounted as evidence of your mechanism. I could cite 
10 different computationalist AGI knowledge metaphors in the 
sections preceding the 'predictions' and the result would be the same.


So... If I were a reviewer I'd be unable to accept the claim that 
your 'predictions' actually said anything about the theory 
preceding them. This would seem to be the problematic issue of the 
paper. You might want to take a deeper look at this issue and try 
to isolate something unique to your particular solution - which 
has a real critical dependency in it. Then you'll have an 
evidence base of your own that people can use independently. In 
this way your proposal could be seen to be scientific in the dry 
empirical sense.


By way of example... a computer program is  not scientific evidence 
of anything. The computer materials, as configured by the program, 
actually causally necessitate the behaviour. The program is a 
correlate. A correlate has the formal evidentiary status of 
'hearsay'. This is the sense in which I invoke the term 'correlate' 
above.


BTW I have fallen foul of this problem myself...I had to look 
elsewhere for real critical dependency, like I suggested above. You 
never know, you might find one in there someplace! I found one 
after a lot of investigation. You might, too.


Regards,

Colin Hales


Okay, let me phrase it like this:  I specifically say (or rather I 
should have done... this is another thing I need to make more 
explicit!) that the predictions are about making alterations at 
EXACTLY the boundary of the analysis mechanisms.


So, when we test the predictions, we must first understand the 
mechanics of human (or AGI) cognition well enough to be able to 
locate the exact scope of the analysis mechanisms.


Then, we make the tests by changing things around just outside the 
reach of those mechanisms.


Then we ask subjects (human or AGI) what happened to their 
subjective experiences.  If the subjects are ourselves - which I 
strongly suggest must be the case - then we can ask ourselves what 
happened to our subjective experiences.


My prediction is that if the swaps are made at that boundary, then 
things will be as I state.  But if changes are made within the scope 
of the analysis mechanisms, then we will not see those changes in 
the qualia.


So the theory could be falsified if changes in the qualia are NOT 
consistent with the theory, when changes are made at different 
points in the system.  The theory is all about the analysis 
mechanisms being the culprit, so in that sense it is extremely 
falsifiable.


Now, correct me if I am wrong, but is there anywhere else in the 
literature where you have seen anyone make a prediction that the 
qualia will be changed by the alteration of a specific mechanism, 
but not by other, fairly similar alterations?





Richard Loosemore

At the risk of lecturing the already-informed --- Qualia generation 
has already been highly localised into specific regions in *cranial* brain 
material. Qualia are not in the periphery. Qualia are not in 
the spinal CNS. Qualia are not in the cranial periphery, e.g. eyes or 
lips. Qualia are generated in specific CNS cortex and basal regions. 


You are assuming that my references to the *foreground* periphery 
correspond to the physical

[agi] test

2008-11-13 Thread Colin Hales






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Colin Hales

Dear Matt,
Try running yourself with empirical results instead of metabelief 
(belief about belief). You'll get someplace, i.e. you'll resolve the 
inconsistencies. When inconsistencies are *testably* absent, no matter 
how weird the answer, it will deliver maximally informed choices. Not 
facts. Facts will only ever appear differently after choices are made. 
This too is a fact...which I have chosen to make choices about. :-) If 
you fail to resolve your inconsistency then you are guaranteeing that 
your choices are minimally informed. Tricky business, science: an 
intrinsically dynamic process in which choice is the driver (epistemic 
state transition) and the facts (the epistemic state) are forever 
transitory, never certain. You can only make so-called facts certain by 
failing to choose. Then they lodge in your brain (and nowhere else) like 
dogma-crud between your teeth, and the rot sets in. The plus side - you 
get to be 100% right. Personally I'd rather get real AGI built and be 
testably wrong a million times along the way.

cheers,
colin hales


Matt Mahoney wrote:

--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote:

  

Matt Mahoney wrote:


If you don't define consciousness in terms of an objective test, then
you can say anything you want about it.
  

We don't entirely disagree about that. An objective test is absolutely
crucial. I believe where we disagree is that I expect there to be such a
test one day, while you claim there can never be.



It depends on the definition. The problem with the current definition (what 
most people think it means) is that it leads to logical inconsistencies. I 
believe I have a consciousness, a little person inside my head that experiences 
things and makes decisions. I also believe that my belief is false, that my 
brain would do exactly the same thing without this little person. I know these 
two views are inconsistent. I just accept that they are and leave it at that.

-- Matt Mahoney, [EMAIL PROTECTED]









Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales

Matt Mahoney wrote:

snip ... accepted because the theory makes predictions that can be tested. 
But there are absolutely no testable predictions that can be made from a theory of 
consciousness.

-- Matt Mahoney, [EMAIL PROTECTED]


  

This is simply wrong.

It is difficult but you can test for it objectively by demanding that an 
entity based on your 'theory of consciousness' deliver an authentic 
scientific act on the a-priori unknown using visual experience for 
scientific evidence. To the best _indirect_ evidence we have, that act 
is critically dependent on the existence of a visual phenomenal field 
within the tested entity. Visual P-consciousness and scientific evidence 
are literal identities in that circumstance. Degrade visual 
experience...scientific outcome is disrupted. You can use this to 
actually discover the physics of qualia as follows:


1) Concoct your theory of consciousness.
2) Build a scientist with it, with (amongst other necessities) visual 
phenomenal consciousness which you believe to be there because of your 
theory of consciousness. Only autonomous, embodied entities are valid, 
because the test involves actually interacting with an environment the way 
humans do.
3) Test it for delivery of an authentic act of science on the a-priori 
unknown: test for ignorance at the start, followed by the 
acquisition of the requisite knowledge, followed by the application of 
the knowledge to a completely novel problem.

4) FAIL = your physics is wrong or your design is bad.
   PASS = design and physics are good.

REPEAT THE ABOVE for all putative physics; END when you get 
success... voila... the physics you dreamt up is the right one or as good 
as the right one.
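
As a rough structural sketch only (not part of the protocol itself), the loop 
reads like this in Python. Every name here - candidate_theories, 
build_embodied_scientist, delivers_authentic_science - is a hypothetical 
placeholder for the very hard real work of specifying a physics, building an 
autonomous embodied entity from it, and administering the test above.

# Minimal sketch of the PASS/FAIL loop over candidate theories of consciousness.
# All names are hypothetical placeholders, not a real implementation.

def build_embodied_scientist(theory):
    """Stand-in for constructing an autonomous, embodied entity from 'theory'."""
    return {"theory": theory}          # placeholder entity

def delivers_authentic_science(entity):
    """Stand-in for the three-phase test: initial ignorance, acquisition of a
    'law of nature' from direct exposure, then application to a novel problem."""
    return False                       # placeholder outcome

candidate_theories = ["theory_A", "theory_B"]   # hypothetical putative physics

surviving = []
for theory in candidate_theories:
    entity = build_embodied_scientist(theory)   # step 2
    if delivers_authentic_science(entity):      # step 3
        surviving.append(theory)                # PASS: physics and design are good
    # FAIL: physics is wrong or the design is bad; move to the next candidate

print("Theories surviving the test:", surviving)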


If the entity delivers the 'law of nature' then it has to have all the 
essential aspects of a visual experience needed for a successful 
scientific act. You can argue about the 'experience' within the entity 
afterwards...on a properly informed basis of real knowledge. Until then 
you're just waffling about theories.


Such a test might involve reward through reverse-engineering chess. 
Initially chess ignorance is demonstrated... followed by repeated 
exposure to chess behaviour on a real board... followed by a demand to 
use chess behaviour in a completely different environment and in a different 
manner... say to operate a machine that has nothing to do with chess but 
is metaphorically labelled to signal that chess rules apply to some 
aspect of its behaviour. This proves that the laws underneath the 
external behaviour of the original chess pieces were internalised and 
abstracted... which contains all the essential ingredients of a 
scientific act on the unknown. You cannot do this without authentic 
connection to the distal external world of the chess pieces.
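
The three phases can be read as a simple test harness. The sketch below is 
only an illustration of that structure - score_on_task, expose and the two 
thresholds are hypothetical stand-ins, not a working evaluation of any real 
entity: prove ignorance first, expose the entity to the raw behaviour, then 
demand transfer of the abstracted rule to an unrelated task.

# Structural sketch of the three-phase transfer test; all callables are
# hypothetical stand-ins.

def score_on_task(entity, task):
    """Stand-in: return a performance score for 'entity' on 'task'."""
    return 0.0

def expose(entity, observations):
    """Stand-in: let the entity observe raw chess behaviour on a real board."""
    pass

CHANCE_LEVEL = 0.1      # toy threshold for 'no better than guessing'
COMPETENCE_LEVEL = 0.8  # toy threshold for 'rule has been abstracted'

def three_phase_test(entity, chess_observations, novel_task):
    # Phase 1: demonstrate initial ignorance of the hidden rule.
    if score_on_task(entity, novel_task) > CHANCE_LEVEL:
        return "invalid: prior knowledge detected"
    # Phase 2: repeated exposure to the behaviour itself (no rule is given).
    expose(entity, chess_observations)
    # Phase 3: demand use of the abstracted rule in an unrelated setting.
    if score_on_task(entity, novel_task) >= COMPETENCE_LEVEL:
        return "PASS"
    return "FAIL"

print(three_phase_test({}, [], "operate_machine"))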


You cannot train such an entity. The scientific act itself is the 
training. Neither testers nor tested can have any knowledge of the 'law 
of nature' or the environments to be encountered. A completely novel 
'game' could be substituted for chess, for example. Any entity dependent 
on any sort of training will fail. You can't train for scientific 
outcomes. You can only build the necessities of scientific behaviour and 
then let it loose.


You run this test on all putative theories of consciousness. If you 
can't build it you have no theory. If you build it and it fails, tough. 
If you build it and it passes, your theory is right.


"You can't test for consciousness" is a cultural catchphrase identical 
to "man cannot fly."
Just like the Wright Bros, we need to start to fly. Not pretend to fly. 
Or not fly and say we did.


Objective testing for consciousness is easy. Building the test and the 
entity...well that's not so easy but it is possible. A 'definition' 
of consciousness is irrelevant. Like every other circumstance in 
science...'laws' and physical phenomena that operate according to them 
are discovered, not defined. Humans did not wait for a definition of 
fire before cooking dinner with it. Why should consciousness be any 
different?


cheers
colin hales





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales



Matt Mahoney wrote:

--- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote:

  

It is difficult but you can test for it objectively by
demanding that an entity based on your 'theory of
consciousness' deliver an authentic scientific act on
the a-priori unknown using visual experience for scientific
evidence.



So a blind person is not conscious?

-- Matt Mahoney, [EMAIL PROTECTED]


  
A blind person cannot behave scientifically in the manner of the 
sighted. The blind person cannot be a scientist 'of that which is 
visually evidenced'. As an objective test specifically for visual 
P-consciousness, the blind person's  failure would prove the blind 
person has no visual P-consciousness. If a monkey passed the test then 
it would be proved visually P-conscious (as well as mighty smart!). A 
blind-sighted person would fail because they can't handle the radical 
novelty in the test. Again the test would prove they have no visual 
P-consciousness. A computer, if it passed, must have created inside 
itself all of the attributes of P-consciousness as utilised in vision 
applied to scientific evidence. You can argue about the details of any 
'experience' only when armed with the physics _after_ the test is 
passed, when you can discuss the true nature of the physics involved 
from an authoritative position. If the requisite physics is missing the 
test subject will fail.


That is the characteristic of  a useful test. Unambiguous outcomes 
critically dependent on the presence of a claimed phenomenon. You don't 
even have to know the physics details. External behaviour is decisive 
and anyone could administer the test, provided it was set up properly.


Note that experimental-scientists and applied scientists are literally 
scientific evidence of consciousness. They don't have to deliver 
anything except their normal science deliverables to complete the proof. 
They do nothing else but prove they are visually P-conscious for their 
entire lives.


cheers,
colin








Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Colin Hales



[EMAIL PROTECTED] wrote:

When people discuss the ethics of the treatment of artificial intelligent
agents, it's almost always with the presumption that the key issue is the
subjective level of suffering of the agent.  This isn't the only possible
consideration.

One other consideration is our stance relative to that agent.  Are we just
acting in a selfish way, using the agent as simply a means to achieve our
goals?  I'll just leave that idea open as there are traditions that see
value in de-emphasizing greed and personal acquisitiveness.

Another consideration is the inherent value of self-determination.  This
is above any suffering that might be caused by being a completely
controlled subject.  One of the problems of slavery was just that it
simply works better if you let people decide things for themselves. 
Similarly, just letting an artificial agent have autonomy for its own sake

may just be a more effective thing than having it simply be a controlled
subject.

So I don't even think the consciousness of an artificial intelligent
agent is completely necessary in considering the ethics of our stance
towards it.  We can consider our own emotional position and the inherent
value of independence of thinking.
andi


  
I'm inclined to agree - this will be an issue in the future... if you 
have a robot helper and someone comes by and beats it to death in 
front of your kids, who have some kind of attachment to it...a 
relationship... then  crime (i) may be said to be the psychological 
damage to the children. Crime (ii) is then the murder and whatever one 
knows of suffering inflicted on the robot helper. Ethicists are gonna 
have all manner of novelty to play with.

cheers
colin





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Colin Hales

Matt Mahoney wrote:

--- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote:

  

Do you agree that there is no test to distinguish a
conscious human from a philosophical zombie, thus no way to
establish whether zombies exist?

Disagree.




What test would you use?

  
The test will be published in the next couple of months in the Open AI 
journal.
It is an objective test for scientific behaviour. I call it the 'PCST', for 
P-Conscious Scientist Test.


You can't be a scientist without being visually P-conscious to 
experience your evidence.
You can't deny the test without declaring scientists devoid of 
consciousness, whilst demanding it be used for all scientific evidence in 
a verifiable way AND whilst investing in an entire science paradigm 
(Neural Correlates of Consciousness) dedicated to scientific exploration 
of P-consciousness. The logic's pretty good and it's easy to design an 
objective test demanding delivery of a 'law of nature'. The execution, 
however, is logistically difficult++. BUT... at least it's doable. A hard 
test is better than no test at all, which is what we currently have.

When it comes out I'll let you know.

RE ETHICS... I say this in the paper:

As was recognised by Gamez [35], one cannot help but notice that there 
is also a secondary ethical 'bootstrap' process. Once a single subject 
passes the PCST, for the first time ever in certain circumstances there 
will be a valid scientific reason obliging all scientists to consider 
the internal life of an artefact as potentially having some level of 
equivalence to that of a laboratory animal, possibly deserving of 
similar ethical treatment. Until that event occurs, however, all such 
discussions are best considered moot.


cheers
colin






Re: [agi] Machine Consciousness Workshop, Hong Kong, June 2009

2008-10-30 Thread Colin Hales

Hi,
I was wondering as to the formatwho does what, how...speaking etc 
etc.. what sort of airing do the contributors get for their material?

regards
colin


Ben Goertzel wrote:

Hi all,

I wanted to let you know that Gino Yu and I are co-organizing a 
Workshop on Machine

Consciousness, which will be held in Hong Kong in June 2009: see

http://novamente.net/machinecs/index.html

for details. 

It is colocated with a larger, interdisciplinary conference on 
consciousness research,

which has previously been announced:

http://www.consciousness.arizona.edu/

As an aside, I also note that the date for submitting papers to
AGI-09 has been extended, by popular demand, till November 12;
see

http://agi-09.org/

AGI-09 will welcome quality papers on any strong-AI
related topics.

thanks!
ben

--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]


A human being should be able to change a diaper, plan an invasion, 
butcher a hog, conn a ship, design a building, write a sonnet, balance 
accounts, build a wall, set a bone, comfort the dying, take orders, 
give orders, cooperate, act alone, solve equations, analyze a new 
problem, pitch manure, program a computer, cook a tasty meal, fight 
efficiently, die gallantly. Specialization is for insects.  -- Robert 
Heinlein












Re: [agi] META: A possible re-focusing of this list

2008-10-16 Thread Colin Hales

Ben Goertzel wrote:


Colin,

There's a difference between

1)
Discussing in detail how you're going to build a non-digital-computer 
based AGI


2)
Presenting general, hand-wavy theoretical ideas as to why 
digital-computer-based AGI can't work


I would be vastly more interested in 1 than 2 ... and I suspect many 
others on the

list feel similarly...

-- Ben G


RE: (1)
OK. I'll deposit (1) in nice, easily digestible slices as and when 
appropriate... if that can be slotted into your original (1) for 
discussion every now and then on the forum ...Alrighty then!


RE:(2)
Well I'm just a messenger from science and other logicians who are 
adding to an ever growing pile labelled 'cognition is not computation', 
which has had yet more stuff added this year. It seems to be invalid 
from so many angles it's hard to keep up with...But the 3 main existing 
papers I have already cited: I believe them. I also have 2 of my own in 
review. They are based on existing physics and empirical work...no 
handwavy anything. I didn't reach the position lightly, because it makes 
the problem about a million times harder...


OK. The message delivered. I can do no more than that. The bottom line:

If I am wrong and COMP is true, we get AGI.
If COMP is false and I am right, we get AGI.

Sounds good to me! Let's leave it there.  If minimal postings to the 
above (1) fit into your original (1) then that makes me feel that the 
forum is sidling up to a scientifically sound enthusiasm for AGI. 
Strength in diversity.


Think about it. One day a real AGI is going to read this email forum 
squabbling away - if those involved have the ideas that work, this 
discussion will be part of their personal history, a gestation of sorts, 
and I hope the AGI will see the work of caring parents on a well 
informed mission dedicated to their very genesis. That's how I'd want my 
mum(s) and dad(s) to be. :-)  I'd rather be in that lineage than not. 
Wouldn't you? Far more interesting.


Gotta go write the AGI-09 paper, amongst 46782318 other things... I 
won't be back without (1)-style deliverables.


cheers
Colin




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Colin Hales

Hi Trent,
You guys are forcing me to voice all sorts of things in odd ways.
It's a hoot...but I'm running out of hours!!!


Trent Waddington wrote:

On Wed, Oct 15, 2008 at 4:48 PM, Colin Hales
[EMAIL PROTECTED] wrote:
  

you have to be exposed directly to all the actual novelty in the natural world, 
not the novelty
recognised by a model of what novelty is. Consciousness (P-consciousness and
specifically and importantly visual P-consciousness) is the mechanism by
which novelty in the actual DISTANT natural world is made apparent to the
agent. Symbolic grounding in Qualia NOT I/O. You do not get that information
through your retina data. You get it from occipital visual P-consciousness.
The Turing machine abstracts the mechanism of access to the distal natural
world and hence has to be informed by a model, which you don't have...



Wow.  I know I don't know what P-consciousness is.. and clearly I
must not know what Qualia is.. The capital must change the meaning
from the normal definition.

But basically I think you have to come out right now and say what your
philosophy of reality is.
  
Let me say right away that if you don't know what qualia or 
P-consciousness are then you're missing 150 years of biology and things 
are gonna look kind of odd. I suggest a rapid wiki-googling  exercise 
(also in a recent post I delivered a whole pile of definitions and 
references.)


I don't have a philosophy of reality. I exist, at a practical level, 
within the confines of the standard particle model, 4 forces, 4 
transmitters and the particle zoo. I don't need anything else to make a 
cogent case for my model that stacks up empirically the normal way.


I do have a need to alter science, however, to become a dual aspect 
epistemology about a monism, entirely consistent with all existing 
science. Only the options of scientists change and the structure of 
knowledge changes. In that case, the objective view I use has a very 
simple extension which accounts for subjectivity with physical, causal 
teeth.





If your complaint is that a robot senses are not as rich or as complex
as a human senses and therefore an AI hooked up to robot senses cannot
possibly have the same qualia as humans then can you *stipulate for
the sake of argument* that it may be possible to supply human senses
to an AI so that it does have the same qualia?  Or are you saying that
there's some mystical magical thing about humans that makes it
impossible for an AI to have the same qualia.

And if you're not happy with the idea of an AI having the same qualia
as humans, then surely you're willing to agree that a human that was
born wired into solely robot senses (suppose its for humanitarian
reasons, rather than just nazi doctors having fun if you like) would
have fundamentally different qualia.  You believe this human would not
produce an original scientific act on the a-priori unknown -
whatever that means - or does the fact that this evil human-robot
hybrid is somehow half human give it a personal blessing from God?

Trent

  
I'm not complaining about anything! I am dealing with brute reality. 
You are simply unaware of the job that AGI faces... and are not aware of 
the 150 years of physiological evidence that the periphery (peripheral 
nervous system and periphery of the central nervous system like retina) 
is not 'experienced'. None of it. I have already been through this in my 
original posting, I think. IO signals (human and robot) _are not 
perceived_, generate no sensations, i.e. are Qualia-Null. Experience 
happens in the cranial central nervous system, and is merely projected 
as-if it comes from the periphery. It feels like you have vision 
centered on your eyes, yes? Well surprise... all an illusion. Vision 
happens in the back of your head and is projected to appear as if your 
eyes generated it. You need to get a hold of some basic physiology.


So the surprise for everyone who's been operating under the assumption 
that symbol grounding is simply I/O wiring: WRONG. We are symbolically 
grounded in qualia: something that happens in the cranial CNS. Not even 
the spinal CNS generates any sensations. Pain in your back, anyone? WRONG. The 
pain comes from cortex, NOT your spine. It's projected, and mostly badly.


As you must know from my postings...qualia are absolutely mandatory for 
handling novelty for a whole pile of complex reasons. And robots will 
need them too. But they will not have them from simply wiring up I/O 
signals and manipulating abstractions. You need the equivalent of the 
complete CRANIAL central nervous system electrodynamics to achieve that, 
not a model of it.


So I demand that robots have qualia. For good physical, sensible, 
verifiable reasons... Whether they are exactly like humans... is another 
question. A human with artificial but equivalent peripheral sensory 
transduction would have qualia because the CNS generates them, not 
because they are delivered by the I/O. And that human would be able to do

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Colin Hales

Oops I forgot...

Ben Goertzel wrote:


About self: you don't like Metzinger's neurophilosophy I presume?  
(Being No One is a masterwork in my view)




I got the book out and started to read it. But I found it incredibly 
dense and practically  useless. It told me nothing. I came out the other 
end with no clarity whatever. Just a whole pile of self-referential 
jargon I couldn't build. No information gained. Maybe in time it'll 
become more meaningful and useful. It changed nothing. I expected a 
whole lot more.



It was kind of like Finnegan's Wake. You can read it or have a gin and 
tonic and hit yourself over the head with it. The result is pretty much 
the same.


:-)
colin





Re: [agi] Advocacy Is no Excuse for Consciousness

2008-10-15 Thread Colin Hales


John LaMuth wrote:

Colin

Consc. by nature is subjective ...

Can never prove this in a machine -- or other human beings for that 
matter
Yes you can. This is a fallacy. You can prove it in humans and you can 
prove it in a machine.
You simply demand it do science. Not simple - but possible. I have 
written this up.

Should be published this year? Not sure.


We are underutilizing about 4 Billion + human Cons's on the earth today

What goal -- besides vanity -- is there to simulate this mechanically ??

We need to simulate communication, if anything at all ...

John L

The practical mission for me is not human-level AGI.
I aim merely for A-fauna that is adaptive like biology is adaptive.
I target markets where natural fauna struggles or where normal solutions 
are damaging.
The rationale for creating human level AGI will be decided  by forces 
yet to be determined. Relentless, tireless home help for the aged might 
be one case.


cheers
colin





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales



Ben Goertzel wrote:


Hi,

My main impression of the AGI-08 forum was one of over-dominance
by singularity-obsessed  and COMP thinking, which must have
freaked me out a bit.


This again is completely off-base ;-)


I also found my feeling about AGI-08 was slightly coloured by first-hand 
experience from an attendee who came away with the impression I put. 
I'll try and bolt down my paranoia a tad...


COMP, yes ... Singularity, no.  The Singularity was not a theme of 
AGI-08 and the vast majority of participant researchers are not 
seriously into Singularitarianism, futurism, and so forth.
Good, although I'll be vigorously adding non-COMP approaches to the mix, 
and trusting that is OK




There was a post-conference workshop on the Future of AGI, which about 
half of the conference attendees attended, at which the Singularity 
and related issues were discussed, among other issues.  For instance, 
the opening talk at the workshop was given by Natasha Vita-More, who 
so far as I know is not a Singularitarian per se, though an excellent 
futurist.  And one of the more vocal folks in the discussions in the 
workshop was Selmer Bringsjord, who believes COMP is false and has a 
different theory of intelligence than you or me, tied into his 
interest in Christian philosophy.



The only reason for not connecting consciousness with AGI is a
situation where one can see no mechanism or role for it.



Seeing a mechanism or role for consciousness requires a specific 
theory of consciousness that not everybody holds --- and as you surely 
know, not even everyone in the machine consciousness community holds.


Personally I view the first-person, second-person and third-person 
views as different perspectives on the universe, so I think it's a 
category error to talk about mechanisms of consciousness ... though 
one can talk about mechanisms that are correlated with particularly 
intense consciousness, for example.


See my presentation from the Nokia workshop on Machine Consciousness 
in August ... where I was the only admitted panpsychist ;-)


http://goertzel.org/NokiaMachineConsciousness.ppt
ouch 10MB safely squirreled away under G for Goertzel, thank goodness for 
the uni bandwidth... :-)


I think I rest my case. You cannot see a physical mechanism or a role. I 
can.


Inventing/adopting a whole mental rationale that avoids the problem 
based on an assumption about a 'received view' is not something I can 
do... I have a real physical process I can point to objectively, and a 
perspective from which it makes perfect sense that it be responsible for 
a first person perspective of the kind we receive... and I 
can't/won't talk it away just because 'Ben said so', even when the 
'category error' stick is wielded. That old rubric excuse for an 
argument doesn't scare me a bit ... :-)  Consciousness is a problem for 
a reason, and that reason is mostly us thinking our 'categories' are right.


Interestingly, my model, if you stand back and squint a bit, can be 
interpreted as having an 'as-if pan-psychism was real' appearance. Only 
an appearance tho. It's not real.


Anyway... let's just let my story unfold, eh? It's a big one, so it'll 
take a while. Fun to be had!


Thanks for the 'Hidden Pattern' link... I shall digest it.

cheers
colin






Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales



Ben Goertzel wrote:


OK, but you have not yet explained what your theory of consciousness 
is, nor what the physical mechanism nor role for consciousness that 
you propose is ... you've just alluded obscurely to these things.  So 
it's hard to react except with raised eyebrows and skepticism!!


ben g

Of course... that's only to be expected at this stage. It can't be helped.
The physical mechanism is easy: quantum electrodynamics.
The tricky bit is the perspective from which it can be held accountable 
for subjective qualities.


ahem!... this is my theory... ahem!...
:-)
A. Elk...





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Hi Terren,
They are not 'communities' in the sense that you mean. They are labs in 
various institutions that work on M/C-consciousness (or pretend to be 
doing cog sci, whilst actually doing it :-). All I can do is point you 
at the various references in the paper and get you to keep an eye on 
them. Not terribly satisfactory, but...well that's the way it is. It is 
why I was quite interested in the AGI forum...it's a potential nexus for 
the whole lot of us.

regards
Colin

Terren Suydam wrote:


Hi Colin,

Are there other forums or email lists associated with some of the 
other AI communities you mention?  I've looked briefly but in vain ... 
would appreciate any helpful pointers.


Thanks,
Terren

--- On *Tue, 10/14/08, Colin Hales /[EMAIL PROTECTED]/* 
wrote:


From: Colin Hales [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration
To: agi@v2.listbox.com
Date: Tuesday, October 14, 2008, 12:43 AM

Hi Matt,
... The Gamez paper situation is now...erm...resolved. You are
right: the paper doesn't argue that solving consciousness is
necessary for AGI. What has happened recently is a subtle shift  -
those involved simply fail to make claims about the consciousness
or otherwise of the machines! This does not entail that they are
not actually working on it. They are just being cautious...Also,
you correctly observe that solving AGI on a purely computational
basis is not prohibited by the workers involved in the GAMEZ
paper.. indeed most of their work assumes it!... I don't have a
problem with this...However...'attributing' consciousness to it
based on its behavior is probably about as unscientific as it
gets. That outcome betrays no understanding whatever of
consciousness, its mechanism or its role... and merely assumes
COMP is true and creates an agreement based on ignorance. This is
fatally flawed non-science.

[BTW: We need an objective test (I have one - I am waiting for it
to get published...). I'm going to try and see where it's at in
that process. If my test is acceptable then I predict all COMP
entrants will fail, but I'll accept whatever happens... - and
external behaviour is decisive. Bear with me a while till I get it
sorted.]

I am still getting to know the folks [EMAIL PROTECTED] And the group may
be diverse, as you say ... but if they are all COMP, then that
diversity is like a group dedicated to an unresolved argument over
the colour of a fish's bicycle. If we can attract the attention of
the likes of those in the GAMEZ paper... and others such as Hynna
and Boahen at Stanford, who have an unusual hardware neural
architecture...(Hynna, K. M. and Boahen, K. 'Thermodynamically
equivalent silicon models of voltage-dependent ion channels',
/Neural Computation/ vol. 19, no. 2, 2007. 327-350.) ...and others
... then things will be diverse and authoritative. In particular,
those who have recently essentially squashed the computational
theories of mind from a neuroscience perspective - the 'integrative
neuroscientists':

Poznanski, R. R., Biophysical neural networks : foundations of
integrative neuroscience, Mary Ann Liebert, Larchmont, NY, 2001,
pp. viii, 503 p.

Pomerantz, J. R., Topics in integrative neuroscience : from cells
to cognition, Cambridge University Press, Cambridge, UK ; New
York, 2008, pp. xix, 427 p.

Gordon, E., Ed. (2000). Integrative neuroscience : bringing
together biological, psychological and clinical models of the
human brain. Amsterdam, Harwood Academic.

The only working, known model of general intelligence is the
human. If we base AGI on anything that fails to account
scientifically and completely for /all/ aspects of human
cognition, including consciousness, then we open ourselves to
critical inferiority... and the rest of science will simply find
the group an irrelevant cultish backwater. Strategically the group
would do well to make choices that attract the attention of the
'machine consciousness' crowd - they are directly linked to
neuroscience via cog sci. The crowd that runs with JETAI (journal
of theoretical and experimental artificial intelligence) is also
another relevant one. It'd be nice if those people also saw the
AGI journal as a viable repository for their output. I for one
will try and help in that regard. Time will tell I suppose.

cheers,
colin hales


Matt Mahoney wrote:

--- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote:

  

In the wider world of science it is the current state of play that the


theoretical basis for real AGI is an open and multi-disciplinary
question.  A forum that purports to be invested in achievement of real
AGI as a target, one would expect a multidisciplinary
approach on many fronts, all competing

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales
' is an explanatory pariah, the needed fundamentals 
are missing from the toolkit...I don't expect any sense to come from 
anywhere. An entity with a P-conscious (occipital/visual scene) 
projected depiction of the external world automatically places a self 
(the projector) inside it and 'self' becomes the same as everything 
else: something about which knowledge is accrued, from which behaviour 
may emerge. There are heaps of papers on the self. I read them, but they 
tell me nothing I can build.


D Free Will. An interest of mine. I noted some reference that 
suggested a neuroscientific attempt to explain this (or perhaps 
explain it away). Know any more about this?


Free Will and Free Won't (my favourite!) are high level aspects which I 
don't have a clear story on just yet. I am focussed on the lower levels 
entirely. When that is consolidated I will have something cogent to say. 
I'd rather study it later empirically with the chips I want to build.  
The motivation for my AGI to do anything at all remains problematic...I 
have my ideas but it's early days...FW is an important idea, but I can't 
explicitly 'build it', so it's not an early design issue. As with most 
other aspects of cognition I suspect that FW is a high-level (organism 
level) emergent property which has its ultimate basis in quantum 
mechanical randomness/indeterminacy. My chip architecture will 
incorporate the entire causal chain, thus inheriting the same 
indeterminacy, so at this stage there's nothing much more for me to add. 
One day.


- - - - - - - -- - - - -
Not terribly satisfying. I know.  There's no quick route through the 
information.


The only guide I can give is that there is a 'trump card' approach that 
clears nomothetic dross like a hot blade through butter: /Base your AGI 
on an artificial scientist model/. The clarity that emerges is 
stunning, and it's all empirically testable.


regards,

Colin Hales





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Ben Goertzel wrote:


Again, when you say that these neuroscience theories have squashed 
the computational theories of mind, it is not clear to me what you 
mean by the computational theories of mind.   Do you have a more 
precise definition of what you mean?


I suppose it's a bit ambiguous. There's computer modelling of mind, and 
then there's the implementation of an actual mind using actual 
computation, then there's the implementation of a brain using 
computation, in which a mind may be said to be operating. All sorts of 
misdirection.


I mean it in the sense given in:
Pylyshyn, Z. W., Computation and cognition : toward a foundation for 
cognitive science, MIT Press, Cambridge, Mass., 1984, pp. xxiii, 292 p.
That is, that a mind is a result of a brain-as-computation. Where 
computation is meant in the sense of abstract symbol manipulation 
according to rules. 'Rules' means any logic or calculi you'd care to 
cite, including any formally specified probabilistic/stochastic language. 
This is exactly what I mean by COMP.


Another slant on it:
Poznanski, R. R., Biophysical neural networks : foundations of 
integrative neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. 
viii, 503 p.
The literature has highlighted the conceptual ineptness of the computer 
metaphor of the brain.  Computational neuroscience, which serves as a 
beacon for the transfer of concepts regarding brain function to 
artificial nets for the design of neural computers, ignores the 
developmental theory of neuronal group selection and therefore seriously 
overestimates the computational nature of neuroscience. It attempts to 
explain brain function in terms of the abstract computational and 
information processing functions thought to be carried out in the brain 
{citations omitted}.


I don't know whether this answers your question, I hope so... it 
means that leaping to 'brain = computation' in the digital computer 
sense is not what is going on. It also means that a computer model of 
the full structure is also out. You have to do what the brain does, not 
run a model of it. The brain is an electrodynamic entity, so your AGI has 
to be an electrodynamic entity manipulating natural electromagnetic 
symbols in a similar fashion. The 'symbols' are aggregates in the cohorts 
mentioned by Poznanski. The electrodynamics itself IS the 'computation', 
which occurs naturally in the trajectory through the multidimensional 
vector space of the matter as a whole. Some symbols are experienced 
(qualia) and some are not.


cheers
colin
.








Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Ben Goertzel wrote:


Sure, I know Pylyshyn's work ... and I know very few contemporary AI 
scientists who adopt a strong symbol-manipulation-focused view of 
cognition like Fodor, Pylyshyn and so forth.  That perspective is 
rather dated by now...


But when you say


Where computation is meant in the sense of abstract symbol 
manipulation according to rules. 'Rules' means any logic or calculii 
you'd care to cite, including any formally specified 
probablistic/stochastic language. This is exactly what I mean by COMP.



then things get very very confusing to me.  Do you include a formal 
neural net model as computation?  How about a cellular automaton 
simulation of QED?  Why is this cellular automaton model not abstract 
symbol manipulation?


If you interpret COMP to mean A human-level intelligence can be 
implemented on a digital computer or as A human level intelligence 
can be implemented on a digital computer connected to a robot body or 
even as A human level intelligence, conscious in the same sense that 
humans are, can be implemented on a digital computer connected to a 
robot body ... then I'll understand you.
We're really at cross-purposes here, aren't we?...this is a Colin/Ben 
calibration process :-) OK.


By COMP I mean any abstract symbol manipulation at all in any context. 
The important thing is that in COMP there's a model of some kind of 
learning mechanism being run by a language of some kind, or a model of a 
modelling process implemented programmatically. In any event the 
manipulations that are occurring are manipulations of abstract 
representations of numbers according to the language and the model being 
implemented by the computer language.




But when you start defining COMP in a fuzzy, nebulous way, dismissing 
some dynamical systems as too symbolic for your taste (say, 
probabilistic logic) and accepting others as subsymbolic enough 
(say, CA simulations of QED) ... then I start to feel very confused...


I agree that Fodor and Pylyshyn's approaches, for instance, were too 
focused on abstract reasoning and not enough on experiential learning 
and grounding.  But I don't think this makes their approaches **more 
computational** than a CA model of QED ... it just makes them **bad 
computational models of cognition** ...




Maybe a rather stark non-COMP example would help: what I would term the non-COMP 
approach is /there is no 'model' of cognition being run by anything./ 
The electrodynamics of the matter itself /is the cognition/. Literally. 
No imposed abstract model tells it how to learn. No imposed model is 
populated with any imposed knowledge. No human involvement in any of it 
except construction. Electrodynamic representational objects being 
manipulated by real natural electrodynamics... is all there is. The 
'computation', if you can call it that, is literally Maxwell's equations 
(embedded on a QM substrate, of course) doing their natural dynamics 
dance in real matter, not an abstraction of Maxwell's equations being 
run on a computer.


In my AGI I have no 'model' of anything. I have the actual thing. A bad 
model of cognition, to me, is identical to a poor understanding of what 
the brain is actually doing. With a good understanding of brain function 
you then actually run the real thing, not a model of it. The trajectory 
of a model of the electrodynamics cannot be the trajectory of the real 
electrodynamics, for the fields inherit behavioural/dynamical properties 
from the deep structure of matter, which are thrown away by the model of 
the electrodynamics. The real electrodynamics is surrounded by the 
matter it is situated in, and operates in accordance with it.


Remember: A scientific model of a natural process cuts a layer across 
the matter hierarchy and throws away all the underlying structure. I am 
putting the entire natural hierarchy back into the picture by using real 
electrodynamics implemented in the fashion of a real brain, not a model 
of the electrodynamics of a real brain or any other abstraction of 
apparent brain operation.


Does that do it? It's very very different to a COMP approach.

cheers
colin







Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales

Ben Goertzel wrote:


About self: you don't like Metzinger's neurophilosophy I presume?  
(Being No One is a masterwork in my view)


I agree that integrative biology is the way to go for understanding 
brain function ... and I was talking to Walter Freeman about his work 
in the early 90's when we both showed up at the Society for Chaos 
Theory in Psychology conferences ... however, I am wholly unconvinced 
that this work implies anything about the noncomputationality of 
consciousness.


You mention QED, and I note that the only functions computable 
according to QED are the Turing-computable ones.  I wonder how you 
square this with your view of QED-based brain dynamics as 
noncomputable?   Or do you follow the Penrose path and posit 
as-yet-undiscovered, mysteriously-noncomputable quantum-gravity 
phenomena in brain dynamics (which, I note, requires not only radical 
unknown neuroscience but also radical unknown physics and mathematics)


-- Ben G
The comment is of the kind "when did you stop kicking your dog?". You 
assume that dog kicking was an issue and any answer in some way 
verifies/validates my involvement in dog-kicking! No way! :-)


Turing computable or Xthing-computable... is irrelevant. I am not 
'following' anyone except the example of the natural world. There are 
no inventions of mysterious anything... this is in-your-face good old 
natural matter doing what it does. I have spent an entire career being 
beaten to a pulp by the natural world of electromagnetism. This is 
really really simple.


Nature managed to make a human capable of arguing about Turing 
computability and Gödelian incompleteness without any 'functions' or 
abstractions or any 'model' of anything! I am following the same natural 
path of actual biology and real electrodynamics of real matter. I have a 
brilliant working prototype: /the human brain/. I am implementing the 
minimal subset of what it actually does, not a model of what it does. I 
have the skills to make an inorganic version of it. I don't need the ATP 
cycle, the full endocrine or inflammatory response and/or other 
immunochemistry systems or any of the genetic overheads. All the 
self-configuration and adaptation/tuning is easy to replicate in 
hardware. When you delete all those overheads what's left is really 
simple. Hooking it to I/O is easy - been doing it for decades...


Of course - like a good little engineer I am scoping out electromagnetic 
effects using computational models. Computational chemistry, in fact. 
Appalling stuff! However, as a result my understanding of the 
electromagnetics of brain material will improve. That will result in 
appropriately engineered real electromagnetics running in my AGI, not a 
model of electromagnetics running in my AGI. Quantum mechanics will be 
doing its bit without me lifting a finger - because I am using natural 
matter as it is used in brain material.


Brilliant tho it was, and as beautiful a piece of science as it was, 
Hodgkin and Huxley threw out the fields in 1952ish and there they 
languish, ignored until now. Putting back in the 50% that was thrown 
away 50 years ago can hardly be considered 'radical' neuroscience. 
Ignoring it for any more than 50 years when you can show it operating 
there for everyone to see...now that'd be radically stupid in anyone's book.


There's also a clinical side: the electrodynamics/field structure can be 
used in explanation of developmental chemistry/cellular transport cues 
and it also sorts out the actual origins of EEG, both of which are 
currently open problems.


It's a little brain-bending to get your head around.. but it'll sink in.

cheers
colin





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Colin Hales


Ben Goertzel wrote:


I still don't really get it, sorry... ;-(

Are you saying

A) that a conscious, human-level AI **can** be implemented on an 
ordinary Turing machine, hooked up to a robot body


or

B) A is false


B)

Yeah that about does it.

Specifically: It will never produce an original scientific act on the 
a-priori unknown. It is the unknown bit which is important. You can't 
deliver a 'model' of the unknown that delivers all of the aspects of the 
unknown without knowing it all already! Catch-22... you have to be 
exposed /directly/ to all the actual novelty in the natural world, not 
the novelty recognised by a model of what novelty is. Consciousness 
(P-consciousness and specifically and importantly visual 
P-consciousness) is the mechanism by which novelty in the actual DISTANT 
natural world is made apparent to the agent. Symbolic grounding in 
Qualia NOT I/O. You do not get that information through your retina 
data. You get it from occipital visual P-consciousness.  The Turing 
machine abstracts the mechanism of access to the distal natural world 
and hence has to be informed by a model, which you don't have...


Because scientific behaviour is just a (formal, very testable) 
refinement of everyday intelligent behaviour, everyday intelligent 
behaviour of the kind humans have goes down the drain with it.


With the TM precluded from producing a scientist, it is precluded as a 
mechanism for AGI.


I like scientific behaviour. A great clarifier.

cheers
colin










Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Colin Hales



Matt Mahoney wrote:

--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:

  

The only reason for not connecting consciousness with AGI is a
situation where one can see no mechanism or role for it. That inability
is no proof there is noneand I have both to the point of having a
patent in progress.  Yes, I know it's only my claim at the moment...but
it's behind why I believe the links to machine consciousness are not
optional, despite the cultural state/history of the field at the moment
being less than perfect and folks cautiously sidling around
consciousness like it was bomb under their budgets.



Colin, I read your paper in publication that you were so kind to send me. For 
those who have not seen it, it is a well written, comprehensive survey of 
research in machine consciousness. It does not take a position on whether 
consciousness plays an essential role in AGI. (I understand that taking a 
controversial position probably would have resulted in rejection).

With regard to COMP, I assume you define COMP to be the position that 
everything the mind does is, in principle, computable. If I understand your 
position, consciousness does play a critical role in AGI. However, we don't 
know what it is. Therefore we need to find out by using scientific research, 
then duplicate that process (if possible) in a machine before it can achieve 
AGI.
  



Here and in your paper, you have not defined what consciousness is. Most 
philosophical arguments can be traced to disagreements about the meanings of 
words. In your paper you say that consciousness means having phenomenal states, 
but you don't define what a phenomenal state is.

Without a definition, we default to what we think it means. Everybody knows 
what consciousness is. It is something that all living humans have. We associate 
consciousness with properties of humans, such as having a name, a face, emotions, the 
ability to communicate in natural language, the ability to learn, to behave in ways we 
expect people to behave, to look like a human. Thus, we ascribe partial degrees of 
consciousness (with appropriate ethical treatment) to animals, video game characters, 
human shaped robots, and teddy bears.

To argue your position, you need to nail down a definition of consciousness. 
But that is hard. For example, you could define consciousness as having goals. 
So if a dog wants to go for a walk, it is conscious. But then a thermostat 
wants to keep the room at a set temperature, and a linear regression algorithm 
wants to find the best straight line fit to a set of points.

You could define consciousness as the ability to experience pleasure and pain. But then 
you need a test to distinguish experience from mere reaction, or else I could argue that 
simple reinforcement learners like http://www.mattmahoney.net/autobliss.txt experience 
pain. It boils down to how you define experience.

You could define consciousness as being aware of your own thoughts. But again, you must 
define aware. We distinguish conscious or episodic memories, such as when I 
recalled yesterday something that happened last month, and unconscious or procedural 
memories, such as the learned skills in coordinating my leg muscles while walking. We can 
do studies to show that conscious memories are stored in the hippocampus and higher 
layers of the cerebral cortex, and unconscious memories are stored in the cerebellum. But 
that is not really helpful for AGI design. The important distinction is that we remember 
remembering conscious memories but not unconscious. Reading from conscious memory also 
writes into it. But I can simulate this process in simple programs, for example, a 
database that logs transactions.
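
A minimal sketch of the kind of simple program meant here (a toy, with 
hypothetical names throughout): an append-only log in which every recall of an 
episode is itself stored, so the program later 'remembers remembering'.

# Toy illustration of 'reading from conscious memory also writes into it':
# each recall of an episode is itself logged as a new episode.
from datetime import datetime

episodic_log = []          # append-only log standing in for 'conscious' memory

def store(event):
    episodic_log.append({"time": datetime.now(), "event": event})

def recall(keyword):
    hits = [e for e in episodic_log if keyword in e["event"]]
    store(f"recalled {len(hits)} record(s) about '{keyword}'")  # the read is logged too
    return hits

store("something that happened last month")
recall("last month")       # later recalls will also find this act of recalling
print(episodic_log)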

So if you can nail down a definition of consciousness without pointing to a 
human, I am willing to listen. Otherwise we default to the possibility of 
building AGI on COMP principles and then ascribing consciousness to it since it 
behaves just like a human.

-- Matt Mahoney, [EMAIL PROTECTED]
  


I am way past merely defining anything. I know what phenomenal fields 
are constructed of: Virtual Nambu-Goldstone Bosons. Brain material is 
best regarded as a radically anisotropic quasi-fluid undergoing massive 
phase changes on multiple time scales. The problem is one of 
thermodynamics, not abstract computation. Duplicating the boson 
generation inorganically and applying that process to regulatory 
mechanisms of learning is exactly what I plan for my AGI chips. The 
virtual particles were named Qualeons by some weird guy here that I was 
talking to one day. I forgot his name. I better find that out! I digress. :-)


It would take 3 PhD dissertations to cover everything from quantum 
mechanics to psychology. You have to be a polymath. And to see how they 
explain consciousness you need to internalise 'dual aspect science', 
from which perspective it's all obvious. I have to change the whole of 
science from single to dual aspect to make it understood

Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales



Jim Bromer wrote:

On Mon, Oct 13, 2008 at 2:34 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
  

Galileo, Bruno of Nolan, etc.
OTOH, Paracelsus was quite personable.  So was, reputedly, Pythagoras.  (No
good evidence on Pythagoras, though.  Only stories from supporters.)  (Also,
consider that the Pythagoreans, possibly including Pythagoras, had a guy put
to death for discovering that sqrt(2) was irrational.  [As with most things
from this date, this is more legend than fact, but is quite probable.])

As a generality, with many exceptions, strongly opinionated persons are not
easy to get along with unless you agree with their opinions.  It appears to
be irrelevant whether their opinions are right, wrong, or undecidable.




I just want to comment that my original post was not about
agreeableness. It was about the necessity of being capable of
criticizing your own theories (and criticisms).  I just do not believe
that Newton, Galileo, Pythagoras and the rest of them were incapable
of examining their own theories from critical vantage points even
though they may have not accepted the criticisms others derived from
different vantage points.  As I said, there is no automatic equality
for criticisms.  Just because a theory is unproven it does not mean
that all criticisms have to be accepted as equally valid.

But when you see someone, theorist or critic, who almost never
demonstrates any genuine capacity for reexamining his own theories or
criticisms from any critical vantage point whatsoever, then it's a
strong negative indicator.

Jim Bromer


  
The process of formulation of scientific theories has been characterised 
as a dynamical system nicely by Nicholas Rescher.


Rescher, N., Process philosophy : a survey of basic issues, University 
of Pittsburgh Press, Pittsburgh, 2000, p. 144.
Rescher, N., Nature and understanding : the metaphysics and method of 
science, Clarendon Press, Oxford, 2000, pp. ix, 186.


In that approach you can see critical argument operating as a 
brain process - competing brain electrodynamics that stabilises on the 
temporary 'winner', whose position may be toppled at any moment by the 
emergence of a more powerful criticism which destabilises the current 
equilibrium... and so on. The 'argument' may involve the provision of 
empirical evidence... indeed that is the norm for most sciences.
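
As a purely illustrative toy (my sketch, not Rescher's formalism), the dynamic looks something like this: rival positions carry support scores, each new piece of evidence or criticism nudges the scores, and the current 'winner' can be toppled at any step.

# Toy dynamics only: illustrative parameters, not a model of any real debate.

def run_debate(evidence_stream, positions):
    support = {p: 1.0 for p in positions}
    for favoured, strength in evidence_stream:
        for p in positions:
            # evidence strengthens the position it favours and weakens the rest
            support[p] += strength if p == favoured else -0.3 * strength
        winner = max(support, key=support.get)
        print(f"evidence for {favoured}: current winner = {winner}")
    return support

run_debate(
    evidence_stream=[("theory A", 1.0), ("theory A", 0.5),
                     ("theory B", 2.5), ("theory B", 0.4)],
    positions=["theory A", "theory B"],
)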


In order that a discipline be seen to be real science, then, one 
would expect to see such processes happening in a dialog between a 
diversity of views competing for ownership of scientific evidence 
through support for whatever theoretical framework seems apt. As a 
recent entrant here, seeing the dialog and the issues as they unfold, 
I would have some difficulty classifying what is going on as 
'scientific', in the sense that there is no debate calibrated against any 
overt fundamental scientific theoretical framework(s), nor any defined 
testing protocols.


In the wider world of science it is the current state of play that the 
theoretical basis for real AGI is an open and multi-disciplinary 
question. Of a forum that purports to be invested in the achievement of real 
AGI as a target, one would expect a multidisciplinary 
approach on many fronts, all competing scientifically for access to real 
AGI.


I am not seeing that here. In having a completely different approach to 
AGI, I hope I can contribute to the diversity of ideas and bring the 
discourse closer to that of a solid scientific discipline, with formal 
testing metrics and so forth. I hope that I can attract the attention of 
the neuroscience and physics world to this area.


Of course whether I'm an intransigent grumpy theory-zealot of the 
Newtonian kind... well... just let the ideas speak for themselves... 
:-)  The main thing is the diversity of ideas and criticism .. which 
seems a little impoverished at the moment. Without the diversity of 
approaches actively seen to compete, an AGI forum will end up 
marginalised as a club of some kind: We do (what we assume will be) AGI 
by fiddling about with XYZ. This is scientific suicide.


Here's a start: the latest survey in the key area. Like it or not, AGI 
is directly in the running for solving the 'hard problem' and machine 
consciousness is where the game is at.


Gamez, D. 'Progress in machine consciousness', Consciousness and 
Cognition vol. 17, no. 3, 2008. 887-910.


I'll do my best to diversify the discourse... I'd like to see this 
community originate real AGI and be seen as real science. To do that 
this forum should attract cognitive scientists, psychologists, 
physicists, engineers, neuroscientists. Over time, maybe we can get that 
sort of diversity happening.  I have enthusiasm for such things..


cheers
colin hales







Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales
 to finding out 
about the reality of human cognition and thereby gain access to AGI.


Theories and inventions are two sides of the same science process. The 
former looks at natural outcomes and seeks rules that describe initial 
conditions. The latter seeks initial conditions that result in a certain 
outcome. Being precious about failure or critique is not part of the 
process. At all stages choices are made and results must be critically 
assessed.


As a nuance to the role of 'science' in 'computer science'... I would 
say that formally, computer 'science' is not scientific, for the reason 
that software documents are not laws of nature: the resultant machine 
behaviour originates from causality built into the physics of the 
hardware substrate. At best, computer programs are correlates, not the 
sought-after critical dependency which we know gets us as close to 
natural causality as we can get. The computer substrate, configured as 
per the program, causally necessitates the resultant behaviour, not the 
program, which, at runtime, is completely absent from the circumstance, 
even in 'interpretive' runtime environments.


So the role of scientific behaviour in the production of an AGI seems to 
be in need of quite a deal of review - especially if it is to be taken 
seriously in a multidisciplinary approach to AGI where all the folks are 
used to criticism and expect it (demand it!), and are OK with being 
wrong. The flip side of this in technological development is that when 
something has a mounting pile of evidence of being based on a false 
premise, continuing to fail to make choices that direct your 
efforts in other directions - now that is something to be embarrassed 
about :-)


I understand how tied up people can get with a particular standpoint - 
if you have defended it for 15 years and suddenly the whole basis of it 
is gone - for obvious empirical reasons that you cannot deny - strange 
behaviours result. For example... my supervisor once had a paper 
rejected thus:


"This paper does not show that (SUCH and SUCH) is the case in an 
auditory cortex context and should be rejected until (SUCH and SUCH) is 
shown to be the case" - when the argument was based on empirical 
grounds and was quite sound! People can be so fashion-ridden and flighty 
and precious about their 'darlings'... the editor ended up apologising 
for the idiot reviewer... who was too stupid to roll with the punches and 
was marginalised as a result.


In my case I am absolutely determined to be a one-trick scientist, like 
the guy who discovered the neutrino. I adopt one particular approach to 
AGI and *go for it - scientifically*. I either get to be the guy who 
actually solved it or I get to be an author who writes about failing to 
make my idea work. Either way is OK with me - at least I will have shown 
one way NOT to reach AGI, and have been seen to respond to evidence when 
I encounter it, not cover my eyes and ears and go blah blah blah blah. 
:-) The thing to be proud of is /authentic science/. Not always easy to 
do... but a noble goal. In my case I have a crucial experiment planned, 
which will sort out the basis for my new chip design. If that experiment 
fails - I will NOT continue with the original idea. I will have bet on a 
dud idea. But that's OK.


Oh boy, I'm blathering on again. So many words! Sorry! Better 
go... I think you get my drift.


cheers
colin hales







Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales

Hi Matt,
... The Gamez paper situation is now... erm... resolved. You are right: 
the paper doesn't argue that solving consciousness is necessary for AGI. 
What has happened recently is a subtle shift - those involved simply 
fail to make claims about the consciousness or otherwise of the 
machines! This does not entail that they are not actually working on it. 
They are just being cautious... Also, you correctly observe that solving 
AGI on a purely computational basis is not prohibited by the workers 
involved in the Gamez paper... indeed most of their work assumes it!... I 
don't have a problem with this... However... 'attributing' consciousness 
to it based on its behaviour is probably about as unscientific as it 
gets. That outcome betrays no understanding whatever of consciousness, 
its mechanism or its role, and merely assumes COMP is true and creates 
an agreement based on ignorance. This is fatally flawed non-science.


[BTW: We need an objective test (I have one - I am waiting for it to get 
published...). I'm going to try and see where it's at in that process. 
If my test is acceptable then I predict all COMP entrants will fail, but 
I'll accept whatever happens... - and external behaviour is decisive. 
Bear with me a while till I get it sorted.]


I am still getting to know the folks [EMAIL PROTECTED] And the group may be 
diverse, as you say ... but if they are all COMP, then that diversity is 
like a group dedicated to an unresolved argument over the colour of a 
fish's bicycle. If we can attract the attention of the likes of those in 
the GAMEZ paper... and others such as Hynna and Boahen at Stanford, who 
have an unusual hardware neural architecture...(Hynna, K. M. and Boahen, 
K. 'Thermodynamically equivalent silicon models of voltage-dependent ion 
channels', /Neural Computation/ vol. 19, no. 2, 2007. 327-350.)...and 
others ... then things will be diverse and authoritative. In particular, 
those who have recently essentially squashed the computational theories 
of mind from a neuroscience perspective- the 'integrative neuroscientists':


Poznanski, R. R., Biophysical neural networks : foundations of 
integrative neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. 
viii, 503 p.


Pomerantz, J. R., Topics in integrative neuroscience : from cells to 
cognition, Cambridge University Press, Cambridge, UK ; New York, 2008, 
pp. xix, 427 p.


Gordon, E., Ed. (2000). Integrative neuroscience : bringing together 
biological, psychological and clinical models of the human brain. 
Amsterdam, Harwood Academic.


The only working, known model of general intelligence is the human. If 
we base AGI on anything that fails to account scientifically and 
completely for /all/ aspects of human cognition, including 
consciousness, then we open ourselves to critical inferiority... and the 
rest of science will simply find the group an irrelevant cultish 
backwater. Strategically the group would do well to make choices that 
attract the attention of the 'machine consciousness' crowd - they are 
directly linked to neuroscience via cog sci. The crowd that runs with 
JETAI (the Journal of Experimental & Theoretical Artificial Intelligence) 
is another relevant one. It'd be nice if those people also saw the 
AGI journal as a viable repository for their output. I for one will try 
and help in that regard. Time will tell, I suppose.


cheers,
colin hales


Matt Mahoney wrote:

--- On Mon, 10/13/08, Colin Hales [EMAIL PROTECTED] wrote:

  

In the wider world of science it is the current state of play that the


theoretical basis for real AGI is an open and multi-disciplinary
question. Of a forum that purports to be invested in the achievement of real
AGI as a target, one would expect a multidisciplinary
approach on many fronts, all competing scientifically for access to
real AGI. 


I think this group is pretty diverse. No two people here can agree on how to 
build AGI.

  

Gamez, D. 'Progress in machine consciousness', Consciousness and


Cognition vol. 17, no. 3, 2008. 887-910.

$31.50 from Science Direct. I could not find a free version. I don't understand 
why an author would not at least post their published papers on their personal 
website. It greatly increases the chance that their paper is cited. I 
understand some publications require you to give up your copyright including 
your right to post your own paper. I refuse to publish with them.

(I don't know the copyright policy for Science Direct, but they are really milking the 
publish or perish mentality of academia. Apparently you pay to publish with 
them, and then they sell your paper).

In any case, I understand you have a pending paper on machine consciousness. 
Perhaps you could make it available. I don't believe that consciousness is 
relevant to intelligence, but that the appearance of consciousness is. Perhaps 
you can refute my position.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Colin Hales

Ben Goertzel wrote:


Colin wrote:

The only working, known model of general intelligence is the
human. If we base AGI on anything that fails to account
scientifically and completely for /all/ aspects of human
cognition, including consciousness, then we open ourselves to
critical inferiority... and the rest of science will simply find
the group an irrelevant cultish backwater. Strategically the group
would do well to make choices that attract the attention of the
'machine consciousness' crowd - they are directly linked to
neuroscience via cog sci.


Actually, I very strongly disagree with the above.

While I am an advocate of machine consciousness research, and will be 
co-organizing a machine consciousness workshop in Hong Kong in June 
2009, I do **not** agree that focusing on machine consciousness would 
be likely to help AGI to get better accepted in the general scientific 
community.


Rather, I think that consciousness research is currently considered at 
least as eccentric as AGI research, by the scientific mainstream ... 
and is considered far MORE eccentric than AGI research by the AI 
research mainstream, e.g. the AAAI.


So, discussing issues of machine consciousness may be interesting and 
very worthwhile for AGI in some scientific and conceptual respects... but I 
really really don't think that, at the present time, more closely 
allying AGI with machine consciousness would do anything but cause 
trouble for AGI's overall scientific reputation.


Frankly I think that machine consciousness has at least as high a 
chance of being considered an irrelevant cultish backwater as AGI 
... though I don't think that either field deserves that fate.


Comparing the two fields, I note that AGI has a larger and more active 
series of conferences than machine consciousness, and is also ... 
pathetic as it may be ... better-funded overall ;-p   

Regarding the connection to neuroscience and cog sci: obviously, AGI 
does not need machine consciousness as an intermediary to connect to 
those fields, it is already closely connected.  As one among many 
examples, Stan Franklin's LIDA architecture, a leading AGI approach, 
was originated in collaboration with Bernard Baars, a leading 
cognitive psychologist (and consciousness theorist, as it happens).  
And we had a session on AGI and Neuroscience at AGI-08, chaired by 
neuroscientist Randal Koene.


I laid out my own thoughts on consciousness in some detail in The 
Hidden Pattern ... I'm not trying to diss consciousness research at 
all ... just pointing out that the posited reason for tying it in with 
AGI seems not to be correct...



-- Ben G
My main impression of the AGI-08 forum was one of over-dominance by 
singularity-obsessed and COMP thinking, which must have freaked me out 
a bit. The IEEE Spectrum articles on the 'singularity rapture' did 
nought to improve my outlook... Thanks for bringing the Stan Franklin 
and Bernard Baars/Global Workspace etc. and neuroscience links to my 
attention. I am quite familiar with them and it's a relief to see they 
connect with the AGI fray. Hopefully the penetration of these 
disciplines, and their science, will grow.


In respect of our general consciousness-in-AGI disagreement: Excellent! 
That disagreement is a sign of diversity of views. Bring it on!


The only reason for not connecting consciousness with AGI is a situation 
where one can see no mechanism or role for it. That inability is no 
proof there is none... and I have both, to the point of having a patent 
in progress. Yes, I know it's only my claim at the moment... but it's 
behind why I believe the links to machine consciousness are not 
optional, despite the cultural state/history of the field at the moment 
being less than perfect and folks cautiously sidling around 
consciousness like it was a bomb under their budgets.


So...You can count on me for vigorous defense of my position from 
quantum physics upwards to psychology, including support for machine 
consciousness as being on the critical path to AGI.  Hopefully in June 
'09? ;-)


I tried to locate a local copy of 'The Hidden Pattern'... no luck. Being 
in poverty-stricken student mode at the moment... I have to survive on 
library/online resources, which are pretty impressive here at 
Unimelb... but despite this the libraries around here don't have 
it... two other titles in the state library... but not that one... Oh well. 
Maybe send me a copy with my wizard hat? :-P


cheers,
colin hales






OFFLIST [agi] Readings on evaluation of AGI systems

2008-10-07 Thread Colin Hales

Hi Ben,
A good bunch of papers.

(1) Hales, C. 'An empirical framework for objective testing for 
P-consciousness in an artificial agent', The Open Artificial 
Intelligence Journal vol.? , 2008.

Apparently it has been accepted but I'll believe it when I see it.

It's highly relevant to the forum you mentioned. I was particularly 
interested in the Wray and Lebiere work... my paper (1) would hold that 
the problem in their statement "Taskability is difficult to measure 
because there is no absolute notion of taskability -- a particular 
quantitative measure for one domain might represent the best one could 
achieve, while in another, it might be a baseline" is solved. An 
incidental byproduct of the execution of the test is that all the other 
metrics in their paper are delivered to some extent. Computationalist AI 
subjects will fail the (1) test. Humans won't. A real AGI will pass. 
Testing has been a big issue for me and has taken quite a while to sort 
out. Peter Voss's AI will fail it. As will everything based on NUMENTA 
products... but they can try! The test can speak for itself. Objective 
measurement of outward agent behaviour is decisive. All contenders have 
the same demands made of them... the only requirement is that verifiably 
autonomous, embodied agents need apply.


I don't know if this is of interest to anyone, but I thought I'd mention it.

regards,
Colin

Ben Goertzel wrote:


Hi all,

In preparation for an upcoming (invitation-only, not-organized-by-me) 
workshop on Evaluation and Metrics for Human-Level AI systems, I 
concatenated a number of papers on the evaluation of AGI systems into 
a single PDF file (in which the readings are listed alphabetically in 
order of file name).


In case anyone else finds this interesting, you can download the 
single PDF file from


http://goertzel.org/AGI_Evaluation.pdf

It's 196 pages of text  I don't condone all the ideas in all the 
papers, nor necessarily consider all the papers incredibly fascinating 
... but it's a decent sampling of historical thinking in the area by a 
certain subclass of AGI-ish academics...


ben










Re: [agi] COMP = false

2008-10-06 Thread Colin Hales
Excellent. I want one! Maybe they should be on sale at the next 
conference...there's a marketing edge for ya.


If I have to be as wrong as Vladimir says I'll need the right clothes.

:-)
cheers
colin


Ben Goertzel wrote:


 And you
can't escape flaws in your reasoning by wearing a lab coat.


Maybe not a lab coat... but how about my trusty wizard's hat???  ;-)

http://i34.tinypic.com/14lmqg0.jpg

 











Re: [agi] COMP = false

2008-10-05 Thread Colin Hales
! This is supposed to be fun!


cheers
Colin Hales

Ben Goertzel wrote:


The argument seems wrong to me intuitively, but I'm hard-put to argue 
against it because the terms are so unclearly defined ... for instance 
I don't really know what you mean by a visual scene ...


I can understand that to create a form of this argument worthy of 
being carefully debated, would be a lot more work than writing this 
summary email you've given.


So, I agree with your judgment not to try to extensively debate the 
argument in its current sketchily presented form.


If you do choose to present it carefully at some point, I encourage 
you to begin by carefully defining all the terms involved ... 
otherwise it's really not possible to counter-argue in a useful way ...


thx
ben g

On Sat, Oct 4, 2008 at 12:31 AM, Colin Hales 
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 
wrote:


Hi Mike,
I can give the highly abridged flow of the argument:

1) It refutes COMP, where COMP = Turing machine-style abstract
symbol manipulation. In particular the 'digital computer' as we
know it.
2) The refutation happens in one highly specific circumstance. In
being false in that circumstance it is false as a general claim.
3) The circumstances:  If COMP is true then it should be able to
implement an artificial scientist with the following faculties:
  (a) scientific behaviour (goal-delivery of a 'law of nature', an
abstraction BEHIND the appearances of the distal natural world,
not merely the report of what is there),
  (b) scientific observation based on the visual scene,
  (c) scientific behaviour in an encounter with radical novelty.
(This is what humans do)

The argument's empirical knowledge is:
1) The visual scene is visual phenomenal consciousness. A highly
specified occipital lobe deliverable.
2) In the context of a scientific act, scientific evidence is
'contents of phenomenal consciousness'. You can't do science
without it. In the context of this scientific act, visual
P-consciousness and scientific evidence are identities.
P-consciousness is necessary but on its own is not sufficient.
Extra behaviours are needed, but these are a secondary
consideration here.

NOTE: Do not confuse scientific observation  with the
scientific measurement, which is a collection of causality
located in the distal external natural world. (Scientific
measurement is not the same thing as scientific evidence, in this
context). The necessary feature of a visual scene is that it
operate whilst faithfully inheriting the actual causality of the
distal natural world. You cannot acquire a law of nature without
this basic need being met.

3) Basic physics says that it is impossible for a brain to create
a visual scene using only the inputs acquired by the peripheral
stimulus received at the retina. This is due to fundamentals of
quantum degeneracy. Basically there are an infinite number of
distal external worlds that can deliver the exact same photon
impact. The transduction that occurs in the retinal rods/cones is
entirely a result of protein isomerisation. All information about
distal origins is irretrievably gone. An impacting photon could
have come across the room or across the galaxy. There is no
information about origins in the transduced data in the retina.

That established, you are then faced with a paradox:

(i) (3) says a visual scene is impossible.
(ii) Yet the brain makes one.
(iii) To make the scene some kind of access to distal spatial
relations must be acquired as input data in addition to that from
the retina.
(iv) There are only 2 places that can come from...
  (a) via matter (which we already have - retinal impact at
the boundary that is the agent periphery)
  (b) via space (at the boundary of the matter of the brain
with space, the biggest boundary by far).
So, the conclusion is that the brain MUST acquire the necessary
data via the spatial boundary route. You don't have to know how.
You just have no other choice. There is no third party in there to
add the necessary data and the distal world is unknown. There is
literally nowhere else for the data to come from. Matter and Space
exhaust the list of options. (There is always magical intervention
... but I leave that to the space cadets.)

That's probably the main novelty for the reader to encounter.
But we are not done yet.

Next empirical fact:
(v) When you create a Turing-COMP substrate the interface with
space is completely destroyed and replaced with the randomised
machinations of the matter of the computer manipulating a model of
the distal world. All actual relationships with the real distal
external world are destroyed. In that circumstance the COMP
substrate is implementing the science of an encounter

Re: [agi] COMP = false

2008-10-05 Thread Colin Hales

Hi Vladimir,
I did not say the physics was unknown. I said that it must exist. The 
physics is already known, empirically and theoretically. It's just not 
recognised in situ and by the appropriate people. It's an implication of 
the quantum non-locality underpinning electrodynamics. Extensions of the 
physics model to include the necessary effects are not part of the 
discussion and change nothing. This does not alter the argument, which 
is empirical. Please accept and critique it on this basis. I am planning 
an experiment as a post-doc to validate the basic principle as it 
applies in a neural context. It's under development now. It involves 
electronics and lasers and all the usual experimental dross.


BTW I don't do non-science. Otherwise I'd just be able to sit back and 
declare my world view complete and authoritative, regardless of the 
evidence, wouldn't I? That is so not me. I am an engineer: if I can't 
build it then I know I don't understand it. Nothing is sacred. At no 
point ever will I entertain any fictional/untestable/magical solutions, 
like assuming an unproven conjecture is true. Nor will I idolise the 
'received view' as having all the answers and force the natural world to 
fit my prejudices in respect of what 'explanation' entails. Especially 
when major mysteries persist in the face of all explanatory attempts. 
That's the worst non-science you can have... so I'm rather more 
radically empirical and dry, evidence-based but realistic in 
expectations of our skills as explorers of the natural world... than it 
might appear. In being this way I hope to be part of the solution, not 
part of the problem.


COMP being false makes the AGI goal much harder... but much, much more 
interesting!


That's a little intro to colin hales for you.

cheers
colin hales
(all done now!)




Vladimir Nesov wrote:

Basically, you are saying that there is some unknown physics mojo
going on. The mystery of mind looks as mysterious as mystery of
physics, therefore it requires mystery of physics and can derive
further mysteriousness from it, becoming inherently mysterious. It's
bad, bad non-science.

  





Re: [agi] COMP = false

2008-10-05 Thread Colin Hales

OK. Last one!
Please replace 2) with:

2. Science says that the information from the retina is insufficient
to construct a visual scene.

Whether or not that 'construct' arises from computation is a matter of 
semantics. I would say that it could be considered computation - natural 
computation by electrodynamic manipulation of natural symbols. Not abstractions 
of the kind we manipulate in the COMP circumstance. That is why I use the term 
COMP...

It's rather funny: you could redefine computation to include natural 
computation (through the natural causality that is electrodynamics as it 
happens in brain material). Then you could claim computationalism to be true. 
But you'd still behave the same: you'd be unable to get AGI from a Turing 
machine. So you'd flush all traditional computers and make new technology. 
Computationalism would then be true but 100% useless as a design decision 
mechanism. Frankly I'd rather make AGI that works than be right according to a 
definition! The lesson is that there's no practical use in being right 
according to a definition! What you need to be able to do is make successful 
choices.


OK. Enough. A very enjoyable but sticky thread...I gotta work!

cheers all for now.

regards

Colin


Abram Demski wrote:

Colin,

I believe you did not reply to my points? Based on your definition of
computationalism, it appears that my criticism of your argument does
apply after all. To restate:

Your argument appears to assume computationalism. Here is a numbered
restatement:

1. We have a visual experience of the world.
2. Science says that the information from the retina is insufficient
to compute one.
3. Therefore, we must get more information.
4. The only possible sources are material and spatial.
5. Material is already known to be insufficient, therefore we must
also get spatial info.

Computationalism is assumed to get from #2 to #3. If we do not assume
computationalism, then the argument would look more like this:

1. We have a visual experience of the world.
2. Science says that the information from the retina is insufficient
to compute one.
3. Therefore, our visual experience is not computed.

This is obviously unsatisfying because it doesn't say where the visual
scene comes from; answers range from prescience to quantum
hypercomputation, but that does not seem important to the current
issue.

--Abram



  





Re: [agi] COMP = false

2008-10-04 Thread Colin Hales

Hi Will,
It's not an easy thing to fully internalise the implications of quantum 
degeneracy. I find physicists and chemists have no trouble accepting it, 
but in the disciplines above that various levels of mental brick walls 
are in place. Unfortunately physicists and chemists aren't usually asked 
to create vision!... I inhabit an extreme multidisciplinary zone. This 
kind of mental resistance comes with the territory. All I can say is 
'resistance is futile, you will be assimilated' ... eventually. :-) It's 
part of my job to enact the necessary advocacy. In respect of your 
comments I can offer the following:


You are exactly right: humans don't encounter the world directly (naive 
realism). Nor are we entirely operating from a cartoon visual 
fantasy (naive solipsism). You are also exactly right in that vision is 
not 'perfect'. It has more than just a level of indirectness in 
representation; it can malfunction and be fooled - just as you say. In 
the benchmark behaviour, scientific behaviour, we know scientists have 
to enact procedures (all based around the behaviour called 
'objectivity') which minimise the impact of these aspects of our 
scientific observation system.


However, this has nothing to say about the need for an extra information 
source, necessary because there is not enough information in the signals to 
do the job. This is what you cannot see. It took me a long while to 
discard the tendency to project my mental capacity into the job the 
brain has when it encounters a retinal data stream. In vision processing 
using computing we know the structure of the distal natural world. We 
imagine the photon/CCD camera chip measurements to be the same as that 
of the retina. It looks like a simple reconstruction job.


But it is not like that at all. It is impossible to tell, from the 
signals in their natural state in the brain, whether they are about 
vision or sound or smell. They all look the same. So I did not 
completely reveal the extent of the retinal impact/visual scene 
degeneracy in my post. The degeneracy operates on multiple levels. 
Signal encoding into standardised action potentials is another level.
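
As a toy illustration of that degeneracy (my own sketch, not a derivation): a bare pinhole projection maps 3-D points onto 2-D 'retinal' positions, and arbitrarily many distal points land on exactly the same measurement.

# Illustrative only: many distinct distal sources yield identical proximal data.

def project(point, focal_length=1.0):
    """Pinhole projection of a 3-D point (x, y, z) onto the image plane."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

near = (1.0, 2.0, 5.0)
far = (10.0, 20.0, 50.0)        # ten times farther away along the same ray
print(project(near))             # (0.2, 0.4)
print(project(far))              # (0.2, 0.4) -- identical "retinal" data
# The measurement alone cannot say whether the source was across the room
# or across the galaxy.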


Maybe I can just paint a mental picture of the job the brain has to do. 
Imagine this:


You have no phenomenal consciousness at all. Your internal life is that of a 
dreamless sleep.

Except ... for a new perceptual mode called Wision.
Looming in front of you embedded in a roughly hemispherical blackness is 
a gigantic array of numbers.

The numbers change.

Now:
a) make a visual scene out of it representing the world outside: convert 
Wision into Vision.
b) do this without any information other than the numbers in front of 
you and without assuming you have any a-priori knowledge of the outside 
world.


That is the job the brain has. Resist the attempt to project your own 
knowledge into the circumstance. You will find the attempt futile.


Regards,

Colin








William Pearson wrote:

Hi Colin,

I'm not entirely sure that computers can implement consciousness. But
I don't find your arguments sway me one way or the other. A brief
reply follows.

2008/10/4 Colin Hales [EMAIL PROTECTED]:
  

Next empirical fact:
(v) When  you create a turing-COMP substrate the interface with space is
completely destroyed and replaced with the randomised machinations of the
matter of the computer manipulating a model of the distal world. All actual
relationships with the real distal external world are destroyed. In that
circumstance the COMP substrate is implementing the science of an encounter
with a model, not an encounter with the actual distal natural world.

No amount of computation can make up for that loss, because you are in a
circumstance of an intrinsically unknown distal natural world, (the novelty
of an act of scientific observation).
.



But humans don't encounter the world directly, else optical illusions
wouldn't exist, we would know exactly what was going on.

Take this site for example. http://www.michaelbach.de/ot/

It is impossible by physics to do vision perfectly without extra
information, but we do not do vision by any means perfectly, so I see
no need to posit an extra information source.

  Will


  






Re: [agi] Testing, and a question....

2008-10-03 Thread Colin Hales

Dear AGI folk,
I am testing my registration on the system,, saying an inaugural 'hi' 
and seeking guidance in respect of potential submissions for a 
presentation spot at the next AGI conference.


It is time for me to become more visible in AGI after 5 years of 
research and reprogramming my brain into the academic side of things... 
My plans as a post-doc are to develop a novel chip technology. It will 
form the basis for what I have called 'A-Fauna'. I call it A-Fauna 
because it will act like biological organisms and take their place 
alongside natural fauna in a chosen ecological niche. Like tending a 
field as a benign 'artificial weed-killer'...they know and prefer their 
weeds...you get the idea. They are AGI robots that learn (are coached) 
to operate in a specific role and then are 'intellectually nobbled' 
(equivalent to biology), so their ability to handle novelty is 
specifically and especially curtailed. They will also be a whole bunch 
cheaper in that form...They are then deployed into that specific role 
and will be happy little campers. These creatures are different to 
typical mainstream AI fare because they cannot be taught how to learn. 
They are like us: they learn how to learn. As a result they can handle 
novelty better...a long story...Initially the A-Fauna is very small but 
potentially it could get to human level. The first part of the 
development is the initial proof of specific physics, which requires a 
key experiment. I can't wait to do this! The success of the experiment 
then leads to development and miniaturisation and eventual application 
into a prototype 'critter', which will then have to be proven to have 
P-consciousness (using the test in 3 below)... anyway... that's the rough 
'me' of it.


I am in NICTA  www.nicta.com.au
Victoria Research Lab in the Life-Sciences theme.
Department of Electrical/Electronic Eng, University of Melbourne

So... the AGI-09 basic topics to choose from are:

1) Empirical refutation of computationalism
2) Another thought experiment refutation of computationalism. The 
Totally Blind Zombie Homunculus Room

3) An objective test for Phenomenal consciousness.
4) A novel backpropagation mechanism in an excitable cell 
membrane/syncytium context.


1) and 2) are interesting because the implication is that if anyone 
doing AGI lifts their finger over a keyboard thinking they can be 
directly involved in programming anything to do with the eventual 
knowledge of the creature...they have already failed. I don't know 
whether the community has internalised this yet. BTW that makes 4 ways 
that computationalism has been shot. How dead does it have to get? :-) I 
am less interested in these than the others.


3) Is a special test which can be used to empirically test for 
P-consciousness in an embedded, embodied artificial agent. I need this 
test framework for my future AGI developments... one day I need to be 
able to point at my AGI robot and claim it is having experiences of a 
certain type and to be believed. AGI needs a test like this to get 
scientific credibility. "So you claim it's conscious? Prove it!" 
This is problematic but I am reasonably sure I have worked out a way. 
So it needs some attention (a paper is coming out sometime soon I hope. 
They told me it was accepted, anyway...). The test is 
double-blind/clinical style with 'wild-type' control and 'test 
subject'...BTW the computationalist contender (1/2 above) is quite 
validly tested but would operate as a sham/placebo control... because it 
is known they will always fail. Although anyone serious enough can offer 
it as a full contender. Funnily enough it also proves humans are 
conscious! In case you were wondering...humans are the wild-type control.


4) Is my main PhD topic. I submit this time next year. (I'd prefer to do 
this because I can get funded to go to the conference!). It reveals a 
neural adaptation mechanism that is completely missing from present 
neural models. It's based on molecular electrodynamics of the neural 
membrane. The effect then operates in the syncytium as a regulatory 
(synchrony) bias operating in quadrature with (and roughly independent 
of) the normal synaptic adaptation.


I prefer 4) because of the funding but also because I'd much rather 
reveal it to the AGI community - because that is my future... but I will 
defer to the preferences of the group. I can always cover 1, 2, 3 informally 
when I am there if there's any interest... so... which of these (any) is 
of interest?... I'm not sure of the kinds of things you folk want to hear 
about. All comments are appreciated.


regards to all,

Colin Hales




Re: [agi] Testing, and a question....

2008-10-03 Thread Colin Hales

Hi Ben,

Excellent. #4 it is. I'll proceed on that basis. I can't get funding 
unless I present...and the timing is perfect for my PhD, so I'll be 
working towards that.


Hong Kong sounds good. I assume it's the Toward a Science of 
Consciousness 2009... I'll chase it up. I didn't realise it was in Hong 
Kong. The last one I went to was Tucson, 2006. It was a hoot. I wonder 
if Dave Chalmers will do the 'end of consciousness' party and 
blues-slam. :-) We'll see. Consider me 'applied for' as a workshop. I'll 
do the applications ASAP.


regards,

Colin Hales


Ben Goertzel wrote:


In terms of a paper submission to AGI-09, I think that your option 4 
would be of the most interest to the audience there.   By and large 
it's not a philosophy of AI crowd so much as a how to build an AI 
crowd...


I am also organizing a workshop on machine consciousness that will be 
in Hong Kong in June 09, following the major consciousness conference 
there ... for that workshop, your option 3 would be of great interest...


ben

On Fri, Oct 3, 2008 at 5:01 PM, Colin Hales 
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 
wrote:


Dear AGI folk,
I am testing my registration on the system,, saying an inaugural
'hi' and seeking guidance in respect of potential submissions for
a presentation spot at the next AGI conference.

It is time for me to become more visible in AGI after 5 years of
research and reprogramming my brain into the academic side of
things... My plans as a post-doc are to develop a novel chip
technology. It will form the basis for what I have called
'A-Fauna'. I call it A-Fauna because it will act like biological
organisms and take their place alongside natural fauna in a chosen
ecological niche. Like tending a field as a benign 'artificial
weed-killer'...they know and prefer their weeds...you get the
idea. They are AGI robots that learn (are coached) to operate in a
specific role and then are 'intellectually nobbled' (equivalent to
biology), so their ability to handle novelty is specifically and
especially curtailed. They will also be a whole bunch cheaper in
that form...They are then deployed into that specific role and
will be happy little campers. These creatures are different to
typical mainstream AI fare because they cannot be taught how to
learn. They are like us: they learn how to learn. As a result they
can handle novelty better...a long story...Initially the A-Fauna
is very small but potentially it could get to human level. The
first part of the development is the initial proof of specific
physics, which requires a key experiment. I can't wait to do this!
The success of the experiment then leads to development and
miniaturisation and eventual application into a prototype
'critter', which will then have to be proven to have
P-consciousness (using the test in 3 below)anyway...that's the
rough 'me' of it.

I am in NICTA  www.nicta.com.au http://www.nicta.com.au
Victoria Research Lab in the Life-Sciences theme.
Department of Electrical/Electronic Eng, University of Melbourne

So... the AGI-09 basic topics to choose from are:

1) Empirical refutation of computationalism
2) Another thought experiment refutation of computationalism. The
Totally Blind Zombie Homunculus Room
3) An objective test for Phenomenal consciousness.
4) A novel backpropagation mechanism in an excitable cell
membrane/syncytium context.

1) and 2) are interesting because the implication is that if
anyone doing AGI lifts their finger over a keyboard thinking they
can be directly involved in programming anything to do with the
eventual knowledge of the creature...they have already failed. I
don't know whether the community has internalised this yet. BTW
that makes 4 ways that computationalism has been shot. How dead
does it have to get? :-) I am less interested in these than the
others.

3) Is a special test which can be used to empirically test for
P-consciousness in an embedded, embodied artificial agent. I need
this test framework for my future AGI developments...one day I
need to be able to point at at my AGI robot and claim it is having
experiences of a certain type and to be believed. AGI needs a test
like this to get scientific credibility. So you claim it's
conscious?prove it!. This is problematic but I am reasonably
sure I have worked out a way So it needs some attention (a
paper is coming out sometime soon I hope. They told me it was
accepted, anyway...). The test is double-blind/clinical style with
'wild-type' control and 'test subject'...BTW the computationalist
contender (1/2 above) is quite validly tested but would operate as
a sham/placebo control... because it is known they will always
fail. Although anyone serious enough can offer it as a full
contender. Funnily enough

Re: [agi] COMP = false

2008-10-03 Thread Colin Hales
useless. So the words 
'refutation of COMP by an attempted COMP implementation of a scientist' 
have to be carefully contrasted with the words 'you can't simulate a 
scientist'.


The self-referential use of scientific behaviour as scientific evidence 
has cut logical swathes through all sorts of issues. COMP is only one of 
them. My AGI benchmark and design aim is the artificial scientist. 
Note also that this result does not imply that real AGI can only be 
organic like us. It means that real AGI must have new chips that fully 
capture all the inputs and make use of them to acquire knowledge the way 
humans do. A separate matter altogether. COMP, as an AGI designer's 
option, is out of the picture.


I think this just about covers the basics. The papers are dozens of 
pages. I can't condense it any more than this... I have debated this so 
much it's way past its use-by date. Most of the arguments go like this: 
"But you CAN!" I am unable to defend such 'arguments from 
under-informed authority'... I defer to the empirical reality of the 
situation and would prefer that it be left to justify itself. I did not 
make any of it up. I merely observed... and so if you don't mind I'd 
rather leave the issue there.


regards,

Colin Hales



Mike Tintner wrote:

Colin:

1) Empirical refutation of computationalism...

.. interesting because the implication is that if anyone
doing AGI lifts their finger over a keyboard thinking they can be
directly involved in programming anything to do with the eventual
knowledge of the creature...they have already failed. I don't know
whether the community has internalised this yet.

Colin,

I'm sure Ben is right, but I'd be interested to hear the essence of 
your empirical refutation. Please externalise it so we can internalise 
it :)









RE: [agi] Intelligence enhancement

2003-06-22 Thread Colin Hales

This be Snyder...

http://www.centerforthemind.com/

Tread carefully.

cheers,

Col





RE: [agi] BDI architecture

2003-03-04 Thread Colin Hales



Both Peter Wallis and Mike Georgeff have a history at Melbourne University.

http://www.cs.mu.oz.au/agentlab/
and they and RMIT http://www.agents.org.au/ collaborate a lot.
There is a mailing list from which you may launch queries.
They are an active group and quite approachable.
I've gatecrashed their seminars many times!

BDI seems to have found a very useful place in the weak AI that is current
agent technology. Most of the BDI talks that I have witnessed are all about
ontologies for application of agents in different problem domains. It gets a
bit sterile from an AGI builder's point of view.

BDI is 'visible' in an AGI but, like so many other models (eg Schmidhuber's
or Baars' or, yes... Pei Wang's and Eliezer's and my own!), if you create an
AGI based on an intelligence model you have to ask yourself: is it sufficient
to create an AGI? If Baars' model was complete (global workspace) then
presumably Stan Franklin's IDA (a real-life completed implementation of it)
would be on this mailing list arguing with us!

I like to think of them as analogous to building, say, the combustion
process, whilst thinking all along that you're building a car. Idealised
models can be very right and very mis-directing. They are all worth a look
though and as I say, they are all 'right'.
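
For readers who haven't met it, here is a minimal illustrative sketch of the BDI loop (my own toy only - not the Rao & Georgeff formal model, JACK or Zeus): beliefs are revised from percepts, a viable desire is committed to as an intention, and one plan step is executed.

# Minimal illustrative BDI-style loop (sketch only; names and plan structure
# are made up for the example).

def bdi_step(beliefs, percepts, desires, plans):
    beliefs.update(percepts)                              # belief revision
    viable = [d for d in desires if plans[d]["precondition"](beliefs)]
    if not viable:
        return beliefs, None                              # nothing worth intending
    intention = viable[0]                                 # commit to the first viable desire
    action = plans[intention]["action"](beliefs)          # execute one plan step
    return beliefs, action

beliefs = {"kettle_on": False}
desires = ["make_tea"]
plans = {
    "make_tea": {
        "precondition": lambda b: not b.get("tea_made", False),
        "action": lambda b: "boil water" if not b["kettle_on"] else "pour tea",
    }
}
print(bdi_step(beliefs, {"kettle_on": True}, desires, plans))   # -> ({'kettle_on': True}, 'pour tea')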

cheers

Colin



-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Mike Deering
Sent: Wednesday, 5 March 2003 4:04 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] BDI architecture

  Margeret Heath [EMAIL PROTECTED](CyberMeg),
  
  BDI (beliefs, desires and intentions) 
  
  www.cs.toronto.edu/~mkolp/9-agents3831.pdf 
  brief explanation of a Microsoft BDI 
  agent named Jack.
  
  
  "Intention without Representation: Implementing SituatedBDI agents" 
  Peter Wallis, The Journal of Philosophical Psychology, (submitted July, 
  2002)
  
  
  
  Existing agent models emphasize an intentional notion of agency - the 
  supposition that agents should be understood primarily in terms of mental 
  concepts such as beliefs, desires and intentions. The BDI model (Rao and 
  Georgeff, 1991, 1995) is a paradigm of this kind of agent, although some 
  attempts to include social aspects have been made.
  
  
  yallara.cs.rmit.edu.au/~ldesilva/doc/cs435.pdf This document says BDI was developed by A. Rao 
  and M. Georgeff in 1991 based on work by Bratman 1987 and Singh 1991. 
  
  
  ccc.inaoep.mx/~mapco/AGws-IRT.pdf Automatic Generation and Maintenance 
  of Hypertext for the Web through Information Retrieval Techniques 
  the construction of a belief-desire-intention (BDI) 
  agent using the Zeus Agent Building Toolkit www.labs.bt.com/projects/agents.htm
  
  
  http://www.inf.pucrs.br/~giraffa/x-bdi/ The result of work developed by 
  Professor Michael da Costa Móra in his doctoral thesis at CPGCC/UFRGS - 
  FCT/UNL. The X-BDI tool was developed with the intention of reducing the 
  distance between formal theories for the specification of cognitive agents 
  and their programming.
  
  
  Mike Deering, Director
  www.SingularityActionGroup.com


RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales

 (1) Since we cannot accurately predicate the future
 implications of our
 action, almost all research can lead to deadly result ---
 just see what has
 been used as weapons in the current world. If we ask for a
 guaranty for
 safety before a research, then we cannot do anything. I don't
 think the
 danger of AGI is more clear and near than most of other research.

 (2) Don't have AGI developed in time may be even more
 dangerous. We may
 encounter a situation where AGI is the only hope for the
 survival of the
 human species.  I haven't seen a proof that AGI is more
 likely to be evil
 than otherwise.

 So my position is: let's go ahead, but carefully.

 Cheers,

 Pei


Ref:
  http://www.optimal.org/peter/siai_guidelines.htm
  www.singinst.org/CFAI.html
  www.goertzel.org/dynapsyc/2002/AIMorality.htm
Extra credit:
I've just read the Crichton novel PREY. Totally transparent movie-script but
a perfect textbook on how to screw up really badly. Basically the formula
is 'let the military finance it'. The general public will see this
inevitable movie and we will be drawn towards the moral battle we are
creating.

In early times it was the 'tribe over the hill' we feared. Communication has
killed that. Now we have the 'tribe from another planet' and the 'tribe from
the future' to fear and our fears play out just as powerfully as any time in
out history.
---

I'm pretty sure we're screwed. I've read a lot and thought a lot about this.
Whilst I disagree with Hugo de Garis' AGI (as actually constructing the worst
case AGI - an ungrounded adaptive mega-puppet of the type most likely to give
us maximum grief early), I concur with his fears - there is a battle looming
one way or another.

I try to look at things very simply and pragmatically. The easiest
characterisation of the AGIs we're aiming at is 'apparent causality
modellers'.

(Crick and Koch characterise our brains like this:
Francis Crick & Christof Koch, 'A Framework for Consciousness', Nature
Neuroscience, February 2003, Volume 6, Number 2, pp 119-126. See
http://www.nature.com/cgi-taf/DynaPage.taf?file=/neuro/journal/v6/n2/abs/nn0203-119.html )

No matter what you do, statistically we're going to be faced with scenarios
commensurate with 'reality' as made 'apparent' to our AGI. This is all we
have to go on. Latitude to create abstractions (personal truths) that are
100% factual crap and then act on them comes with the package.

IMO there is only one parameter that offers any hope: Cognition grounded in
real phenomenology. Our AGI, if it is to survive in the reality of our laws
of physics, has to be fully grounded like us. Only then will it inhabit the
same universe as us and be able to understand the universe the way it needs
to survive. This does not mean hooking up a few signals. It means real
qualia within the 'apparent causality modelling'.

This will not stop it from creating abstractions with nasty side effects for
us humans. But at least we will be able to communicate with it through the
common ground of existing in the same reality.

For those who don't get qualia: Consider a pressure transmitter 4-20mA
connected to a PC running an 'AGI'. The 4-20mA signal is around 10^15
electrons smashing into a wiring terminal. That's meaningless noise.
Pressure is a label we humans attribute. The most incredible deductive AGI
engine in the universe will _never_ understand pressure like this. If an
outside agency tells it it is pressure, the learning is grounded in the
outside agency, not in the AGI. It still does not understand pressure, only
how to respond to questions about pressure by an outside agency so it
appears like it understands. Qualia means actual pressure_ness (intentional
phenomenology in matter) in its head.
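
To make the example concrete, here is the standard 4-20mA scaling arithmetic (the 0-10 bar span is hypothetical): the program produces a number it calls 'pressure', but the label and the units come entirely from outside it.

# Standard industrial current-loop scaling; the span and units are example
# values chosen by the humans who wired and configured the transmitter.

def current_to_pressure(current_ma, low=0.0, high=10.0):
    """Map 4-20 mA linearly onto a 0-10 bar span (example span only)."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("signal out of range - transmitter fault?")
    return low + (current_ma - 4.0) * (high - low) / 16.0

print(current_to_pressure(12.0))   # -> 5.0 ... 'bar', says the human who configured it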

Prohibition doesn't work. The current round of self-censorship in the AAAS
conference is a nice thing, but the Iraqs of the world will still find a
way. I suspect there will be a time when people will herd around the
Artificial Mind Institute with a new form of life in it, demonstrating
against it.

You can do all the philosophical categorisation in the universe and
postulate all sorts of strategies. IMHO they all mean squat. I'm pretty sure
we're going to have to face this thing full on and cop the consequences.

I'm with Pei Wang. Let's explore and deal with it.

cheers,

Colin Hales










RE: [agi] Playing with fire

2003-03-03 Thread Colin Hales



Philip: I personally think humans as a society are 
capable of saving themselves from their own individual and collective 
stupidity. I've worked explicitly on this issue for 30 years and still 
retain some optimism on the subject.

Colin: I'm with Pei Wang. Let's explore and deal with it.

Philip: OK, if you're with Pei, what exactly is the position that you are not with?

Cheers, Philip
The usual paradoxes and dichotomies:
- letting it happen by accident.
- getting too misty-eyed about homo sapiens sapiens when we're a last minute mayfly in the scheme of things.
- not getting misty-eyed enough about homo sapiens sapiens and not looking after my children's interests.
- not including a moral/ethical department alongside the AGI development (like this one!)
- letting an ungrounded self-modifying AGI loose.
- constructing a cottage industry out of second-guessing an outcome. We have enough insurance salesmen.
- behaving like the down side has already happened.
- behaving like the down side won't happen...
Cheers

Colin





[agi] The AGI and the will to live

2003-01-09 Thread Colin Hales
Hi all,

I find the friendliness issue fairly infertile ground tackled way too soon.
Go back to where we are: the beginning. I'm far more interested in the
conferring of a will to live. Our natural tendency is to ascribe this will
to live to our intelligent artifacts. This 'seed' is by far the hardest
thing to create and the real determinant of 'friendliness' in the end. Our
seed? I think a model that starts from something like 'another heartbeat
must happen'. When you don't have a heart? What - poke a watchdog timer
every 100msec or die?
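
As a toy sketch of that idea (illustrative only), the entire 'will to live' reduces to keeping a watchdog fed:

# Illustrative only: the agent's sole intrinsic drive is another heartbeat.

import time

class Watchdog:
    def __init__(self, timeout=0.1):                 # 100 msec
        self.timeout = timeout
        self.last_poke = time.monotonic()

    def poke(self):
        self.last_poke = time.monotonic()

    def expired(self):
        return time.monotonic() - self.last_poke > self.timeout

dog = Watchdog()
for step in range(5):
    time.sleep(0.05)     # the agent "does something" for 50 msec...
    dog.poke()           # ...and then attends to its only real goal: another heartbeat
print("still alive" if not dog.expired() else "dead")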

My feeling at the moment is that far from having a friendliness problem
we're more likely to need a cattle prod to keep the thing interested in
staying awake, let alone getting it to take the trouble to formulate any
form of friendliness or malevolence or even indifference.

If our artifact is a zombie, what motivation is there to bother _faking_
friendliness or malevolence or even indifference? Without it Pinocchio the
puppet goes to sleep.

If our artifact is not a zombie (i.e. has a real subjective experience) then
what motivates _real_ friendliness or malevolence or even indifference?
Without it Pinocchio the artificial little boy goes to sleep.

Whatever the outcome, at its root is the will to even start learning that
outcome. You have to be awake to have a free will.

What gets our AGI progeny up in the morning?

regards,


Colin Hales

