Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-29 Thread Robert Swaine
 
The paper as a link instead of attachment:
http://mindsoftbioware.com/yahoo_site_admin/assets/docs/Swaine_R_Story_Understander_Model.36375123.pdf
 
The paper gives a quick view of the human-centric representation and 
behavioral-systems approach: problem-solving and reasoning as giving meaning 
(human values) to stories and games. Indexing relations via spatially related 
registers is its simulated substrate.
 
cheers,
Robert




Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-29 Thread Robert Swaine
Mike, 
 
>
>Six  2003
>Seven  1996
>Eight 2001
>Eight and a half 
 
Good point with the movies: only a hardcore movie fan would make that 
association early in his trials to figure out the pattern as movie dates.  In 
this case you gave a hint; such a hint would tell the system to widen its 
attention spotlight to include "movies", so that entertainment, events, 
celebration, etc. would come under attention, based on what structure the 
movie concept's parent has in its domain content.
 
Thinking imaginatively to find hard solutions, as you say, is possible with 
this system by telling it to "think outside the box" to other domains; it can 
learn this pattern of domain-hopping based on the reward of a success, or by 
being authorized to value cross-domain attention search.  Thinking, for the 
system, is: shifting its attention to different regions (within the 4 
domains), sizing and orienting the attention scale, and setting the focus 
depth (of details); it can then read the contents of what comes up from that 
region and Compare, Contrast, Combine it to analyze or synthesize.  Thinking 
bigger or narrower is almost literal.
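 
A toy sketch of that read-and-3-C loop, in Python, assuming each attended 
region holds a set of features; the region names, contents, and the 
set-operation reading of the operators are illustrative, not the 
implementation:

```python
# Toy sketch of "thinking" as attention placement plus the 3-C operators:
# shift attention to a region, read its contents, then Compare / Contrast /
# Combine. Region names and contents are hypothetical placeholders.

regions = {
    "movies":  {"Six", "Seven", "Eight", "2003", "1996", "2001"},
    "numbers": {"1996", "2001", "2003", "leap years"},
}

def attend(region, focus=None):
    """Read what comes up from a region; a focus term narrows the detail."""
    contents = regions[region]
    return {c for c in contents if focus is None or focus in c}

def compare(a, b):   # what the two attended regions share
    return a & b

def contrast(a, b):  # what distinguishes them
    return a ^ b

def combine(a, b):   # synthesize a wider situation
    return a | b

movies, numbers = attend("movies"), attend("numbers")
print(compare(movies, numbers))   # {'1996', '2001', '2003'}
print(contrast(movies, numbers))  # what is unique to each region
```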
 
Like humans, this system stops a behavior (e.g., stops searching) because it 
runs out of motivation value, not ideas to search.  Many known or described 
systems can lend themselves to brute-force thinking when unsure of a 
solution; this structure allows it to do so elegantly, using human-centric 
concept domains "first" (it is easier for us to communicate with it this way, 
saying "build a damn good engine" as humans do, versus 0010101101 or any 
other non-human language).
 
It can and does re-write the concepts and content in its domains as it 
learns, but it starts with the domains humans give it.  E.g., I knew what 
movies were by having lived through a number of situations where this concept 
was built up, so that later I could learn about independent films and live 
performances, or new types of entertainment that give similar or unfamiliar 
emotions.
 
Further rationale:
1) What humans do: have a bias (value system) that makes sense relative to 
our biological architecture; generate all human knowledge in this 
representation structure (natural language: an ambiguous, low-logic language).
 
2) What an early AGI can do: learn the human bias by having a similar 
architecture, which includes the value bias for the patterns humans seek. 
Obtain as much of the recorded knowledge in the world from humans.  Generate 
more, faster, new and better knowledge.  "Better" is because it knows our 
value system, and it also knows humans well enough to convince them in a 
discussion, unlike most of us, that "better" is whatever it wants us to do 
(very bad!).
 
For natural language processing: humans readily communicate in songs and 
poems, and understand them.  Many songs and poems do not make any logical 
sense, and few songs have a word order and story elements that are 
reasonable.  The model makes sense of them by looking for patterns where 
humans do: in the beats (the situational borders that structure all input) 
and in the value (emotional meaning) of the song's or poem's content.
 
Hope some of this helps
Robert
 


--- On Sun, 12/28/08, Mike Tintner  wrote:

From: Mike Tintner 
Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] 
AGI Preschool: sketch of an evaluation framework for early stage AGI systems 
aimed at human-level, roughly humanlike AGI
To: agi@v2.listbox.com
Date: Sunday, December 28, 2008, 11:38 PM



Robert,
 
Thanks for your detailed, helpful replies. I like your approach of operating 
in multiple domains for problem-solving. But if the domains are known 
beforehand, then it's not truly creative problem-solving - where you do have 
to be prepared to go in search of the appropriate domains - and thus truly 
cross domains rather than simply combining preselected ones. I gave you a 
perhaps exaggerated example just to make the point. You had to realise that 
the correct domain to solve my problem was that of movies - the numbers were 
the titles of movies and the dates they came out. If you're dealing with real 
world rather than just artificial creative problems like our two, you may 
definitely have to make that kind of domain switch - solving any scientific 
detective problem, say, like that of binding in the brain, may require you to 
think in a surprising, new domain, for which you will have to search long and 
hard (and possibly without end).







Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-28 Thread Robert Swaine
Mike,
Very good choice.
 
> But the system always *knows* these domains beforehand  - and that it must 
> consider them in any problem?
 
 
YES - the domains' "content structure", which is what you mean, are the 
human-centric ones provided by living a child's life, loading the value 
system with biases such as "humans are warm" and "candy is really sweet".  By 
further being pushed through a Western-culture grade-level curriculum, we 
come to value the visual feature symbols "2003" and "1996" as "numbers", then 
as "dates".  The content models (concept patterns) are built up from any 
basic feature, to form instances from the basic content of the four domains, 
such as "dates of leap years", "century marks", "millennium" or "anniversary".
 
The problems are more like:
  --  "ice cream favorite red happee"  -- 
What this group of words means has everything to do with what the reader 
knows and values beforehand.  And what he values will determine what his 
attention is on - the food, the emotions, the color, the positions - and how 
deep the focus is: on the entire situation (sentence), a group of words, a 
single word, or a letter.  Humans value from the top, so we'll likely think 
of cherry ice cream before we see:
    the occurrence pattern of the letter "e" in every word of that 'sentence' above.
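 
A tiny sketch of those two focus depths over the same input (plain Python, 
nothing model-specific):

```python
# Same input, two focus depths: whole words first (the human default),
# then the letter level where the "e" pattern lives.
sentence = "ice cream favorite red happee"
words = sentence.split()

print(words)                          # wide focus: the words themselves
print(all("e" in w for w in words))   # narrow focus: True, 'e' in every word
```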
 
 
 
Good choice for your problem: 
Six  2003
Seven  1996
Eight 2001
Eight and a half   ?  (I see a number of patterns, such as "00", "99", 
multiply, add a word to the end - but haven't gotten the complete formula)
 
For the system, it is biased; it makes sense for itself, by its internal 
values.
 
The answer the system chooses is the one that makes sense given what it knows 
and values.  Sure, it can and will be used for general pattern mining by 
comparing and contrasting within lines, line-to-line, number-to-text, 
text-to-number, date-to-word, month-to-number, middle-part to end, 
end-to-end, etc., until a resulting comparison yields a pattern that it 
values (from experience or being "told").  However, the value system 
controlling attention prevents any combinatorial explosion - animals only 
search through the models that have value (directly or indirectly) to the 
problem situation, thus limiting the total guesses we could even make (it 
looks for patterns it already knows).
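 
A sketch of that value gate, assuming each known pattern carries a learned 
value and only patterns above an attention threshold are ever tried; the 
pattern names and numbers are illustrative:

```python
# Value-gated search: only models with value to the situation are searched,
# which caps the number of guesses. Names and values are toy stand-ins.

known_patterns = {
    "movie release dates": 0.9,   # high value: learned, or raised by a hint
    "digit arithmetic":    0.6,
    "letter frequencies":  0.2,
    "word lengths":        0.1,
}

def attended(threshold=0.5):
    """Attention only admits patterns whose value clears the threshold."""
    return [p for p, v in known_patterns.items() if v >= threshold]

print(attended())   # ['movie release dates', 'digit arithmetic']
```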
 
To solve problems it has not been taught, or can't see a pattern for (a toy 
sketch of both cases follows the list):
 
1) If self-motivated, because a reward/avoidance is strong:
It keeps looking for patterns to 3-C by persisting in its behavior (doing the 
same old thing) and failing.
If a value happens to occur in one of the results as it keeps going, it will 
see that something was different.  It has access to its own actions (role and 
relation domains), and this different action stands out (auto-contrast) and 
becomes of greater value due to the associated difference (non-failure).  It 
keeps trying until the motivation runs out (energy level decays) or other 
values or past experiences exceed its model of how long it should take.
 
2) Instructed how to solve it by trying x, y or z:
"Widen your attention", "expand your focus" - then it has a larger set of 
regions in which to try to find a pattern it values.  If set, it can examine 
the regions of the instruction (x, y, and z) and see what was different from 
what it was trying (if the comparison yields a high enough value, it will try 
those as well).  "Try going left and up" - O.K., auto-contrast: "I was trying 
only up; the difference is to add one more direction; I can try left and up, 
and back", etc.
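 
The toy sketch promised above, covering both cases: the loop stops when 
motivation decays rather than when ideas run out, and auto-contrast widens 
the candidate actions by differencing the instruction against what was 
already tried.  The actions, reward, and decay rate are stand-ins:

```python
# Sketch of cases 1) and 2): persist until motivation decays (not until
# ideas run out); auto-contrast the instruction against past tries.

def solve(tried_actions, instructed_actions, rewarded, motivation=1.0):
    candidates = set(instructed_actions)
    while motivation > 0.1:                  # stops on motivation, not ideas
        novel = candidates - tried_actions   # auto-contrast: what's different
        if not novel:
            break
        action = novel.pop()
        tried_actions.add(action)
        if action == rewarded:               # non-failure stands out, gains value
            return action
        motivation *= 0.8                    # energy level decays with each try
    return None

print(solve({"up"}, {"left+up", "back+up"}, rewarded="left+up"))  # 'left+up'
```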
 
 

Creativity and reason come from the 3-C mechanism.
 
Creativity in the model is combining any sets of domain content and giving 
the result a respective value from the system's experience and domain models. 
 
Example: Combine the form of a computer mouse, the look of diamonds, the 
function of a steering wheel, and the feel of leather: what do you get?  
Focus on each region and combine, then e-valuate (compare it to objects, 
functions).  What's your result?
Models in my experience say that it's a luxury-car controller; while you 
might say it would be something in an art gallery, etc. (art: value without 
function/role).
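 
A sketch of that combine-then-evaluate step, with each concept reduced to the 
single hypothetical attribute slot under attention:

```python
# Combine the attended slot of each concept, then let experience assign
# the value. Concepts and slots are hypothetical placeholders.

concepts = {
    "computer mouse": {"form": "palm-sized shell"},
    "diamonds":       {"look": "faceted sparkle"},
    "steering wheel": {"function": "directional control"},
    "leather":        {"feel": "soft grain"},
}

def combine(*names):
    """Merge only the region under attention from each concept."""
    result = {}
    for name in names:
        result.update(concepts[name])
    return result

invention = combine("computer mouse", "diamonds", "steering wheel", "leather")
print(invention)
# e-valuation against one set of domain models might say "luxury-car
# controller"; against another, "art object" (value without function/role).
```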
 
 
Anyway, Ben's pre-school for AGI is one of the means to bias such a system 
with experience and human values; another way is to try to properly represent 
human experience (static and dynamic) and then essentially implant memories 
and "experience" instead of just declarative facts.
 
Robert
 

--- On Sun, 12/28/08, Mike Tintner  wrote:

From: Mike Tintner 
Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] 
AGI Preschool: sketch of an evaluation framework for early stage AGI systems 
aimed at human-level, roughly humanlike AGI
To: agi@v2.listbox.com
Date: Sunday, December 28, 2008, 8:38 PM





 
Robert,
 
So, if I underst

Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-28 Thread Robert Swaine
Mike,
 
Mike wrote:
>What kind of problems have you designed this to solve? Can you give some 
>examples?
 
Natural language understanding, path finding, game playing
 
Any problem that can be represented as a situation in the four component 
domains (value - role - relation - feature models) can be 3-C'd (compared, 
contrasted, combined) to give a resulting situation (frame pattern).  What is 
combined, compared, or contrasted?  Only the regions under attention, at 
their focus detail level, are examined.  What is placed and represented in 
the regions determines what components can be 3-C analyzed... as a general 
computing paradigm using 3-C (AND - OR - NOT).
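 
Read literally, that AND - OR - NOT mapping suggests set operations over 
whatever the attended regions contain; a minimal sketch, with made-up region 
contents:

```python
# 3-C over attended regions as the set operations the mapping names:
# compare ~ AND (intersection), combine ~ OR (union), contrast ~ NOT (difference).

region_1 = {"word:five", "role:number-word", "quantity:5"}
region_2 = {"word:seven", "role:number-word", "quantity:7"}

print(region_1 & region_2)   # compare  ~ AND: {'role:number-word'}
print(region_1 | region_2)   # combine  ~ OR:  the synthesized situation
print(region_1 - region_2)   # contrast ~ NOT: what only region 1 carries
```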
 
 
Example:
Here's a pattern example you may not have seen before, but by 3C you discover 
the pattern and how to make an example:
 
As spoken aloud:
five and nine    [is]   fine
two and six [is]   twix
five and seven  [is]   fiven
 
Take the "five and seven = fiven".
when the system compares the resultant of "fiven" to "five" ..the result is 
that "five" is at the start of the situation.
When it compares "fiven" and "seven"... the result is that "ven" is at the end 
position.
 
resulting situation PATTERN = 
[situation 1 ][ focus inward ] [ start-position ]    combined with 
[situation 2 ][ focus inward ] [ end position  ]
(Spatial and sequence positions are a key part of the representation system)
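 
A sketch of those two comparisons over plain character strings, plus a 
reapplication of the recovered pattern; the helper names and the fixed split 
in `blend` are mine, for illustration only:

```python
# Recover the pattern from "five and seven = fiven" by comparison, then
# reapply it. Helpers and the split choice are illustrative, not the model.

def shared_prefix(result, source):
    """How much of `result` sits at the start, as in comparing fiven/five."""
    n = 0
    while n < min(len(result), len(source)) and result[n] == source[n]:
        n += 1
    return result[:n]

def shared_suffix(result, source):
    """How much of `result` sits at the end, as in comparing fiven/seven."""
    n = 0
    while n < min(len(result), len(source)) and result[-1 - n] == source[-1 - n]:
        n += 1
    return result[len(result) - n:]

print(shared_prefix("fiven", "five"))    # 'five' -> start of situation 1
print(shared_suffix("fiven", "seven"))   # 'ven'  -> end of situation 2

# PATTERN: [situation 1][start position] combined with [situation 2][end position].
def blend(word1, word2, keep1=2, keep2=2):
    return word1[:keep1] + word2[-keep2:]

print(blend("five", "nine"))   # 'fine'
print(blend("two", "six"))     # 'twix'
```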
 
How was the correct (reasoning) method chosen?
This result was by comparison; it could have been by contrasting.  All three 
- Compare, Contrast and Combine - happen simultaneously.  The winner is 
whichever resulting situation makes sense to the system, i.e., has the most 
activation in the value area (some direct or indirect value from past 
experience, or value given by the "authority system" in the value region: 
e.g. the fearful or attractive spectrum).
 
How were the correct region and focus detail level chosen?
The attention region in the example was the sound region, and the focus 
detail was at the phoneme level (syllable); it could instead have looked for 
patterns in the number values, the emotions related to each word, the letter 
patterns, hand motions, eye position when spoken, etc.  The regions are 
biased by the value system's current index (amygdala/septum analog): e.g., 
when you see "five", the quantity region will be given a lower threshold, and 
the associated focus level will give the content on the 1 - 10 scale.  The 
index region weights are re-organized only by stronger reward/failure (the 
authority system); 3-C results can act on the index, changing the content 
connection weights.
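 
A sketch of that index bias, with thresholds as plain numbers; the regions, 
associations, and values are toy stand-ins:

```python
# The value index biasing regions: seeing "five" lowers the quantity
# region's threshold so its content comes up first.

base_thresholds = {"quantity": 0.9, "sound": 0.9, "emotion": 0.9}
index_bias = {"five": {"quantity": 0.4, "sound": 0.6}}   # learned associations

def firing_order(token):
    biased = dict(base_thresholds)
    for region, t in index_bias.get(token, {}).items():
        biased[region] = min(biased[region], t)   # lower = easier to activate
    return sorted(biased, key=biased.get)

print(firing_order("five"))   # ['quantity', 'sound', 'emotion']
```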
 
Now, compare apples to oranges for an encore; what do you get?  A color, a 
taste, a mass, a new fruit?  Your attention determines the result.
 
All regions are being matched for patterns by the 2 primary index modules 
(action selection and emotional value; others can be integrated seamlessly).
 
"Five and seven" is not "fiven", it is twelve; but in this situation "fiven" 
makes sense for the circumstances.  Sense and meaning are contextual for the 
model, as for humans.
 
 
Hope this sheds light.  A detailed paper has been in the works.
Robert
 
--- On Sun, 12/28/08, Mike Tintner  wrote:

From: Mike Tintner 
Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] 
AGI Preschool: sketch of an evaluation framework for early stage AGI systems 
aimed at human-level, roughly humanlike AGI
To: agi@v2.listbox.com
Date: Sunday, December 28, 2008, 4:49 PM

Robert,
 
What kind of problems have you designed this to solve? Can you give some 
examples?
Robert:
 
A brief paper on an AGI system for human-level  ...had only 2 pages to fit in.
 
If you are working on a system, you probably hope it will one day help design 
a better world, better tools, better inventions.  "Better" is a subjective 
human value.  A place for, or human-like representation of, at least rough, 
general human values (biases, likes) in the AGI is essential.
 
The paper gives a quick view of the human-centric representation and 
behavioral-systems approach: problem-solving and reasoning as giving meaning 
(human values) to stories and games.  Indexing relations via spatially 
related registers is its simulated substrate.
 
Happy Holidays,
Robert
 
...all the human values were biased, unlike the very objective AGI systems 
designed on the Mudfish's home planet; AGI systems that objectively knew that 
sticky mud is beautiful, large oceans of gooey mud... how enchanting!  Pure 
clean water, now that's fishy!"
  





[agi] Images read from human brain-Old ground, new thoughts?

2008-12-11 Thread Robert Swaine
fMRI scanner reconstructing images seen by subjects, etc

http://www.yomiuri.co.jp/dy/features/science/20081211TDY01306.htm
copied below - Anyone read the actual article in "Neuron"?


//
Images read from human brain
The Yomiuri Shimbun

OSAKA--In a world first, a research group in Kyoto Prefecture has succeeded in 
processing and displaying optically received images directly from the human 
brain. 

The group of researchers at Advanced Telecommunications Research Institute 
International, including Yukiyasu Kamitani and Yoichi Miyawaki, from its 
NeuroInformatics Department, said about 100 million images can be read, adding 
that dreams as well as mental images are likely to be visualized in the future 
in the same manner. 

The research will be published Thursday in the U.S. scientific journal 
"Neuron." 

Optically received images are converted to electrical signals in the retina and 
treated in the brain's visual cortex. 

In the recent experiment, the research group asked two people to look at 440 
different still images one by one on a 100-pixel screen. Each of the images 
comprised random gray sections and flashing sections. 

The research group measured subtle differences in brain activity patterns in 
the visual cortexes of the two people with a functional magnetic resonance 
imaging (fMRI) scanner. They then subdivided the images and recorded the 
subjects' recognition patterns. 

The research group later measured the visual cortexes of the two people who 
were looking at the word "neuron" and five geometric figures such as a square 
and a cross. Based on the stored brain patterns, the research group analyzed 
the brain activities and reconstructed the images of Roman letters and other 
figures, succeeding in recreating optically received images. 

(Dec. 11, 2008)




RE: [agi] The Future of AGI

2008-11-26 Thread Robert Swaine
 
Derek wrote:
>>Building computer systems that generate pictures or videos as their way of 
>>communicating with us could be a very lucrative addition to computer 
>>applications that include cognitive models of their users (instead of 
>>focusing solely on generating natural language), because most of us do 
>>process visual information so well.
---
 
1.
Derek,
 
A simple movie, like a visual slide show or cartoon, would be a fairly 
straightforward implementation for certain symbolic/connectionist and other 
models.
 
The architecture I work on is set to do that as a medium-term goal.  Any 
representation system (language) is learned within the contextual patterns 
it's used in, so a visual language (shapes), motion language (ultimately 
gestures), text, sound (unimplemented), etc. - all sensor types - can be 
integrated with the core system as interchangeable patterns.*
 
You can simply read off the state from any region to see what it's thinking, 
within specific and multimodal sensors, as well as what its attention is on 
and the focus level of that attention; e.g. looking at the colorful red and 
blue car in the parking lot, but narrowly focused on the rust on the door 
handle, or widely defocused on the entire parking lot and the heat from the 
pavement - all in the same situation.  When this situation is recalled or 
referenced, the scene is pulled up and it will generate its version of what 
it was focused on (the rust on the door handle, or the heat).  It can then 
shift its focus outward or to other regions in the scene, or modify a feature 
(say, change the red colors to blue on the car, etc.).
 
A movie from its output would look like the jerky camera movements and quick 
focus changes in a show like "NYPD Blue", the new "Battlestar Galactica" 
space scenes, or History Channel's "Dogfights" sky scenes.
 
 
*Patterns (for the model) have spatial and serial components to varying 
degrees for each sensor: e.g. body-space maps easily to vision, sound to 
change (motion sequence), force (touch) to vision... For the sensors that 
don't map well, just force it and you get your less seamless metaphors: a 
sweet sound, but left the room with a bitter taste.

 
 2.
Derek wrote:
>>because most of us do process visual information so well.
 
It makes it easier to program visually, to see what the system is doing as a 
whole or by states.
 

--- On Wed, 11/26/08, Derek Zahn <[EMAIL PROTECTED]> wrote:

From: Derek Zahn <[EMAIL PROTECTED]>
Subject: RE: [agi] The Future of AGI
To: agi@v2.listbox.com
Date: Wednesday, November 26, 2008, 11:02 AM





Although a lot of AI-type research focuses on natural language interfaces 
between computer systems and their human users, computers have the ability to 
create visual images (which people can't do in real-time beyond gestures and 
facial expressions).  Building computer systems that generate pictures or 
videos as their way of communicating with us could be a very lucrative addition 
to computer applications that include cognitive models of their users (instead 
of focusing solely on generating natural language), because most of us do 
process visual information so well.
 
This is really narrow AI I suppose, though it's kind of on the borderline.  It 
does seem like one of the ways to commercialize incremental progress toward AGI.
 
Derek Zahn
supermodelling.net














[agi] How much/little qualia do you need to be conscious

2008-11-17 Thread Robert Swaine
Richard,

This is probably covered elsewhere, but help me on this, just some thoughts at 
the end.

Many humans don't share the full complement of sensory apparatus: blind, 
deaf, unable to feel pain, taste, the vestibular sense of motion, body 
sensation, etc., either through damage or congenitally.  So questions about 
what makes red appear "red", or why chicken tastes like "chicken", have no 
meaning for them; e.g., a sixth (6th)-dimensional being asking you what a 
certain fifth (5th)-dimensional object looks like.

Under your model or view, is consciousness on a graduated scale, and does it go 
to zero?

For instance, if you remove ALL of the qualia's sensory origins/functions 
completely - removing the circuits and mechanisms, etc. - is the human still 
conscious?  In the same vein, does adding more qualia, as you proposed in the 
paper, lead to more consciousness?  E.g., tetrachromats - (usually) women 
with four color receptors - are they more conscious than color-blind 
(usually) men?

My view remains the same: there's probably nothing there but mechanisms, with 
feedback and attention on other mechanisms.  Other unknowns leading to actual 
consciousness - the unsensed, unknown forces or intangibles, planes of 
existence, simulations, gods, etc. - it's all possible.  However, 
consciousness is not necessary for a human-skill-level AGI that can invent 
tools to help us answer all the above questions and unknowns.  E.g., a 
microscope or a medical tool need not be alive to teach us more about life 
and biology.  We'll build it, ask it if it's comfortable, and ask it to tell 
us god's phone number and the meaning of life, the universe and everything 
(42).

Richard Loosemore wrote:
> And, please don't misunderstand: this is not a "path to AGI".  Just an 
> important side issue that the general public cares about enormously.

For any AGI system, the saying applies: "If it walks like a duck, and it 
quacks like a duck... it is a duck".  We treat ducks like ducks, not because 
they walk and quack like a duck, but because we - each individual's value 
system - "feel" that this is how a duck-like walk and a quack-like sound 
should be treated, whether it's a duck or not.  If the AGI exhibits 
human-like behaviors, humans - the general public - will treat it according 
to how humans feel about those particular behaviors (positive and aversive). 

AIBO robo-pet dogs, ELIZA chatterbox programs, sex toys... all are treated by 
their owners/users because of how the owner feels about the behaviors 
expressed, and not because of what is innately going on in, or actually 
characterizes, the object.  It's just as valid to say that zombies give other 
zombies meaning/value because they have behaviors that other zombies value as 
useful.  We see the earth as alive because we attend to behaviors it has that 
are similar to our own behaviors that we value.  Is the earth really alive, 
or does it have awareness/sensory mechanisms?  Stars as conscious 
lifeforms?.. an unconscious AGI can answer these as quickly as a conscious 
one.

But, I say, keep up your research efforts; discoveries come from wherever 
others fail to look.

Robert


--- On Mon, 11/17/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> From: Richard Loosemore <[EMAIL PROTECTED]>
> Subject: Re: [agi] A paper that actually does solve the problem of 
> consciousness
> To: agi@v2.listbox.com
> Date: Monday, November 17, 2008, 6:33 PM
> Mark Waser wrote:
> > An excellent question from Harry . . . .
> >
> >> So when I don't remember anything about those towns, from a few minutes
> >> ago on my road trip, is it because (a) the attentional mechanism did not
> >> bother to lay down any episodic memory traces, so I cannot bring back
> >> the memories and analyze them, or (b) that I was actually not
> >> experiencing any qualia during that time when I was on autopilot?
> >>
> >> I believe that the answer is (a), and that IF I had stopped at any point
> >> during the observation period and thought about the experience I just
> >> had, I would be able to appreciate the last few seconds of subjective
> >> experience.
> >
> > So . . . . what if the *you* that you/we speak of is simply the
> > attentional mechanism?  What if qualia are simply the way that other
> > brain processes appear to you/the attentional mechanism?
> >
> > Why would "you" be experiencing qualia when you were on autopilot?  It's
> > quite clear from experiments that humans don't "see" things in their
> > visual field when they are concentrating on other things in their visual
> > field (for example, when you are told to concentrate on counting
> > something that someone is doing in the foreground while a man in an ape
> > suit walks by in the background).  Do you really have qualia from stuff
> > that you don't sense (even though your sensory apparatus picked it up,
> > it was clearly discarded at some level below the conscious/attentional
> > level)?
> 
> Yes, I did not mean to imply that all unatte

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Robert Swaine
Consciousness is akin to the phlogiston theory in chemistry.  It is likely a 
shadow concept, similar to how bodily reactions make us feel that the heart 
is the seat of emotions.  Gladly, cardiologists and heart surgeons do not 
look for a spirit, a soul, or kindness in the heart muscle.  The brain organ 
need not contain anything beyond the means to effect physical behavior... and 
feedback as to that behavior.
 
A finite degree of sensory awareness serves as a suitable replacement for 
consciousness; in other words, just feedback.

Would it really make a difference if we were all biological machines, and our 
perceptions were the same as those of other animals or other "designed" minds 
- more so if we were in a simulated existence?  The search for consciousness 
is a misleading (though not entirely fruitless) path to AGI.


--- On Fri, 11/14/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> From: Richard Loosemore <[EMAIL PROTECTED]>
> Subject: [agi] A paper that actually does solve the problem of consciousness
> To: agi@v2.listbox.com
> Date: Friday, November 14, 2008, 12:27 PM
> I completed the first draft of a technical paper on consciousness the
> other day.  It is intended for the AGI-09 conference, and it can be found
> at:
> 
> http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
> 
> The title is "Consciousness in Human and Machine: A Theory and Some
> Falsifiable Predictions", and it does solve the problem, believe it or not.
> 
> But I have no illusions: it will be misunderstood, at the very least.  I
> expect there will be plenty of people who argue that it does not solve the
> problem, but I don't really care, because I think history will eventually
> show that this is indeed the right answer.  It gives a satisfying answer
> to all the outstanding questions and it feels right.
> 
> Oh, and it does make some testable predictions.  Alas, we do not yet have
> the technology to perform the tests, but the predictions are on the table,
> anyhow.
> 
> In a longer version I would go into a lot more detail, introducing the
> background material at more length, analyzing the other proposals that
> have been made and fleshing out the technical aspects along several
> dimensions.  But the size limit for the conference was 6 pages, so that
> was all I could cram in.
> 
> Richard Loosemore


