[agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mike Tintner

Matthias,

You seem - correct me - to be going a long way round saying that words are 
different from concepts - they're just sound-and-letter labels for concepts, 
which have a very different form. And the processing of words/language is 
distinct from and relatively simple compared to the processing of the 
underlying concepts.


So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words c-a-t or m-i-n-d or U-S  or f-i-n-a-n-c-i-a-l c-r-i-s-i-s 
are distinct from the underlying concepts. The question is: What form do 
those concepts take? And what is happening in our minds (and what has to 
happen in any mind) when we process those concepts?


You talk of patterns. What patterns, do you think, form the concept of 
"mind" engaged in thinking about sentence 2? Do you think that concepts 
like "mind" or "the US" might involve something much more complex still? 
Models? Or is that still way too simple? Spaces?


Equally, of course, we can say that each *sentence* above is not just a 
verbal composition but a conceptual composition - and the question then is: 
what form does such a composition take? Do sentences form, say, a pattern 
of patterns, or something like a picture? Or a blending of spaces?


Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. a million dollars - 
something that we know can be cashed in, in an infinite variety of ways, but 
that we may not have to start cashing in (when processing) unless really 
called for - or only cash in so far?
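
One computational reading of the money analogy (my own sketch, with made-up 
names, not anything proposed in this thread): a concept as a handle whose 
detailed content is only "cashed in" on demand, and only as far as the 
current task requires.

class Concept:
    def __init__(self, name, expand):
        self.name = name
        self._expand = expand      # callable that produces detail on demand
        self._cache = []           # detail cashed in so far

    def cash_in(self, depth=1):
        """Expand only as much detail as is actually called for."""
        while len(self._cache) < depth:
            self._cache.append(self._expand(len(self._cache)))
        return self._cache[:depth]

mind = Concept("mind", lambda level: f"detail level {level} of 'mind'")
print(mind.name)         # often the label alone is enough to keep processing
print(mind.cash_in(2))   # cash in a little detail only when really needed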


P.S. BTW this is the sort of psycho-philosophical discussion that I would 
see as central to AGI, but that most of you don't want to talk about?






Matthias: What the computer does with the data it receives depends on the 
information in the transferred data, its internal algorithms and its internal 
data. This is the same with humans and natural language.

Language understanding would be useful for teaching the AGI existing knowledge 
already represented in natural language. But natural language understanding 
suffers from the problem of ambiguity. These ambiguities can be resolved by 
having knowledge similar to what humans have. But then you have a recursive 
problem, because the problem of obtaining this knowledge has to be solved 
first.

Nature solves this problem with embodiment. Different people have similar 
experiences, since the laws of nature do not depend on space and time. 
Therefore we can all imagine a dog which is angry. Since we have experienced 
angry dogs but we haven't experienced angry trees, we can resolve the 
linguistic ambiguity of my earlier example and answer the question: Who was 
angry?

The way to obtain knowledge through embodiment is hard and long, even in 
virtual worlds. If the AGI is to understand natural language, it would need 
to have experiences similar to those humans have in the real world. But this 
would require a very, very sophisticated and rich virtual world. At the least, 
there would have to be angry dogs in the virtual world ;-)

As I have already said, I do not think the relation between the utility of 
this approach and its costs would be positive for a first AGI.








Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mike Tintner

Matthias,

I take the point that there is vastly more to language understanding than 
the surface processing of words as opposed to concepts.


I agree that it is typically v. fast.

I don't think though that you can call any concept a pattern. On the 
contrary, a defining property of concepts, IMO, is that they resist 
reduction to any pattern or structure - which is rather important, since my 
impression is most AGI-ers live by patterns/structures. Even a concept like 
triangle cannot actually be reduced to a pattern. Try it, if you wish.


And the issue of conceptualisation - of what a concept consists of - is 
manifestly an unsolved problem for both cog sci and AI, and of utmost, 
central importance for AGI. We have to understand how the brain performs its 
feats here, because that, at a rough general level, is almost certainly how 
it will *have* to be done. (I can't resist being snide here and saying that 
since this is an unsolved problem, one can virtually guarantee that AGI-ers 
will therefore refuse to discuss it).


Trying to work out what information the brain handles, for example, when it 
talks about


THE US IS THE HOME OF THE FINANCIAL CRISIS

- what passes - and has to pass - through a mind thinking specifically of 
the financial crisis? - is in some ways as great a challenge as working out 
what the brain's engrams consist of. Clearly it won't be the kind of mere 
symbolic, dictionary processing that some AGI-ers envisage.


It will be perhaps as complex as the conceptualisation of "party" in:

HOW WAS THE PARTY LAST NIGHT?

where a single word may be used to touch on, say, two hours or more of 
sensory, movie-like experience in the brain.


I partly disagree with you about how we should study all this -  it is vital 
to look at how we understand, or rather fail to understand and get confused 
by concepts and language - which happens all the time. This can tell us a 
great deal about what is going on underneath.



Matthias:
For the discussion of the subject, the details of the pattern representation 
are not important at all. It is sufficient if you agree that a spoken 
sentence represents a certain set of patterns which are translated into the 
sentence. The receiving agent retranslates the sentence and matches the 
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The 
brain even predicts patterns if it just hears the first syllable of a word:

http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm 
which manipulates patterns. It is just translating, sending, receiving and 
retranslating.

From the ambiguities of natural language you obtain some hints about the 
structure of the patterns. But you cannot even expect to obtain all the 
details of these patterns by understanding the process of language 
understanding. There will probably be many details within these patterns 
which are only necessary for internal calculations. These details will not 
be visible from the linguistic point of view. Just think about communicating 
computers and you will know what I mean.


- Matthias
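
A minimal sketch of the translate/send/retranslate picture Matthias describes 
(my own toy, not his code; the agents, words and patterns are invented): the 
words carry only labels, while the real content is whatever patterns each 
agent happens to hold for those labels.

class Agent:
    def __init__(self, patterns):
        # patterns: concept id -> internal representation (here, a feature set)
        self.patterns = patterns
        self.word_for = {cid: cid for cid in patterns}   # concept -> word label
        self.concept_for = {w: c for c, w in self.word_for.items()}

    def translate(self, concept_ids):
        """Encode active patterns as a sequence of word labels."""
        return [self.word_for[c] for c in concept_ids]

    def retranslate(self, words):
        """Reactivate whichever of this agent's own patterns match the labels."""
        return {w: self.patterns.get(self.concept_for.get(w)) for w in words}

speaker = Agent({"cat": {"furry", "pet"}, "mat": {"flat", "floor"}})
listener = Agent({"cat": {"animal", "meows"}, "mat": {"rug"}})

message = speaker.translate(["cat", "mat"])   # ['cat', 'mat']
activated = listener.retranslate(message)     # listener's own, different patterns
print(message, activated)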


Re: [agi] Re: Defining AGI

2008-10-18 Thread Mike Tintner



Trent:  Oh you just hit my other annoyance.

How does that work?

Mirror neurons

IT TELLS US NOTHING.



Trent,

How do they work? By observing the shape of humans and animals (what shape 
they're in), our brain and body automatically *shape our bodies to mirror 
their shape* (put ourselves into their skin, i.e. body, into their place, 
into their shoes), and we can work out the nature/style of their movement 
and, with it, their emotions.


Hence by observing:

http://www.hotsalsa.co.uk/danceMat.jpg

(little more than literally shapes of the dancers - with almost zero 
muscular info)


you can not only get up and mimic the immediate shapes/movements of the 
dancers *but create further movements - a whole dance* in the style of those 
dancers, which will be reasonably faithful.


(Note that we can not only understand the immediate shape/movement and 
position of their bodies in space, but their preceding movements and 
subsequent movements - every [still] picture tells a [moving] story to the 
human brain - a remarkable and v. complicated extension of our powers of 
mirroring).


Similarly by hearing a few words spoken by a person, you can pick up the 
shape of their voice and style of their diction - and shape your own voice 
and diction accordingly - and mimic it, and create further lines in their 
style. For example, just one sentence of 60-odd words:


If you want to hear about it, you'll probably want to know where I was 
born, and what a lousy childhood I had, and how my parents were occupied 
before they had me, and all the David Copperfield crap, but if you want to 
know the truth, I don't really want to get into it.


can give you a whole voice/character.

Both are *whole-bodied* operations - bringing your whole body to bear on the 
process of understanding -  and fundamental to your ability to understand 
other humans and animals - applied even to inanimate objects (the book lay 
on the table/the wardrobe stood in the sitting room) - and fundamental to 
intelligence, and our ability to intelligently copy, and learn from others' 
skills, and acquire culture generally.


Both operations also depend on *fluid transformations*  - our ability to 
mimic others depends on *fluidly transforming* our own body or voice shape 
to match theirs - a necessarily rough, imprecise, *non-formulaic* operation, 
since the two bodies will always be significantly different in shape, and 
there will be no formulaic/mathematical way for our brain to morph one into 
the other.


Fluid, non-formulaic transformations are also fundamental to our capacity 
for analogy and metaphor and crossing domains -  and that thing called 
general intelligence - our capacity to see a storm in the swirls of milk in 
a teacup, or tears in drops of rain, or a solar system in an atom.


We understand and think with our whole bodies.

(Oh, Ben, these are all original or recently original observations about the 
powers of the human brain and body which are beyond the powers of any 
digital computer. You claimed never to have heard an original observation 
here re digital computers' limitations -  that's because you don't listen, 
and aren't interested in the non-digital and non-rational. Obviously a pet 
in a virtual world can have no real body or embodied integrity).







Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner

Trent,

I should have added that our brain and body, by observing the mere 
shape/outline of others' bodies as in Matisse's Dancers, can tell not only 
how to *shape* our own outline, but how to dispose our *whole body* - we 
transpose/translate (or flesh out) a static two-dimensional body shape into 
an extremely complex set of instructions as to how to position and move our 
entire, *solid* body with all its immensely complex musculature. It's an 
awesomely detailed process, mechanically, when you analyse it.


(It reminds me of an observation by Vlad, long ago, about how efficient some 
computational coding can be. That painting of the Dancers surely must 
represent a vastly more efficient form of coding than anything digital or 
rational languages can achieve. So much info has been packed into such a 
brief outline. Never was so much told by so little? The same is true of 
artistic drawing generally).


P.S. Perhaps the best summary of all this is that general intelligence 
depends on body mapping - fluidly and physically/embodied-ly mapping our 
body onto others (as totally distinct from mapping structures of symbols 
onto each other). Not worth discussing, Ben?








Re: [agi] Re: Defining AGI

2008-10-18 Thread Mike Tintner


  David: Mike, these statements are an *enormous* leap from the actual study of 
mirror neurons. It's my hunch that the hypothesis paraphrased above is 
generally true, but it is *far* from being fully supported by, or understood 
via, the empirical evidence.
   
[snip] these are all original or recently original observations about the 
powers of the human brain and body which are beyond the powers of any digital 
computer. You claimed never to have heard an original observation here re 
digital computers' limitations -  that's because you don't listen, and aren't 
interested in the non-digital and non-rational. Obviously a pet in a virtual 
world can have no real body or embodied integrity).

  It seems that your magical views on human cognition are showing their colors 
again; you haven't supplied any coherent argument as to why the hypothetical 
function of mirror neurons (skills of empathy with and mimicry of other embodied 
entities or representations thereof) could not be duplicated by sufficiently 
clever software written for digital computers.


  David,

  I actually did give the reason - but, fine, I haven't explained it clearly 
enough to communicate. The reason is basically simple. All the powers discussed 
depend on the cognitive ability to map one complex, irregular shape onto 
another - and that involves a fluid transformation (which is completely beyond 
the power of any current software - or, to be more precise, any rational sign 
system, esp. mathematics/geometry).

  When you map your body onto that of the Dancers (or anyone else's), you are 
mapping two irregular shapes, which are not geometrically comparable, onto each 
other. There is no formulaic way to transform one into the other, and hence 
perceive their likeness. Geometry and geometrically-based software can't do 
this.

  When you see that the outline map of Italy is like a boot - a classic example 
of metaphor/analogy - there is no geometric, formulaic way to transform that 
cartographic outline of that landmass into the outline of a boot. It is a 
fluid transformation of one irregular shape into another irregular shape.

  When you *draw* almost any shape whatsoever, you are engaged in performing 
fluid transformations - producing *rough* likenesses/shapes (as opposed to the 
precise, formulaic likenesses of geometry). The shapes of the faces and flowers 
you draw on a page are only v. (sometimes v.v.) roughly like the real shapes of 
the real objects you have observed.

  Think of a cinematic *dissolve* from one object, like a face, into another - 
which is not a precise, formulaic morphing but simply a rough superimposition 
of two shapes that are roughly alike. Crudely, you could say, your brain is 
continually performing that sort of operation on the shapes of the world in 
order to recognize and compare them.

  Or think of a face perceived through fluid rippling water. Your brain, 
speaking v. loosely, is able to perform somewhat similar transformations on 
objects.

  The human mind deals in fluid shapes. 

  The human body continuously produces fluid shapes itself. When you move you 
are continuously shaping and then fluidly transforming your body to fit the 
world around you. When you reach out for an object, you start shaping your hand 
to fit before you get there, and fluidly adjust that hand shape as required to 
actually grasp the object.

  Geometry can only perform regular/rational transformations of objects - even 
topology deals in the regular likenesses between otherwise non-comparable 
objects like a doughnut and a cup handle. Even at its current, most flexible 
extreme, the geometry of free-form transformation is still dealing with 
formulaic transformations, which are not truly free-form/fluid and so not able 
to handle the operations I've been discussing. (But the very term, free-form, 
indicates what geometry would like but is unable to achieve.)

  There is an obvious difference between geometry and art/drawing. Computers in 
their current guise are only geometers and not artists. They cannot map shapes 
directly - physically - onto each other (with no intermediate operations), and 
they cannot fluidly (and directly) transform shapes into each other. The brain 
is manifestly an artist and is manifestly organized, extremely extensively, 
along mapping lines - and those brain maps, as experiments show, are able to 
undergo fluid transformations themselves in their spatial layout.

  Another way to say this is to say that the brain has, and computers don't 
have, imagination - they cannot truly handle/map images/shapes.

  There is nothing magical about this. What it will require is a different 
and/or additional kind of computer. A computer that can handle not only 
rational operations, which all depend on taking things to (regular/rational) 
pieces, but imaginative operations, which all depend on fluid comparisons of 
(mainly irregular/irrational) wholes (without reducing them to pieces)... A

Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner


Matthias:


I do not agree that body mapping is necessary for general intelligence. But
this would be one of the easiest problems today.
In the area of mapping the body onto another (artificial) body, computers
are already very smart:

See the video on this page:
http://www.image-metrics.com/



Matthias,

See my reply to David. This is typical of the free-form transformations 
that computers can achieve - and, I grant you,  is v. impressive. (I really 
think there should be a general book celebrating some of the recent 
achievements of geometry in animation - is there?).


But it is NOT mapping one body onto another. It is working only with one 
body, and transforming it in highly sophisticated operations.


Computer software can't map two totally different bodies onto each other - 
can't perceive the likeness between the map of Italy and a boot. And it 
can't usually perceive the same body or face in different physical or facial 
forms - can't tell that two faces with v. different facial/emotional 
expressions belong to the same person, eg Madonna, can it?







Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner
Matthias: I think here you can see that automated mapping between different 
faces is possible and the computer can smoothly morph between them. I think 
the performance is much better than the imagination of humans can be.

http://de.youtube.com/watch?v=nice6NYb_WA

Matthias,

Perhaps we're having difficulties communicating in words about a highly 
visual subject. The above involves morphing systematically from a single 
face. It does not involve being confronted with two different faces or 
objects randomly chosen/positioned and finding/recognizing the similarities 
between them. My God, if it did, computers would have no problems with 
visual object (or facial) recognition.


Of course, morphing operations by computers are better, i.e. immensely more 
detailed and accurate,  than anything the human mind can achieve - better 
at, if you like, the mechanical *implementation* of imagination. (But bear 
in mind that it was the imagination of the programmer that decided in the 
above software, which face should be transformed into which face. The 
software could not by itself choose or create a totally new face to add to 
its repertoire without guidance).


What rational computers can't do is find similarities between disparate, 
irregular objects - via fluid transformation - the essence of imagination. I 
repeat - computers can't do this -


http://www.bearskinrug.co.uk/_articles/2005/09/16/doodle/hero.jpg

and therein lies the central mechanism of analogy and metaphor.

Rather than simply objecting to this, the focus should be on *how* to endow 
computers with imagination. 







Re: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Mike Tintner

Matthias,

When a programmer (or cameraman) macroscopically positions two faces - 
adjusting them manually so that they are capable of precise point-to-point 
matching - that proceeds from an initial act of visual object recognition, 
and indeed imagination, as I have defined it.


He will have taken two originally disparate faces moving through many 
different not-easily-comparable positions, and recognized their 
compatibility - by, I would argue, a process of fluid transformation.


The programmer accordingly won't put any old two faces together - he won't 
put one person with a harelip and/or one eye together with a regular face. 
He won't put a woman with hair over her eyes together with one whose eyes 
are unobscured - or one with heavy make-up with one who is clear - or, just 
possibly, one with cosmetic surgery together with a natural face. The human 
brain is capable of recognizing the similarities and differences between all 
such faces - the program isn't.


(I think you're being a bit difficult here - I don't think many others - 
incl., say, Ben - would ascribe to these particular programs the powers that 
you are ascribing to them.)


Matthias:

I think it does involve being confronted with two different faces or objects 
randomly chosen/positioned and finding/recognizing the similarities between 
them.

If you have watched the video carefully, then you have heard that they speak 
of automated algorithms which do the matching.

On an initial macroscopic scale there is some human hint necessary, but on a 
microscopic scale it is done by software alone, and after the initial 
matching the complete morphing is done even on macroscopic scales.

Computer-generated morphing between completely different objects, as is the 
case in your picture, is no problem for computers after an initial matching 
of some points of the first and the last picture is made by humans. It is a 
common special effect in many science fiction movies.

In the morphing video I have given, there was no manual initial matching of 
points necessary. Only the macroscopic position of the two faces had to be 
adjusted manually.

- Matthias Heger
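
A minimal sketch of the point-based morph being discussed (my own toy, not 
the software in the video; the "images" and landmarks are placeholder arrays, 
and a real morph would also warp each image toward the interpolated points 
rather than just cross-dissolving):

import numpy as np

def morph_step(img_a, img_b, pts_a, pts_b, t):
    """Blend two same-sized images and their matched landmarks at t in [0, 1]."""
    pts = (1 - t) * pts_a + t * pts_b    # interpolate the hand-matched points
    img = (1 - t) * img_a + t * img_b    # cross-dissolve the pixel values
    return img, pts

# Placeholder 4x4 grayscale "faces" and two matched landmarks each (row, col).
face_a = np.zeros((4, 4))
face_b = np.ones((4, 4))
landmarks_a = np.array([[1.0, 1.0], [2.0, 3.0]])
landmarks_b = np.array([[1.0, 2.0], [3.0, 3.0]])

for t in (0.0, 0.5, 1.0):
    frame, pts = morph_step(face_a, face_b, landmarks_a, landmarks_b, t)
    print(t, pts.tolist())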




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Mike Tintner

  Ben: I defy you to give me any neuroscience or cog sci result that cannot be 
clearly explained using computable physics.


  Ben,

  As discussed before, no current computational approach can replicate the 
brain's ability to produce a memory in what we can be v. confident are only a 
few neuronal steps - by comparison with computers, which often take millions of 
steps. This is utterly central to general intelligence, and to the capacity to 
produce analogies/metaphors etc. The brain seems to work by recall (if I've 
got the right term) as opposed to *search*. (And Hawkins argues that the entire 
brain is a memory system - memories are stored everywhere.)

  That indicates a radically different computer to any we have.

  Ben: Colin notes that we do not have a good, detailed explanation of how 
scientific creativity emerges from computational processes. OK. I tried to 
give such an explanation in From Complexity to Creativity, but of course 
whether my explanation is right is subject to debate.

  Ben,

  I still intend to reply to your creativity post, but perhaps you'd care to at 
least label what your explanation of scientific creativity is - I'm not aware 
of your explaining, or connecting up, any of the theories you explore in any 
*direct* way with any creative process at all. My brief reading is that you 
indicate a loose, possible connection, but nothing direct - as your final 
Conclusion seems to confirm:

  14.7 CONCLUSION 
  The phenomenon of creativity is a challenge for the psynet model, and for 
complexity science as a whole.

  Are you claiming you have any ideas here that anyone is paying attention to, 
or should?









Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Mike Tintner
Ben: I don't have time to summarize all that stuff I already wrote in emails 
either ;-p

Ben,

I asked you to at least *label* what your explanation of scientific 
creativity is. Just a label, Ben. Books that are properly organized and 
constructed (and sell) usually do have clearly labelled theories, around which 
they hang the book. It isn't clear what your book's is.




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-15 Thread Mike Tintner


Trent: If you disagree with my paraphrasing of your opinion, Colin, please

feel free to rebut it *in plain english* so we can better figure out
what the hell you're on about.



Well, I agree that Colin hasn't made clear what he stands for 
[neo-]computationally. But perhaps he is doing us a service, in making clear 
how neuroscientific opinion is changing? I must confess I didn't know re 
integrative neuroscience. So there is something important to be explored 
here - how much *is* science (and cog sci) changing its computational 
paradigm?


Basically, you guys are in general blinkering yourselves to the fact that 
the brain clearly works *fundamentally differently* to any computer - in 
major ways.


Colin may not have succeeded in fully identifying or translating those 
differences into any useful mechanical form [or not - I'm certainly 
interested to hear more]. But sooner or later *someone* will.


And it's a safe bet that cog. sci., which still largely underpins your 
particular computational view of mind, will v. soon sweep the rug from under 
your feet. If I were you, I'd explore more here.


(The parallels between a vastly overleveraged financial, economic and 
political world order suddenly collapsing and a similarly overleveraged (in 
their claims) cog. sci and AGI also on the verge of collapse, should not 
escape you). 







Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Mike Tintner
Why don't you start AGI-tech on the forum? Enough people have expressed an 
interest - simply reconfirm - and start posting there.


- Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 15, 2008 9:09 PM
  Subject: RE: [agi] META: A possible re-focusing of this list


  I bet if you tried very hard to move the group to the forum (for example, by 
only posting there yourself and periodically urging people to use it), people 
could be moved there.  Right now, nobody posts there because nobody else posts 
there; if one wants one's stuff to be read, one sends it to the high traffic 
location unless there's a reason not to.



--
  Date: Wed, 15 Oct 2008 16:00:45 -0400
  From: [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Subject: Re: [agi] META: A possible re-focusing of this list



  There is already a forum site on agiri.org. Nobody uses it. So, just 
setting up a forum site is not the answer...
  ben g




Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Mike Tintner
Colin:

others such as Hynna and Boahen at Stanford, who have an unusual hardware 
neural architecture...(Hynna, K. M. and Boahen, K. 'Thermodynamically 
equivalent silicon models of voltage-dependent ion channels', Neural 
Computation vol. 19, no. 2, 2007. 327-350.) ...and others ... then things will 
be diverse and authoritative. In particular, those who have recently 
essentially squashed the computational theories of mind from a neuroscience 
perspective- the 'integrative neuroscientists':

Poznanski, R. R., Biophysical neural networks : foundations of integrative 
neuroscience, Mary Ann Liebert, Larchmont, NY, 2001, pp. viii, 503 p.

Pomerantz, J. R., Topics in integrative neuroscience : from cells to cognition, 
Cambridge University Press, Cambridge, UK ; New York, 2008, pp. xix, 427 p.

Gordon, E., Ed. (2000). Integrative neuroscience : bringing together 
biological, psychological and clinical models of the human brain. Amsterdam, 
Harwood 



Colin, 

This all looks v. interesting - googling quickly. The general integrative 
approach to the brain's functioning is clearly v. important. 

*Distinctive Paradigms/Approaches. But are any distinctive models or more 
specific paradigms emerging? It isn't immediately clear why AGI has to pay 
special attention here. Can you do a bit more selling of the importance of this 
field?

*Models - I notice some researchers are developing models of the brain's 
functioning. Are any worthwhile? I called here some time ago for a Systems 
Psychology and Systems AI, that would be devoted to developing overall models 
both of the intelligent brain and of AGI systems. Existing AGI systems like 
Ben's offer de facto models of what is required for an intelligent mind. So it 
would be v. valuable to be able to compare different models, both natural and 
artificial.

*Embodied Cognitive Science.  How do you see int. neurosci. in relation to 
this? For example, I noted some purely neuronal models of the self. For me, 
only integrated brain-body models of the self are valid.

*Free Will. An interest of mine. I noted some reference that suggested a 
neuroscientific attempt to explain this (or perhaps explain it away). Know any 
more about this?









Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Mike Tintner

Will: There is a reason why lots of the planet's biomass has stayed as
bacteria. It does perfectly well like that. It survives.
Too much processing power is a bad thing; it means less for
self-preservation and affecting the world. Balancing them is a tricky
proposition indeed.

Interesting thought. But do you (or anyone else) have any further thoughts 
about what the proper balance between brain and body is, relative to a given 
set of functions/behaviours, or how it is determined or adjusted? (Obviously 
a v. difficult question.)







Re: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-13 Thread Mike Tintner
Colin,

Yes, you and Rescher are going in a good direction, but you can make it all 
simpler still, by being more specific.

We can take it for granted that we're talking here mainly about whether 
*incomplete* creative works should be criticised.

If we're talking about scientific theories, then basically we're talking in 
most cases about detective theories, about theories of whodunit or whatdunit. 
If you've got an incomplete theory about who committed a murder, because you 
don't have enough evidence, or enough of a motive - do you want criticism? In 
general, you'd be pretty foolish not to seek it. Others may point out evidence 
you've missed, or other motives, or suggest still better suspects.

If we're talking about inventions, then we're talking about tools/ machines/ 
engines etc designed to produce certain effects. If you've got an incomplete 
machine, it doesn't achieve the effect as desired. It isn't as efficient or as 
effective as you want. Should you seek criticism? In general, you'd still be 
pretty foolish not to. Others may point out improved ways of designing or 
moving your machine parts,  or of arranging the objects-to-be-moved.

And if nothing else, the simple act of presenting your ideas to others allows 
you to use them as sounding-boards - you get to hear your ideas with a clarity 
that is difficult to achieve alone, and become more aware of their 
deficiencies - and more motivated to solve them.

The difficulty with AGI is that we're dealing not with machines or software 
that are incomplete but simply non-functioning - with essentially narrow AI 
systems that haven't shown any capacity for general intelligence and 
problemsolving - with machines that want to be airplanes, but are actually 
motorbikes, and have never taken off, or shown any ability to get off the 
ground for even a few seconds. As a result, you have a whole culture where 
people are happy to tell you how their machine works - the kind of engine or in 
this case software that they're using - but not happy to tell you what their 
machine does - what specific problems it addresses - because that will 
highlight their complete failure so far.

Is that sensible? If you want to preserve your dignity, yes. Acknowledging 
failure is v. painful and humiliating. Plus, in this case, there's the v. 
serious possibility that you're building totally the wrong machine - a 
motorbike that will never be a plane (or a narrow plane that will never be a 
general bird) - or, in this case, software that simply doesn't and can't 
address the right problems at all. If you actually want to get somewhere, 
though, and not remain trapped in errors, then not presenting and highlighting 
what your machine does (and how it fails) is also foolish.




  Colin:
  The process of formulation of scientific theories has been characterised as a 
dynamical system nicely by Nicholas Rescher.

  Rescher, N., Process philosophy : a survey of basic issues, University of 
Pittsburgh Press, Pittsburgh, 2000, p. 144.
  Rescher, N., Nature and understanding : the metaphysics and method of 
science, Clarendon Press, Oxford, 2000, pp. ix, 186.

  In that approach you can see critical argument operating as a brain 
process - competing brain electrodynamics that stabilises on the temporary 
'winner', whose position may be toppled at any moment by the emergence of a 
more powerful criticism which destabilises the current equilibrium... and so on. 
The 'argument' may involve the provision of empirical evidence... indeed that 
is the norm for most sciences.

  If a discipline is to be seen as real science, then, one would expect to see 
such processes happening in a dialog between a diversity of views competing for 
ownership of scientific evidence through support for whatever theoretical 
framework seems apt. As a recent entrant here, and seeing the dialog and the 
issues as they unfold, I would have some difficulty classifying what is going 
on as 'scientific', in the sense that there is no debate calibrated against 
any overt fundamental scientific theoretical framework(s), nor defined testing 
protocols. 

  In the wider world of science it is the current state of play that the 
theoretical basis for real AGI is an open and multi-disciplinary question. Of 
a forum that purports to be invested in the achievement of real AGI as a 
target, one would expect a multidisciplinary approach on many fronts, all 
competing scientifically for access to real AGI. 

  I am not seeing that here. In having a completely different approach to AGI, 
I hope I can contribute to the diversity of ideas and bring the discourse 
closer to that of a solid scientific discipline, with formal testing metrics 
and so forth. I hope that I can attract the attention of the neuroscience and 
physics world to this area. 

  Of course whether I'm an intransigent grumpy theory-zealot of the Newtonian 
kind... well... just let the ideas speak for themselves... :-)  The 

Re: [agi] creativity

2008-10-12 Thread Mike Tintner
Ben,

I'm glad that you have decided to respond to - or at least recognize - my 
criticisms/points re creativity, because they are extremely important and 
central to AGI - as I said, it isn't just you but everyone who is avoiding 
them - when it is in all your interests to confront them *now*/*urgently*. I 
think in fact my criticisms do hold - but obviously I will have to look at your 
book first. [I may have looked at it already - I've read quite a bit of you - 
but you've written a lot]. If you could link me, or send me a copy, I will 
reply in a more considered way.
  ... some loose ends in reply to a message from a few days back ...

  Mike Tintner wrote:

  ***
  Be honest - when and where have you ever addressed creative problems? 
(Just count how many problems I have raised.)
  ***

  In my 1997 book FROM COMPLEXITY TO CREATIVITY

   

  *** 
  Just as it is obvious that I know next to nothing about programming, it 
is also obvious that you have v. little experience of discussing creative 
problemsolving - at, I stress, a *metacognitive* level. (And nor, AFAIK, do any 
AGI-ers -  only partly excepting Minsky).

  ***


  The 1997 book I referenced above in fact contains a significant amount of 
metacognition about creativity.  You seem to have the idea that it's supposed 
to be possible to explain an AGI's creative process in detail, in specific 
instances ... and I don't know why you think that, since it's not even the case 
for humans.
   

  *** 
  All this stands in total, stark contrast to any discussion of logical or 
mathematical problems, where you are always delighted to engage in detail, and 
are v. helpful and constructive - and do not make excuses to cover up your 
inexperience.
  ***

  Aspects of the mind that are closer to the deliberative, intensely conscious 
level are easier to discuss explicitly and in detail.

  Aspects of the mind that are mainly unconscious and have to do mainly with 
the coordinated activity of a large number of different processes, are harder 
to describe in detail in specific instances.  One can describe the underlying 
processes but this then becomes technical and lengthy!!

  -- Ben


  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr Samuel Johnson






[agi] Webs vs Nets

2008-10-11 Thread Mike Tintner
As I understand the way you guys and AI generally work, you create 
well-organized spaces which your programs can systematically search for 
options. Let's call them nets - which have systematic, well-defined and 
orderly-laid-out connections between nodes.


But it seems clear that natural systems create totally different 
structures, or almost anti-structures of information. The WWW is indeed 
a web of information, with the nodes haphazardly linked to each other, 
without any prior system or planning, (only at best, some v. v. basic, 
simple rules about for example how links work). Steven Johnson talks of the 
growth of such webs as like a data fungus.


http://www.ted.com/index.php/talks/steven_johnson_on_the_web_as_a_city.html

And, actually, that I suggest is largely how the brain creates its own 
webs of info., as opposed to organized spaces. You didn't learn about sex, 
for example, in any organized, rational way. An anecdote here, a few jokes 
there, the sight of some physical activity there, a sexual manual there, 
massive amounts of porn and so on, all higgledy-piggledy, freely 
associated/linked together. A crazy jungle of info rather than a well laid 
out garden.


And when you think about sex, that web as opposed to any rational space, is 
what your brain brings to bear on the subject.


The distinction between webs and nets as different kinds of data 
structures seems fundamental and worth exploring. Does such a distinction 
exist already, and has it been explored?
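
A toy sketch of the contrast as I read it (my own illustration, not anything 
from an AGI system; the node names and rules are invented): a "net" as an 
orderly, pre-planned hierarchy you can search systematically, versus a "web" 
grown by haphazard linking with no prior plan.

import random

net = {                      # orderly, pre-planned hierarchy
    "animal": ["mammal", "reptile"],
    "mammal": ["human", "dog"],
    "reptile": ["lizard"],
    "human": [], "dog": [], "lizard": [],
}

def systematic_search(tree, start, target):
    """Depth-first walk over the planned structure."""
    stack = [start]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        stack.extend(tree[node])
    return False

def grow_web(items, links_per_item=2, seed=0):
    """Haphazard linking: each new item attaches to a few random earlier ones."""
    random.seed(seed)
    web = {items[0]: set()}
    for item in items[1:]:
        web[item] = set(random.sample(list(web), min(links_per_item, len(web))))
    return web

print(systematic_search(net, "animal", "dog"))
print(grow_web(["sex", "joke", "anecdote", "manual", "porn"]))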








Re: [agi] Webs vs Nets

2008-10-11 Thread Mike Tintner
Ben,

Some questions then.

You don't have any spaces or frames as such within your systems? (What terms 
would you use/prefer here, BTW?) Everything is potentially connected to 
everything else? Perhaps you can give some example from, say, your 
pet-in-a-virtual-world (or anything else). It doesn't have a frame, say, re 
fetching, or some other activity? How can it connect, as you can connect on 
the Web, from say the domain of fetching and balls to any other domain? Like 
hide-and-seek? Or conversation? (Or, on the Web itself, to planets in a solar 
system).

It won't have ordered hierarchies, say, re animals (mammals... humans etc.)?

Another feature of the webs vs nets distinction: webs, it seems to me, are 
*multi-domain* of their very nature. (A domain, for me, consists of a set of 
elements which behave according to consistent rules - e.g. chess pieces which 
move in set ways on a board.) So webs are composed of diverse and often 
contradictory domains and rules. Your sex web, for example, will have a whole 
variety of domains - religious, literary, different moralities, etiquette, 
fashion, pornographic, fantasy etc. - offering contradictory rules about whom 
you can and can't have sex with, and how, and when - and for what reasons. 
Ditto our language webs consist of radically conflicting rules about how we 
can and can't speak, construct sentences, use words, spell, mix different 
conventions, accents, tones etc. etc. Do your spaces/domains exist similarly 
with conflicting rules? You don't need to keep updating them for consistency? 
Your system can, for example, survive with conflicting rules of logic - 
NARS-ian and PLN - as your own brain can?

I suspect IOW there *are* important distinctions to be drawn  explored here. 
And my first attempt here may be rather like my first attempt at defining 
programs a long time ago, which failed to distinguish between sequences and 
structures of instructions - and was then pounced on by AI-ers.




  On Sat, Oct 11, 2008 at 7:38 AM, Mike Tintner [EMAIL PROTECTED] wrote:

As I understand the way you guys and AI generally work, you create 
well-organized spaces which your programs can systematically search for 
options. Let's call them nets - which have systematic, well-defined and 
orderly-laid-out connections between nodes.

  That is simply incorrect ... the connections between 
nodes/terms/concepts/whatever are chaotic and self-organized and disorderly, 
within OpenCogPrime, NARS, or any of a load of other AGI systems.

  And then you have some cognitive processes that try to build order out of the 
chaos and create links imposing some fragmentary order ... which won't last 
long unless actively maintained [roughly: as some folks build directories of 
parts of the Web...]

  There is a large body of study of the connection statistics of large 
networks, and some (but less) study of the dynamics of connection stats in 
large networks.

  -- Ben G







Re: [agi] Webs vs Nets PS

2008-10-11 Thread Mike Tintner
I guess the obvious follow up question is when your systems search among 
options for a response to a situation, they don't search in a systematic way 
through spaces of options? They can just start anywhere and end up anywhere in 
the system's web of knowledge - as you can in searching the Web itself?

Presumably they must search among well-defined spaces, otherwise how could you 
have been having this argument about combinatorial explosion with Richard et al?

A web, I guess, by definition - (I'm tossing this out as I go along) - 
can't be systematically searched, and there can be no combinatorial explosion. 
At worst, you can surf for too long :).




[agi] Logical Intuition

2008-10-11 Thread Mike Tintner

Pei: The NARS solution fits people's intuition

You guys keep talking - perfectly reasonably - about how your logics do or 
don't fit your intuition. The logical question is - how - on what 
principles - does your intuition work? What ideas do you have about this? 







[agi] Logical Intuition PS

2008-10-11 Thread Mike Tintner
What I should have added is that presumably your intuition must work on 
radically different principles to your logics - otherwise you could 
incorporate it/them.






Re: [agi] Webs vs Nets

2008-10-11 Thread Mike Tintner
Ben,

Thanks. But you didn't reply to the surely central-to-AGI question of whether 
this free-form knowledge base is or can be multi-domain - and particularly 
involve radically conflicting sets of rules about how given objects can behave 
- a central feature of the human brain and its knowledge base, I would argue.

I haven't thought this through, but my first thought is that such a 
multi-domain structure lends itself v. strongly to the cross-domain thinking 
that remains a problem for AGI.
  Ben: The OpenCog Atomspace --- its knowledge-base of nodes and links --- is 
totally free-form, without any overarching structures imposed by the programmer.

  However, hierarchies or frames can of course exist as structures within this 
free-form pool of nodes and links.

  In building a particular app using OpenCog, one can opt to build in 
hierarchies and frames and such (via creating XML files containing appropriate 
nodes/links and importing them), or one can start from a blank slate and let 
the whole structure emerge as it will...

  Ben G
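
A toy sketch of the idea Ben describes (my own illustration, not OpenCog's 
actual API or file format): a free-form pool of nodes and links, in which a 
hierarchy is just one pattern of links that may or may not be imposed.

atomspace = set()                    # free-form pool: (link_type, source, target)

def add_link(link_type, a, b):
    atomspace.add((link_type, a, b))

# emergent, unplanned associations
add_link("associates", "ball", "fetching")
add_link("associates", "fetching", "hide-and-seek")

# an optional hierarchy, expressed with the same machinery
for child, parent in [("dog", "mammal"), ("human", "mammal"), ("mammal", "animal")]:
    add_link("inherits-from", child, parent)

def parents(node):
    """Read the hierarchy back out of the undifferentiated pool."""
    return [t for (kind, s, t) in atomspace if kind == "inherits-from" and s == node]

print(parents("dog"))    # ['mammal']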



Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Mike Tintner
Ben,

I think that's all been extremely clear - and I think you've been very good in 
all your different roles :). Your efforts have produced a v. good group - and 
a great many thanks for them.
  And, just to clarify: the fact that I set up this list and pay $12/month for 
its hosting, and deal with the occasional list-moderation issues that arise, 
is not supposed to give my **AI opinions** primacy over anybody else's on the 
list, in discussions. I only intervene as moderator when discussions go 
off-topic, not to try to push my perspective on people ... and on the rare 
occasions when I am speaking as list owner/moderator rather than as just 
another AI guy with his own opinions, I try to be very clear that that is the 
role I'm adopting.

  ben g
   





Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Mike Tintner
Terren: autopoiesis. I wonder what your thoughts are about it? 

Does anyone have any idea how to translate that biological principle into 
building a machine, or software? Do you or anyone else have any idea what it 
might entail? The only thing I can think of that comes anywhere close is the 
Carnegie Mellon starfish robot with its sense of self.




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Mike Tintner
Terren,

Thanks for reply. I think I have some idea, no doubt confused, about how you 
want to evolve a system. But the big deal re autopoiesis for me - correct me - 
is the capacity of a living system to *maintain its identity* despite 
considerable disturbances. That can be both in the embryonic/developmental 
stages and also later in life. A *simple* example of the latter is an 
experiment where they screwed around with the nerves to a monkey's hands, and 
nevertheless its brain maps rewired themselves, so to speak, to restore normal 
functioning within months. Neuroplasticity generally is an example - the 
brain's capacity, when parts are damaged, to get new parts to take on their 
functions.

How a system can be evolved - computationally, say, as you propose  - is, in my 
understanding, no longer quite such a problematic thing to understand or 
implement. But how a living system manages to adhere to a flexible plan of its 
identity despite disturbances, is, IMO, a much more problematic thing to 
understand and implement. And that, for me - again correct me - is the essence 
of autopoiesis (which BTW seems to me not the best explained of ideas - by 
Varela & co).

Mike,

Autopoiesis is a basic building block of my philosophy of life and of 
cognition as well. I see life as: doing work to maintain an internal 
self-organization. It requires a boundary in which the entropy inside the 
boundary is kept lower than the entropy outside. Cognition is autopoietic as 
well, although this is harder to see.

I have already shared my ideas on how to build a virtual intelligence 
that satisfies this definition. But in summary, you'd design a framework in 
which large numbers of interacting parts would evolve into an environment with 
emergent, persistent entities. Through a guided process you would make the 
environment more and more challenging, forcing the entities to solve harder and 
harder problems to stay alive, corresponding with ever increasing intelligence. 
At some distant point we may perhaps arrive at something with human-level 
intelligence or beyond. 

Terren
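
(A minimal Python sketch of the guided, increasingly-challenging-environment 
loop described above - the entity representation and fitness function are 
assumed toy stand-ins invented purely for illustration, not Terren's actual 
design, and nothing here is autopoiesis proper:)

    import random

    # Toy stand-ins: an "entity" is just a list of numeric traits; the
    # environment scores how well the traits meet an increasingly hard target.
    def fitness(entity, difficulty):
        return -abs(sum(entity) - difficulty)

    population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(50)]

    difficulty = 0.0
    for generation in range(200):
        difficulty = generation * 0.1           # guided: environment gets harder
        population.sort(key=lambda e: fitness(e, difficulty), reverse=True)
        survivors = population[:25]             # entities that "stay alive"
        children = [[t + random.gauss(0, 0.05) for t in random.choice(survivors)]
                    for _ in range(25)]         # reproduce with mutation
        population = survivors + children

    print(max(fitness(e, difficulty) for e in population))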

--- On Fri, 10/10/08, Mike Tintner [EMAIL PROTECTED] wrote:

  From: Mike Tintner [EMAIL PROTECTED]
  Subject: Re: [agi] open or closed source for AGI project?
  To: agi@v2.listbox.com
  Date: Friday, October 10, 2008, 11:30 AM


  Terren:autopoieisis. I wonder what your thoughts are about it? 

  Does anyone have any idea how to translate that biological principle 
into building a machine, or software? Do you or anyone else have any idea what 
it might entail? The only thing I can think of that comes anywhere close is the 
Carnegie Mellon starfish robot with its sense of self.



Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Mike Tintner
-organization. It requires a boundary in which the entropy inside 
the boundary is kept lower than the entropy outside. Cognition is autopoieitic 
as well, although this is harder to see.

  I have already shared my ideas on how to build a virtual 
intelligence that satisfies this definition. But in summary, you'd design a 
framework in which large numbers of interacting parts would evolve into an 
environment with emergent, persistent entities. Through a guided process you 
would make the environment more and more challenging, forcing the entities to 
solve harder and harder problems to stay alive, corresponding with ever 
increasing intelligence. At some distant point we may perhaps arrive at 
something with human-level intelligence or beyond. 

  Terren

  --- On Fri, 10/10/08, Mike Tintner [EMAIL PROTECTED] wrote:

From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] open or closed source for AGI project?
To: agi@v2.listbox.com
Date: Friday, October 10, 2008, 11:30 AM


Terren:autopoieisis. I wonder what your thoughts are about 
it? 

Does anyone have any idea how to translate that biological 
principle into building a machine, or software? Do you or anyone else have any 
idea what it might entail? The only thing I can think of that comes anywhere 
close is the Carnegie Mellon starfish robot with its sense of self.




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Mike Tintner

Russell: Whoever said you need to protect ideas is just shilly-shallying you. Ideas have no
market value; anyone capable of taking them up, already has more ideas
of his own than time to implement them.


In AGI, that certainly seems to be true - ideas are crucial, but require 
such a massive amount of implementation. That's why I find Peter Voss and 
others - incl. Ben at times - refusing to discuss their ideas, silly. Even 
if, say, you have a novel idea for applying AGI or a sub-AGI to some highly 
commercial field, it would still all depend on implementation. The chance 
of someone stealing your idea is v. remote. And discussing your ideas openly 
will only improve them.


In many other creative fields, there can be reason to be secretive. If you 
had an idea for some new, more efficient chemical, or way of treating a 
chemical, for an electric battery, say, that could be v. valuable and highly 
stealable. Hence all those "formula" movies. 







Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Mike Tintner
Ben,

V. interesting and helpful to get this pretty clearly stated general position.

However:

To put it simply, once an AGI can understand human language we can teach it 
stuff.

you don't give any prognostic view about the acquisition of language. Mine is: 
in your dreams. Arguably, most AGI-ers still see handling language as a 
largely logical exercise of translating between symbols in dictionaries and 
texts, with perhaps a little grounding. I see language as an extremely 
sophisticated worldpicture, and a system for handling that picture, which is 
actually, even if not immediately obvious, a multimedia exercise that is both 
continuously embodied in our system and embedded in the real world. Not just a 
mode of, but almost the whole of, the brain in action, interacting with the 
whole of the world. No AGI system will be literate for an awfully long time. 
Your view?

And:

I think we're at the stage where a team of a couple dozen could do it in 5-10 
years

I repeat - this is outrageous. You don't have the slightest evidence of 
progress - you [the collective you] haven't solved a single problem of general 
intelligence - a single mode of generalising - so you don't have the slightest 
basis for making predictions of progress other than wish-fulfilment, do you? 

  Ben:A few points...

  1)  
  Closely associating embodiment with GOFAI is just flat-out historically 
wrong.  GOFAI refers to a specific class of approaches to AI that were pursued a 
few decades ago, which were not centered on embodiment as a key concept or 
aspect.  

  2)
  Embodiment based approaches to AGI certainly have not been extensively tried 
and failed in any serious way, simply because of the primitive nature of real 
and virtual robotic technology.  Even right now, the real and virtual robotics 
tech are not *quite* there to enable us to pursue embodiment-based AGI in a 
really tractable way.  For instance, humanoid robots like the Nao cost $20K and 
have all sorts of serious actuator problems ... and virtual world tech is not 
built to allow fine-grained AI control of agent skeletons ... etc.   It would 
be more accurate to say that we're 5-15 years away from a condition where 
embodiment-based AGI can be tried-out without immense time-wastage on making 
not-quite-ready supporting technologies work.

  3)
  I do not think that humanlike NL understanding nor humanlike embodiment are 
in any way necessary for AGI.   I just think that they seem to represent the 
shortest path to getting there, because they represent a path that **we 
understand reasonably well** ... and because AGIs following this path will be 
able to **learn from us** reasonably easily, as opposed to AGIs built on 
fundamentally nonhuman principles

  To put it simply, once an AGI can understand human language we can teach it 
stuff.  This will be very helpful to it.  We have a lot of experience in 
teaching agents with humanlike bodies, communicating using human language.  
Then it can teach us stuff too.   And human language is just riddled through 
and through with metaphors to embodiment, suggesting that solving the 
disambiguation problems in linguistics will be much easier for a system with 
vaguely humanlike embodied experience.

  4)
  I have articulated a detailed proposal for how to make an AGI using the OCP 
design together with linguistic communication and virtual embodiment.  Rather 
than just a promising-looking assemblage of in-development technologies, the 
proposal is grounded in a coherent holistic theory of how minds work.

  What I don't see in your counterproposal is any kind of grounding of your 
ideas in a theory of mind.  That is: why should I believe that loosely coupling 
a bunch of clever narrow-AI widgets, as you suggest, is going to lead to an AGI 
capable of adapting to fundamentally new situations not envisioned by any of 
its programmers?   I'm not completely ruling out the possiblity that this kind 
of strategy could work, but where's the beef?  I'm not asking for a proof, I'm 
asking for a coherent, detailed argument as to why this kind of approach could 
lead to a generally-intelligent mind.

  5)
  It sometimes feels to me like the reason so little progress is made toward 
AGI is that the 2000 people on the planet who are passionate about it, are 
moving in 4000 different directions ;-) ... 

  OpenCog is an attempt to get a substantial number of AGI enthusiasts all 
moving in the same direction, without claiming this is the **only** possible 
workable direction.  

  Eventually, supporting technologies will advance enough that some smart guy 
can build an AGI on his own in a year of hacking.  I don't think we're at that 
stage yet -- but I think we're at the stage where a team of a couple dozen 
could do it in 5-10 years.  However, if that level of effort can't be 
systematically summoned (thru gov't grants, industry funding, open-source 
volunteerism or wherever) then maybe AGI won't come about till the supporting 

Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Mike Tintner
This is fine and interesting, but hasn't anybody yet read Kauffman's 
Reinventing the Sacred (publ. this year)? The entire book is devoted to this 
theme and treats it globally, ranging from this kind of emergence in 
physics, to the emergence/evolution of natural species, to emergence/deliberate 
creativity in the economy and human thinking. Kauffman systematically - and 
correctly - argues that the entire, current mechanistic worldview of science 
is quite inadequate for dealing with and explaining creativity in every form 
throughout the world and at every level of evolution. Kauffman also 
explicitly deals with the kind of problems AGI must solve if it is to be 
AGI.


In fact, everything is interrelated here. Ben argues:

we are not trying to understand some natural system, we are trying to 
**engineer** systems 


Well, yes, but how you get emergent physical properties of matter, and how 
you get species evolving from each other with creative, scientifically 
unpredictable new organs and features, can be *treated* as 
design/engineering problems (even though, of course, nature was the 
designer).


In fact, AGI *should* be doing this - should be understanding how its 
particular problem of getting a machine to be creative, fits in with the 
science-wide problem of understanding creativity in all its forms. The two 
are mutually enriching, (indeed mandatory when it comes to a) the human and 
animal brain's creativity and an AGI's and b)  the evolution of the brain 
and the evolutionary path of AGI's).



Richard:
Perhaps now that there are other physicists (besides myself) making these 
claims, people in the AGI community will start to take more seriously the 
implications for their own field 


http://www.newscientist.com/article/mg20026764.100

For those who do not have a New Scientist subscription, the full article 
refers to a paper at http://www.arxiv.org/abs/0809.0151.


Mile Gu et al looked at the possibility of explaining emergent properties 
of Ising glasses and managed to prove that those properties are not 
reducible.


Myself, I do not need the full force of Gu's proof, since I only claim 
that emergent properties can be *practically* impossible to work with.


It is worth noting that his chosen target systems (Ising glasses) are very 
closely linked to some approaches to AGI, since these have been proposed 
by some neural net people as the fundamental core of their approach.


I am sure that I can quote a short extract from the full NS article 
without treading on the New Scientist copyright.  It is illuminating 
because what Gu et al refer to is the problem of calculating the lowest 
energy state of the system, which approximately corresponds to the state 
of maximum understanding in the class of systems that I am most 
interested in:


BEGIN QUOTE:

Using the model, the team focused on whether the pattern that the atoms 
adopt under various scenarios, such as a state of lowest energy, could be 
calculated from knowledge of those forces. They found that in some 
scenarios, the pattern of atoms could not be calculated from knowledge of 
the forces - even given unlimited computing power. In mathematical terms, 
the system is considered formally undecidable.


"We were able to find a number of properties that were simply decoupled 
from the fundamental interactions," says Gu. "Even some really simple 
properties of the model, such as the fraction of atoms oriented in one 
direction, cannot be computed."


This result, says Gu, shows that some of the models scientists use to 
simulate physical systems may actually have properties that cannot be 
linked to the behaviour of their parts (www.arxiv.org/abs/0809.0151). 
This, in turn, may help explain why our description of nature operates at 
many levels, rather than working from just one. "A 'theory of everything' 
might not explain all natural phenomena," says Gu. "Real understanding may 
require further experiments and intuition at every level."


Some physicists think the work offers a promising scientific boost for the 
delicate issue of emergence, which tends to get swamped with philosophical 
arguments. John Barrow at the University of Cambridge calls the results 
"really interesting", but thinks one element of the proof needs further 
study. He points out that Gu and colleagues derived their result by 
studying an infinite system, rather than one of large but finite size, 
like most natural systems. "So it's not entirely clear what their results 
mean for actual finite systems," says Barrow.


Gu agrees, but points out that this was not the team's goal. He also 
argues that the idealised mathematical laws that scientists routinely use 
to describe the world often refer to infinite systems. "Our results 
suggest that some of these laws probably cannot be derived from first 
principles," he says.


END QUOTE.
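
(For concreteness, a minimal Python sketch of what "calculating the lowest 
energy state" means for a tiny Ising system - the couplings J below are 
assumed toy values, not Gu's actual model; the point of the result quoted 
above is that for the infinite frustrated case no amount of such computation 
recovers certain macroscopic properties:)

    import itertools

    # Toy Ising "glass": 4 spins with assumed mixed-sign couplings J[(i, j)];
    # the mixed signs are what create frustration.
    J = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0, (0, 3): -1.0, (0, 2): 1.0}

    def energy(spins):
        # E = -sum over coupled pairs of J_ij * s_i * s_j
        return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

    # Brute-force ground-state search: fine for 4 spins, hopeless as the
    # system grows, and (per Gu et al.) undecidable in the infinite limit.
    ground = min(itertools.product([-1, 1], repeat=4), key=energy)
    print(ground, energy(ground))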


I particularly liked his choice of words when he said: We were able to 
find a number of properties that were simply decoupled from the 

Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Mike Tintner
Ben:I didn't read that book but I've read dozens of his papers ... it's cool 
stuff but does not convince me that engineering AGI is impossible ... however 
when I debated this with Stu F2F I'd say neither of us convinced each other ;-) 
...

Ben,

His argument (like mine) is that AGI is *algorithmically* impossible 
(similarly, he is arguing only that our *present* mechanistic worldview is 
inadequate). I can't vouch for it, since he doesn't explicitly address AGI as 
distinct from the powers of algorithms, but I would be v. surprised if he were 
arguing that AGI is impossible, period (no?).

I would've thought that he would argue something like: just as we need a 
revolutionary new mechanistic worldview, so we need a revolutionary approach to 
AGI (and not just a few tweaks :) ).




Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Mike Tintner

Matthias,

You don't seem to understand creative/emergent problems (and I find this 
certainly not universal, but v. common here).


If your chess-playing AGI is to tackle a creative/emergent problem (at a 
fairly minor level) re chess - it would have to be something like: find a 
new way for chess pieces to move, and therefore develop a new form of 
chess (without any preparation other than some knowledge about different 
rules and how different pieces in different games move). Or something like: 
get your opponent to take back his move before he removes his hand from the 
piece - where some use of psychology, say, might be appropriate rather 
than anything to do directly with chess itself.


IOW by definition a creative/emergent problem is one where you have to bring 
about a given effect by finding radically new kinds of objects that move or 
relate in radically new kinds of ways -  to produce that effect. By 
definition, you *do not know which domain is appropriate to solving the 
problem,* (what kinds of objects or moves are relevant),  let alone have a 
set of instructions to hold your hand every step of the way -   and the 
eventual solution will involve crossing hitherto unrelated domains.


That, as Kauffman also insists, is an absolute show stopper. Which is why 
the show that is AGI not only cannot go on, but hasn't even started.


No form of logic or maths or programming -  no preexisting frame - is 
sufficient to deal with such problems - and cross domains in surprising 
ways. If those are the only relevant disciplines you know, then you will 
indeed have major difficulties understanding creative problems. They do not 
prepare you.


PS Ditto all evolutionary steps present creative problems of discovery. For 
example - give me a *biological* piece of the puzzle that explains how 
humans/apes with relatively curved spines acquired erect spines   (an 
explanation that reveals something about the *internal* processes by which 
permanent changes in the body's blueprints come about - as opposed to 
something about external, natural selection).




Matthias: The problem of the emergent behavior already arises within a 
chess program which visits millions of chess positions within a second.
I think the problem of the emergent behavior equals the fine-tuning problem 
which I have already mentioned:
We will know that the main architecture of our AGI works. But in our first 
experiments we will observe a behavior of the AGI which we don't want to 
have. We will have several parameters which we can change.
The big question will be: which values of the parameters will let the AGI 
do the right things?
This could be an important problem for the development of AGI because in my 
opinion the difference between a human and a monkey is only fine tuning. And 
nature needed millions of years for this fine tuning.

I think there is no way to avoid this problem, but this problem is no show 
stopper.

- Matthias


Mike Tintner wrote:

This is fine and interesting, but hasn't anybody yet read Kauffman's 
Reinventing the Sacred (publ this year)? The entire book is devoted to this 
theme and treats it globally, ranging from this kind of emergence in 
physics, to emergence/evolution of natural species, to emergence/deliberate 
creativity in the economy and human thinking. Kauffman systematically - and 
correctly - argues that the entire, current mechanistic worldview of science 
is quite inadequate to dealing with and explaining creativity in every form 
throughout the world and at every level of evolution. Kauffman also 
explicitly deals with the kind of problems AGI must solve if it is to be 
AGI.

In fact, everything is interrelated here. Ben argues:

we are not trying to understand some natural system, we are trying to 
**engineer** systems

Well, yes, but how you get emergent physical properties of matter, and how 
you get species evolving from each other with creative, scientifically 
unpredictable new organs and features, can be *treated* as 
design/engineering problems (even though, of course, nature was the 
designer).

In fact, AGI *should* be doing this - should be understanding how its 
particular problem of getting a machine to be creative, fits in with the 
science-wide problem of understanding creativity in all its forms. The two 
are mutually enriching (indeed mandatory when it comes to a) the human and 
animal brain's creativity and an AGI's and b) the evolution of the brain 
and the evolutionary path of AGI's).


Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Mike Tintner

Matthias (cont),

Alternatively, if you'd like *the* creative (and somewhat mathematical) 
problem de nos jours - how about designing a bail-out fund/mechanism for 
either the US or the world, that will actually work? No show-stopper for 
your AGI? [How would you apply logic here, Abram?] 







Re: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Mike Tintner
Ben,

I am frankly flabbergasted by your response. I have given concrete example 
after example of creative, domain-crossing problems, where obviously there is 
no domain or frame that can be applied to solving the problem (as does 
Kauffman) - and at no point do you engage with any of them - or have the least 
suggestion as to how a logical/mathematical AGI could go about solving them, or 
identify a suitable domain.

On the contrary, it is *you* who repeatedly resort to essentially *reference to 
authority* arguments - saying read my book, my paper etc. etc. - and what 
basically amounts to the tired line I have the proof, I just don't have the 
time to write it in the margin (or it's too complicated for your pretty 
little head). Be honest - when and where have you ever addressed creative 
problems? (Just count how many problems I have raised.)

Just as it is obvious that I know next to nothing about programming, it is also 
obvious that you have v. little experience of discussing creative 
problemsolving - at, I stress, a *metacognitive* level. (And nor, AFAIK, do any 
AGI-ers - only partly excepting Minsky.)

All this stands in total, stark contrast to any discussion of logical or 
mathematical problems, where you are always delighted to engage in detail, and 
v. helpful and constructive - and do not make excuses to cover up your 
inexperience.



Mike,



by definition a creative/emergent problem is one where you have to bring 
about a given effect by finding radically new kinds of objects that move or 
relate in radically new kinds of ways -  to produce that effect. By definition, 
you *do not know which domain is appropriate to solving the problem,* (what 
kinds of objects or moves are relevant),  let alone have a set of instructions 
to hold your hand every step of the way -   and the eventual solution will 
involve crossing hitherto unrelated domains.

That, as Kauffman also insists, is an absolute show stopper. Which is why 
the show that is AGI not only cannot go on, but hasn't even started.


  This is just an argument by reference to authority ... Stu Kauffman wrote a 
book saying X, therefore we're supposed to believe X is true???

  He certainly did not convincingly demonstrate in any of his books or papers 
that AGI cannot deal with creativity in the same sense that humans can...

  These discussions get **so** tiresome... I am soon going to stop 
participating in threads of this nature...

  ben g 






Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Mike Tintner

Brad: Unfortunately,
as long as the mainstream AGI community continue to hang on to what 
should, by now, be a thoroughly-discredited strategy, we will never (or 
too late) achieve human-beneficial AGI.


Brad,

Perhaps you could give a single example of what you mean by non-human 
intelligence. What sort of faculties for instance? Or problemsolving? How 
will these be fundamentally different?


Maybe you didn't follow my discussion with Ben about this - it turned out 
that Novamente is entirely humanoid. IOW when AGI-ers talk of producing a 
non-human intelligence, what they actually mean in practice is 
cherry-picking those human faculties they like (and think they can mimic) and 
ignoring those they don't (or find too difficult). There is no real, 
thought-through conception of a non-human entity at all. Have you thought one 
through? 







Re: [agi] Super-Human friendly AGI

2008-10-05 Thread Mike Tintner

John,

Sorry if I missed something, but I can't see any attempt by you to 
schematise/ classify emotions as such, e.g.


melancholy, sorrow, bleakness...
joy, exhilaration, euphoria..

(I'd be esp. interested in any attempt to establish a gradation of emotional 
terms).


Do you have anything like that? 







Re: [agi] COMP = false

2008-10-04 Thread Mike Tintner
 make up for that loss, because you are in a 
circumstance of an intrinsically unknown distal natural world, (the 
novelty of an act of scientific observation).

.
= COMP is false.
==
OK.  There are subtleties here.
The refutation is, in effect, a result of saying you can't do it (replace 
a scientist with a computer) because you can't simulate inputs. It is just 
that the nature of 'inputs' has been traditionally impoverished by 
assumptions born merely of cross-disciplinary blindness. Not enough 
quantum mechanics or electrodynamics is done by those exposed to 'COMP' 
principles.


This result, at first appearance, says you can't simulate a scientist. 
But you can! If you already know what is out there in the natural world 
then you can simulate a scientific act. But you don't - by definition  - 
you are doing science to find out! So it's not that you can't simulate a 
scientist, it is just that in order to do it you already have to know 
everything, so you don't want to ... it's useless. So the words 
'refutation of COMP by an attempted  COMP implementation of a scientist' 
have to be carefully contrasted with the words you can't simulate a 
scientist.


The self referential use of scientific behaviour as scientific evidence 
has cut logical swathes through all sorts of issues. COMP is only one of 
them. My AGI benchmark and design aim is the artificial scientist.  Note 
also that this result does not imply that real AGI can only be organic 
like us. It means that real AGI must have new chips that fully capture all 
the inputs and make use of them to acquire knowledge the way humans do. A 
separate matter altogether. COMP, as an AGI designer's option, is out of 
the picture.


I think this just about covers the basics. The papers are dozens of pages; 
I can't condense it any more than this. I have debated this so much it's 
way past its use-by date. Most of the arguments go like this: But you 
CAN! I am unable to defend such 'arguments from 
under-informed-authority' ... I defer to the empirical reality of the 
situation and would prefer that it be left to justify itself. I did not 
make any of it up. I merely observed ... and so if you don't mind I'd 
rather leave the issue there.


regards,

Colin Hales



Mike Tintner wrote:

Colin:

1) Empirical refutation of computationalism...

.. interesting because the implication is that if anyone
doing AGI lifts their finger over a keyboard thinking they can be
directly involved in programming anything to do with the eventual
knowledge of the creature...they have already failed. I don't know
whether the community has internalised this yet.

Colin,

I'm sure Ben is right, but I'd be interested to hear the essence of your 
empirical refutation. Please externalise it so we can internalise it :)






Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Mike Tintner
Matthias: I think it is extremely important that we give an AGI no bias about 
space and time as we seem to have.

Well, I (and possibly Ben) have been talking about an entity that is in many 
places at once - not in NO place. I have no idea how you would swing that - 
other than what we already have - machines that are information-processors 
with no sense of identity at all. Do you? 







Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Mike Tintner

Matthias,

First, I see both a human body-brain and a distributed entity, such as a 
computer network,  as *physically integrated* units, with a sense of their 
physical integrity. The fascinating thought, (perhaps unrealistic) for me 
was of being able to physically look at a scene or scenes, from different 
POV's more or less simultaneously - a thought worth exploring.


Second, your idea, AFAICT, of an unbiassed-as-to-time-and-space 
intelligence, while v. vague, is also worth exploring. I suspect the 
all-important fallacy here is of pure objectivity - the idea that an 
object or scene or world can be depicted WITHOUT any location or reference 
or comparison. When we talk of time and space - which are fictions that have 
no concrete existence - we are really talking (no?) of frameworks we use to 
locate and refer other things to. Clocks. 3/4-dimensional grids... All 
things have to be referred and compared to other things in order to be 
understood, which is an inevitably biassed process. So is there any such 
thing as your non-bias? Just my first stumbling thoughts.





Matthias:


From my points 1. and 2. it should be clear that I was not talking about a

distributed AGI which is in NO place. The AGI you mean consists of several
parts which are in different places. But this is already the case with the
human body. The only difference is, that the parts of the distributed AGI
can be placed several kilometers from each other. But this is only a
quantitative and not a qualitative point.

Now to my statement of an useful representation of space and time for AGI.
We know, that our intuitive understanding of space and time works very well
in our life. But the ultimate goal of AGI is that it can solve problems
which are very difficult for us. If we give an AGI bias of a model of space
and time which is not state of the art of the knowledge we have from
physics, then we give AGI a certain limitation which we ourselves suffer
from and which is not necessary for an AGI.
This point has nothing to do with the question whether the AGI is
distributed or not.
I mentioned this point because your question has relations to the more
fundamental question whether and which bias we should give AGI for the
representation of space and time.


Ursprüngliche Nachricht-
Von: Mike Tintner [mailto:[EMAIL PROTECTED]
Gesendet: Samstag, 4. Oktober 2008 14:13
An: agi@v2.listbox.com
Betreff: Re: [agi] I Can't Be In Two Places At Once.

Matthias: I think it is extremely important, that we give an AGI no bias
about
space and time as we seem to have.

Well, I ( possibly Ben) have been talking about an entity that is in many
places at once - not in NO place. I have no idea how you would swing that -
other than what we already have - machines that are information-processors
with no sense of identity at all.Do you?






Re: [agi] COMP = false

2008-10-04 Thread Mike Tintner
Matt:The problem you describe is to reconstruct this image given the highly 
filtered and compressed signals that make it through your visual perceptual 
system, like when an artist paints a scene from memory. Are you saying that 
this process requires a consciousness because it is otherwise not 
computable? If so, then I can describe a simple algorithm that proves you 
are wrong: try all combinations of pixels until you find one that looks the 
same.


Matt,

Simple? Well, you're good at maths. Can we formalise what you're arguing? A 
computer screen, for argument's sake.  800 x 600, or whatever. Now what is 
the total number of (diverse) objects that can be captured on that screen, 
and how long would it take your algorithm to enumerate them?


(It's an interesting question, because my intuition says to me that there is 
an infinity of objects that can be depicted on any screen (or drawn on a 
page). Are you saying that there aren't? - that you can in effect predict 
new objects as yet unconceived, new kinds of ipods/inventions/evolved 
species, say - at least in terms of their representations on a flat 
screen - with an algorithm?) 







Re: [agi] COMP = false

2008-10-04 Thread Mike Tintner
Ben,

Thanks for reply. I'm a bit lost though. How does this formula take into 
account the different pixel configurations of different objects? (I would have 
thought we can forget about the time of display and just concentrate on the 
configurations of points/colours, but no doubt I may be wrong).

Roughly how large a figure do you come up with, BTW?

I guess a related question is the old one - given a keyboard of letters, what 
is the total number of works possible with, say, 500,000 key presses, and how 
many 500,000-press attempts will it (or could it) take the proverbial monkey to 
type out, say, a 50,000-word play called Hamlet?

In either case, I would imagine, the numbers involved are too large to be 
practically manageable in, say, this universe, (which seems to be a common 
yardstick). Comments?   The maths here does seem important, because it seems to 
me to be the maths of creativity - and creative possibilities - in a given 
medium. A somewhat formalised maths, since creators usually find ways to 
transcend and change their medium - but useful nevertheless. Is such a maths 
being pursued?

  On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Matt:The problem you describe is to reconstruct this image given the highly 
filtered and compressed signals that make it through your visual perceptual 
system, like when an artist paints a scene from memory. Are you saying that 
this process requires a consciousness because it is otherwise not computable? 
If so, then I can describe a simple algorithm that proves you are wrong: try 
all combinations of pixels until you find one that looks the same.

Matt,

Simple? Well, you're good at maths. Can we formalise what you're arguing? A 
computer screen, for argument's sake.  800 x 600, or whatever. Now what is the 
total number of (diverse) objects that can be captured on that screen, and how 
long would it take your algorithm to enumerate them?

(It's an interesting question, because my intuition says to me that there 
is an infinity of objects that can be depicted on any screen (or drawn on a 
page). Are you saying that there aren't? -


  There is a finite number of possible screen-images, at least from the point 
of view of the process sending digital signals to the screen.

  If the monitor refreshes each pixel N times per second, then over an interval 
of T seconds, if each pixel can show C colors, then there are

  C^(N*T*800*600)

  possible different scenes showable on the screen during that time period

  A big number but finite!
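
  (To get a feel for the size of that number, here is a minimal Python sketch 
with assumed illustrative values - 800x600 pixels, 24-bit colour, 60 Hz 
refresh, a 1-second interval - none of which were specified in the post:)

    import math

    C = 2 ** 24      # assumed colours per pixel (24-bit)
    N = 60           # assumed refreshes per second
    T = 1            # assumed seconds of display
    W, H = 800, 600  # screen resolution

    # Number of distinct screen sequences over the interval: C^(N*T*W*H)
    digits = N * T * W * H * math.log10(C)
    print(f"roughly 10^{digits:.3g} possible screen sequences")
    # about 10^(2.1e8): finite, but far beyond any exhaustive enumeration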

  Drawing on a page is a different story, as it gets into physics questions, 
but it seems rather likely there is a finite number of pictures on the page 
that are distinguishable by a human eye.  

  So, whether or not an infinite number of objects exist in the universe, only 
a finite number of distinctions can be drawn on a monitor (for certain), or by 
an eye (almost surely)


  ben g




Re: [agi] Testing, and a question....

2008-10-03 Thread Mike Tintner

Colin:

1) Empirical refutation of computationalism...

.. interesting because the implication is that if anyone
doing AGI lifts their finger over a keyboard thinking they can be
directly involved in programming anything to do with the eventual
knowledge of the creature...they have already failed. I don't know
whether the community has internalised this yet.

Colin,

I'm sure Ben is right, but I'd be interested to hear the essence of your 
empirical refutation. Please externalise it so we can internalise it :) 







Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Mike Tintner
I think either way - computers or robots - a distributed entity has to be 
looking at the world from different POV's more or less simultaneously, even if 
rapidly switching. My immediate intuitive response is that that would make the 
entity much less self-ish - much more open to merging or uniting with others.

The idea of a distributed entity may well have the power to change our ideas 
about God/the divine force/principle. I suspect our ideas here are directly or 
indirectly v. located. Even if we, say, think about God or the force being 
everywhere, it's hard not to think of that being the same force spread out.

But the idea of a distributed entity IMO opens up the possibility of an entity 
with a highly multiple personality - and perhaps also might make it possible 
to see all humans, say, and/or animals as one - an idea which has always given 
me, personally, a headache.


Ben:yah, I discuss this in chapter 2 of The Hidden Pattern ;-) ...

the short of it is: the self-model of such a mind will be radically different 
than that of a current human, because we create our self-models largely by 
analogy to our physical organisms ...

intelligences w/o fixed physical embodiment will still have self-models but 
they will be less grounded in body metaphors ... hence radically different 

we can explore this different analytically, but it's hard for us to grok 
empathically...

a hint of this is seen in the statement my son Zeb (who plays too many 
videogames) made: i don't like the real world as much as videogames because in 
the real world I always have first person view and can never switch to third 
person   

one would suspect that minds w/o fixed embodiment would have more explicitly 
contextualized inference, rather than so often positioning all their 
inferences/ideas within one default context ... for starters...

ben


  On Fri, Oct 3, 2008 at 8:43 PM, Mike Tintner [EMAIL PROTECTED] wrote:

The foundation of the human mind and system is that we can only be in one 
place at once, and can only be directly, fully conscious of that place. Our 
world picture,  which we and, I think, AI/AGI tend to take for granted, is an 
extraordinary triumph over that limitation   - our ability to conceive of the 
earth and universe around us, and of societies around us, projecting ourselves 
outward in space, and forward and backward in time. All animals are similarly 
based in the here and now.

But,if only in principle, networked computers [or robots] offer the 
possibility for a conscious entity to be distributed and in several places at 
once, seeing and interacting with the world simultaneously from many POV's.

Has anyone thought about how this would change the nature of identity and 
intelligence? 







  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr Samuel Johnson






Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben: the reason AGI is so hard has to do with Santa Fe Institute style
complexity ...

Intelligence is not fundamentally grounded in any particular mechanism but 
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their 
environments 

Characterizing what these emergent structures/dynamics are is hard, 

Ben,

Maybe you could indicate how complexity might help solve any aspect of 
*general* intelligence - how it will help in any form of crossing domains, such 
as analogy, metaphor, creativity, any form of resourcefulness etc. - giving 
some example. Personally, I don't think it has any connection - and it doesn't 
sound, from your last sentence, as if you actually see a connection :). 




Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben: analogy is mathematically a matter of finding mappings that match certain 
constraints.   The traditional AI approach to this would be to search the 
constrained space of mappings using some search heuristic.  A complex systems 
approach is to embed the constraints into a dynamical system and let the 
dynamical system evolve into a configuration that embodies a mapping matching 
the constraints.
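
(A minimal Python sketch of the "traditional AI" half of that claim - 
exhaustively searching a constrained space of mappings between two tiny, 
invented relation sets; the domains and names below are assumptions for 
illustration only, not anyone's actual system:)

    from itertools import permutations

    # Two toy domains, each described as a set of (relation, arg1, arg2) facts.
    source = {("attracts", "sun", "planet"), ("orbits", "planet", "sun")}
    target = {("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")}

    src_objs = ["sun", "planet"]
    tgt_objs = ["nucleus", "electron"]

    def score(mapping):
        # Constraint: count source relations carried onto identical target relations.
        carried = {(r, mapping[a], mapping[b]) for (r, a, b) in source}
        return len(carried & target)

    # Exhaustive search over all one-to-one mappings (tiny here, explosive in general).
    best = max((dict(zip(src_objs, p)) for p in permutations(tgt_objs)), key=score)
    print(best, score(best))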

Ben,

If you are to arrive at a surprising analogy or solution to a creative problem, 
the first task is to find a new domain that maps on to or is relevant to 
the given domain, and by definition you have no rules for where to search. If, 
for example, you had to solve Kauffman's practical problem - how do I 
hide/protect a loose computer cord so that no one trips over it? - which 
domains do you start with (that connect to computer cords), and where do you 
end? Books? Bricks? Tubes? Sellotape? Warning signs? There is actually an 
infinity (or practically endless set) of possibilities. And there are no 
pre-applicable rules about which domains to search, or about what constitutes 
hiding/protecting - and therefore about the constraints of the problem, or indeed 
how much evidence to consider, and what constitutes evidence. And hiding 
computer cords and other household objects is not part of any formal subject 
or branch of reasoning.

Ditto if you, say, are an adman and have to find a new analogy for your beer 
being as cool as a ---  (must be new/surprising aka cabbages and kings, and 
preferably in form as well as content, e.g. as cool as a tool in a pool as a 
rule [1st attempt] ).

Doesn't complexity only apply when you have some formulae or rules to start 
with? But you don't with analogy. That's the very nature of the problem

That's why I asked you to give me a problem example. {Can you remember a 
problem example of analogy or otherwise crossing domains from your book - just 
one? )

Nor can I see how maths applies to problems such as these, or any crossing of 
domains, other than to prove that there are infinite possibilities. Which 
branch of maths actually deals with analogies? 

And the statement:

it is provable that complex systems methods can solve **any** analogy problem, 
given appropriate data 

seems outrageous. You can prove mathematically that you can solve the creative 
problem of the engram (how info. is laid down in the brain)? That you can 
solve any of  the problems of discovery and invention currently being faced by 
science and technology? A mind-reading machine, say? Or did you mean problems 
where you are given appropriate data, i.e. the answers/clues/rules? Those 
aren't problems of analogy or creativity. 

I don't know about you, but a lot of computer guys don't actually understand 
what analogy is. Hofstadter's oft-cited 'xyy is to xyz as abb is to a--?', for 
example, is NOT an analogy. It is logic.

And if you look at your brief answer para, you will find that while you talk 
of mappings and constraints (which are not necessarily AGI at all), you make 
no mention in any form of how complexity applies to the crossing of hitherto 
unconnected domains [or matrices, frames etc.], which, of course, are.








  Ben,
Ben: the reason AGI is so hard has to do with Santa Fe Institute style
complexity ...

Intelligence is not fundamentally grounded in any particular mechanism but 
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their 
environments 

Characterizing what these emergent structures/dynamics are is hard, 

Ben,

Maybe you could indicate how complexity might help solve any aspect of 
*general* intelligence - how it will help in any form of crossing domains, such 
as analogy, metaphor, creativity, any form of resourcefulness  etc.-  giving 
some example.  

 
Personally,  I don't think it has any connection  - and it doesn't sound 
from your last sentence, as if you actually see a connection :). 



  You certainly draw some odd conclusions from the wording of peoples' 
sentences.  I not only see a connection, I wrote a book on this subject, 
published by Plenum Press in 1997: From Complexity to Creativity.

  Characterizing these things at the conceptual and even mathematical level is 
not as hard as realizing them at the software level... my 1997 book was 
concerned with the former.

  I don't have time today to cut and paste extensively from there to satisfy 
your curiosity, but you're free to read the thing ;-) ... I still agree with 
most of it ...

  To give a brief answer to one of your questions: analogy is mathematically a 
matter of finding mappings that match certain constraints.   The traditional AI 
approach to this would be to search the constrained space of mappings using 
some search heuristic.  A complex systems approach is to embed the constraints 
into a dynamical system and let the dynamical system evolve into a 
configuration that embodies a mapping matching the 

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Can't resist, Ben..

it is provable that complex systems methods can solve **any** analogy problem, 
given appropriate data 

Please indicate how your proof applies to the problem of developing an AGI 
machine. (I'll allow you to specify as much appropriate data as you like - 
any data,  of course, *currently* available).






Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben,

Well, funny perhaps to some. But nothing to do with AGI - which has nothing to 
do with well-defined problems. 

The one algorithm or rule that can be counted on here is that AGI-ers won't 
deal with the problem of AGI - how to cross domains (in ill-defined, 
ill-structured problems). Applies to Richard too. But the reasons for this 
general avoidance aren't complex :)

  Ben,
  It doesn't have any application...

  My proof has two steps

  1)
  Hutter's paper

  The Fastest and Shortest Algorithm for All Well-Defined Problems
  http://www.hutter1.net/ai/pfastprg.htm

  2)
  I can simulate Hutter's algorithm (or *any* algorithm)
  using an attractor neural net, e.g. via Mikhail Zak's
  neural nets with Lipschitz-discontinuous threshold
  functions ...


  This is all totally useless as it requires infeasibly much computing power 
... but at least, it's funny, for those of us who get the joke ;-)

  ben




  On Tue, Sep 30, 2008 at 3:38 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Can't resist, Ben..

it is provable that complex systems methods can solve **any** analogy 
problem, given appropriate data 

Please indicate how your proof applies to the problem of developing an AGI 
machine. (I'll allow you to specify as much appropriate data as you like - 
any data,  of course, *currently* available).








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr Samuel Johnson






Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben,

I must assume you are being genuine here - and don't perceive that you have not 
at any point illustrated how complexity might lead to the solution of any 
given general (domain-crossing) problem of AGI.

Your OpenCog design also does not illustrate how it is to solve problems - how 
it is, for example, to solve the problems of concept formation, especially 
speculative concept formation. There are no examples in the relevant passages. 
General statements of principle, but no practical examples. [Otherwise, offhand, 
I can't see any sections that relate to crossing domains.]

You rarely give examples - i.e. you do not ground your theories - your novel 
ideas (as we have discussed before). [You give standard textbook examples of 
problems, of course, in other, unrelated discussions.] 

You have already provided one very suitable example of a general AGI problem - 
how is your pet, having learnt one domain - to play fetch - to use that 
knowledge to cross into another domain - to learn/discover the game of 
hide-and-seek? But I have repeatedly asked you to give me your ideas on how 
your system will deal with this problem, and you have always avoided it. I 
don't think, frankly, you have an idea how it will make the connection in an 
AGI way. I am extremely confident you couldn't begin to explain how a complex 
approach will make the cross-domain connection between fetching and 
hiding/seeking. (What *is* the connection, BTW?)

If it is any consolation - this reluctance to deal with AGI problems is 
universal among AGI-ers. Richard. Pei. Minsky...

Check how often in the past few years cross-domain problems have been dealt 
with on this group. Masses of programming, logical and mathematical problems, 
of course, in great, laudable detail. But virtually none that relate to 
crossing domains.

One thing is for sure - if you don't discuss and deal with the problems of AGI 
- and lots and lots of examples - you will never get any better at them. The 
answers won't magically pop up. No one ever got better at a skill by *not* 
practising it.

P.S. As for:

gather as much money as possible while upsetting as few people as possible [or as little as possible] - it is a massively open-ended [and indeed GI] problem that can be instantiated in a virtual infinity of moneymaking domains [from stockmarkets to careers, small jobs, prostitution and virtually any area of the economy] with a virtual infinity of constructions of upsetting. Please explain how a complex AGI program, which by definition would not be pre-prepared for such a problem, would tightly define it or even *want* to.

And note your first instinct - rather than asking- how can we deal with this 
open-ended problem in an open-ended AGI way - you immediately talk about trying 
to define it in a closed-ended, tightly defined, basically *narrow* AI way. 
That again is a typical, pretty universal instinct among AGI-ers.

[Remember Levitt's "What people need is not a quarter-inch drill, but quarter-inch holes" - AGI should be first and foremost not about how you construct certain logical programs, but about how you solve certain problems - and then work out what programs you need.]






  Ben,

Well, funny perhaps to some. But nothing to do with AGI - which has nothing to do with well-defined problems.


  I wonder if you are misunderstanding his use of terminology.

  How about the problem of gathering as much money as possible while upsetting 
people as little as possible?

  That could be well defined in various ways, and would require AGI to solve as 
far as I can see...

   
The one algorithm or rule that can be counted on here is that AGI-ers won't 
deal with the problem of AGI -  how to cross domains (in ill-defined, 
ill-structured problems). 


  I suggest the OpenCogPrime design can handle this, and it's outlined in detail at

  http://www.opencog.org/wiki/OpenCogPrime:WikiBook

  You are not offering any counterarguments to my suggestion, perhaps (I'm not 
sure)
  because you lack the technical expertise or the time to read about the design
  in detail.

  At least, Richard Loosemore did provide a counterargument, which I disagreed 
with ... but you provide
  no counterargument, you just repeat that you don't believe the design 
addresses the problem ...
  and I don't know why you feel that way except that it intuitively doesn't 
seem to feel right
  to you...

  -- Ben G









Re: [agi] universal logical form for natural language

2008-09-29 Thread Mike Tintner
Ben and Stephen,

AFAIK your focus - and the universal focus - in this debate on how and whether 
language can be symbolically/logically interpreted - is on *individual words 
and sentences.*  A natural place to start. But you can't stop there - because 
the problems, I suggest, (hard as they already are), only seriously begin when 
you try to interpret *passages* - series of sentences from texts - and connect 
one sentence with another. Take:

John sat down in the carriage. His grim reflection stared at him through the 
window. A whistle blew. The train started shuddering into motion, and slowly 
gathered pace. He was putting Brighton behind him for good. And just then the 
conductor popped his head through the door.

I imagine you can pose the interpretative questions yourself. How do you 
connect any one sentence with any other here? Where is the whistle blowing? 
Where is the train moving? Inside the carriage or outside? Is the carriage 
inside or outside or where in relation to the moving train?  Was he putting 
Brighton *physically* behind him like a cushion? Did the conductor break his 
head? etc. etc.

The point is - in reading passages, in order to connect up sentences, you have to do a massive amount of *reading between the lines*. In doing that, you have to reconstruct the world, or the parts of the world being referred to, from your brain's own models of that world. (To understand the above passage, for example, you employ a very complex model of train travel.)

And this will apply to all kinds of passages - to arguments as well as stories. 
 (Try understanding Ben's argument below).

How does Stephen or YKY or anyone else propose to read between the lines? And 
what are the basic world models, scripts, frames etc etc. that you think 
sufficient to apply in understanding any set of texts, even a relatively 
specialised set?

(Has anyone seriously *tried* understanding passages?)


  Stephen,

  Yes, I think your spreading-activation approach makes sense and has plenty of 
potential.

  Our approach in OpenCog is actually pretty similar, given that our 
importance-updating dynamics can be viewed as a nonstandard sort of spreading 
activation...

  I think this kind of approach can work, but I also think that getting it to 
work generally and robustly -- not just in toy examples like the one I gave -- 
is going to require a lot of experimentation and trickery.  

  Of course, if the AI system has embodied experience, this provides extra 
links for the spreading activation (or analogues) to flow along, thus 
increasing the odds of meaningful results...

  Also, I think that spreading-activation type methods can only handle some 
cases, and that for other cases one needs to use explicit inference to do the 
disambiguation.

  My point for YKY was (as you know) not that this is an impossible problem but 
that it's a fairly deep AI problem which is not provided out-of-the-box in any 
existing NLP toolkit.  Solving disambiguation thoroughly is AGI-hard ... 
solving it usefully is not ... but solving it usefully for *prepositions* is 
cutting-edge research going beyond what existing NLP frameworks do...

  -- Ben G


  On Mon, Sep 29, 2008 at 1:25 PM, Stephen Reed [EMAIL PROTECTED] wrote:

Ben gave the following examples that demonstrate the ambiguity of the 
preposition with:


People eat food with forks


People eat food with friend[s]


People eat food with ketchup


The Texai bootstrap English dialog system, whose grammar rule engine I'm 
currently rewriting, uses elaboration and spreading activation to perform 
disambiguation and pruning of alternative interpretations.  Let's step through 
how Texai would process Ben's examples. According to Wiktionary, "with" has 
among its word senses the following:

  - as an instrument; by means of
  - in the company of; alongside; along side of; close to; near to
  - in addition to, as an accessory to

It's clear when I make these substitutions which word sense is to be 
selected:


People eat food by means of forks

People eat food in the company of friends

People eat ketchup as an accessory to food


Elaboration of the Texai discourse context provides additional entailed 
propositions with respect to the objects actually referenced in the utterance.  
 The elaboration process is efficiently performed by spreading activation over 
the KB from the focal terms with respect to context.  The links explored by 
this process can be formed by offline deductive inference, or learned from 
heuristic search and reinforcement learning, or simply taught by a mentor.

Relevant elaborations I would expect Texai to make for the example 
utterances are:


a fork is an instrument

there are activities that a person performs as a member of a group of 
friends; to eat is such an activity

ketchup is a condiment; a condiment is an accessory with regard to food


Texai considers all interpretations 
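
[A minimal illustrative sketch, in Python, of the spreading-activation disambiguation described above. This is not Texai's engine or KB - the toy knowledge graph, link weights, sense anchors and function names are all invented for the example:]

from collections import defaultdict

# Toy knowledge graph: undirected links between concepts, with link strengths.
KB = {
    ("fork", "instrument"): 0.9,
    ("instrument", "by-means-of"): 0.9,        # anchor concept for sense 1
    ("friend", "person"): 0.9,
    ("person", "group-activity"): 0.6,
    ("group-activity", "in-company-of"): 0.9,  # anchor concept for sense 2
    ("ketchup", "condiment"): 0.9,
    ("condiment", "accessory-to-food"): 0.9,   # anchor concept for sense 3
}

SENSE_ANCHORS = {
    "by means of": "by-means-of",
    "in the company of": "in-company-of",
    "as an accessory to": "accessory-to-food",
}

def neighbours(node):
    # Yield (neighbour, weight) pairs for a concept.
    for (a, b), w in KB.items():
        if a == node:
            yield b, w
        elif b == node:
            yield a, w

def spread(seeds, steps=3, decay=0.5):
    # Spread activation outward from the seed concepts for a few steps.
    activation = defaultdict(float)
    frontier = {s: 1.0 for s in seeds}
    for _ in range(steps):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            activation[node] += act
            for nbr, w in neighbours(node):
                next_frontier[nbr] += act * w * decay
        frontier = next_frontier
    for node, act in frontier.items():   # fold in the final frontier too
        activation[node] += act
    return activation

def disambiguate_with(object_noun):
    # Pick the sense of "with" whose anchor concept receives the most activation.
    act = spread([object_noun])
    return max(SENSE_ANCHORS, key=lambda sense: act[SENSE_ANCHORS[sense]])

for noun in ("fork", "friend", "ketchup"):
    print(f"People eat food with {noun} -> 'with' = {disambiguate_with(noun)}")

[Run as-is, the toy graph selects "by means of" for forks, "in the company of" for friends and "as an accessory to" for ketchup, mirroring the substitutions above; a real system would spread over a large learned KB and over all focal terms of the utterance, not a single noun.]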

Re: [agi] universal logical form for natural language

2008-09-29 Thread Mike Tintner
David,

Thanks for reply. Like so many other things, though, working out how we 
understand texts is central to understanding GI - and something to be done 
*now*. I've just started looking at it, but immediately I can see that what the 
mind does - how it jumps around in time and space and POV and person/subject - 
and flexibly applies its world/subworld models - is quite awesome.

I think the word/sentence focus BTW is central to cognitive science *and* the 
embodied cog. sci. of Lakoff and co.  as well as AI/AGI.  

But the understanding of language understanding will only really come alive 
when we move the focus to passages - and how we use language to construct a) 
stories b) arguments and c) scenes (descriptive passages).   [I wonder whether 
there are any other major categories of language].

It also entails a switch from just a one-sided embodied POV to a two-sided 
embodied-embedded overview, looking at how language is embedded in the world.

To focus on sentences alone is like focussing on the odd frame in a movie. You 
can't get the picture at all.

A passage/text approach will v. quickly answer Matt's:

I mean that a more productive approach would be to try to understand why the 
problem is so hard.


  David:

How does Stephen or YKY or anyone else propose to read between the lines? 
And what are the basic world models, scripts, frames etc etc. that you 
think sufficient to apply in understanding any set of texts, even a relatively 
specialised set?

(Has anyone seriously *tried* understanding passages?)

  That's a most thoughtful and germane question! The short answer is no, we're 
not ready yet to even *try* to tackle understanding passages. Reaching that 
goal is definitely on the roadmap though, and there's a concrete plan to get 
there involving learning through vast and varied activities experienced over 
the course of many years of practically continuous residence in numerous 
virtual worlds. The plan indeed includes the continuous creation, variation and 
development of mental world-models within an OCP-based mind. Attention 
allocation and many other mind dynamics (CIMDynamics) crucial to this 
world-modeling faculty must be adequately developed, tested and tuned as a 
pre-requisite to begin trying to understand passages (and, also to generate and 
communicate imagined world-models as a human story teller would do; a curious 
byproduct of an intelligent system that can reason about potential events and 
scenarios!)

  NB: help is needed on the OpenCog wiki to better document many of the 
concepts discussed here and elsewhere, e.g. Concretely-Implemented Mind 
Dynamics (CIMDynamics) requires a MindOntology page explaining it conceptually, 
in addition to the existing nuts-and-bolts entry in the OpenCogPrime section. 

  -dave







Re: [agi] universal logical form for natural language

2008-09-29 Thread Mike Tintner

Eric,

Thanks for link. Flipping through quickly, it still seemed sentence-based.

Here's an example of time flipping - fast-forwarding text - and the kind 
of jumps that the mind can make


AGI Year One. AGI is one of the great technological challenges. We believe 
we have the basic technology - the basic modules - to meet that challenge.
AGI Year Five. We can reach the goal of AGI in 10 years, if we really, 
really try.

AGI Year Ten.  It may take longer than we thought, but we can get there...
AGI Year Fifteen: It's proved a much larger problem than we ever 
imagined..


[n.b. I'm not trying to be historically or otherwise accurate :)]

But note how your mind had no problem creating a v. complex underlying 
time-jumping scenario to understand - and fill/read between the lines of - 
that text. No current approach has the slightest idea how to do that, I 
suggest.  You can't do it by a surface approach,  simply analysing how words 
are used in however many million verbally related sentences in texts on the 
net.





http://video.google.ca/videoplay?docid=-7933698775159827395ei=Z1rhSJz7CIvw-QHQyNkCq=nltkvt=lf

NLTK video ;O



Re: [agi] universal logical form for natural language

2008-09-29 Thread Mike Tintner

  Ben,

  Er, you seem to be confirming my point. Tomasello from Wiki is an early child 
development psychologist. I want a model that keeps going to show the stages of 
language acquisition from, say, 7-13, on through the teens, and into the twenties - 
that shows at what stages we understand progressively general and abstract 
concepts like, say, government, philosophy, relationships,  etc etc. - and why 
different, corresponding texts are only understandable at different ages.

  There is nothing like this because there is no true *embedded* cognitive 
science that looks at how long it takes to build up a picture of the world, and 
how language is embedded in our knowledge of the world. [The only thing that 
comes at all close to it, that I know, is Margaret Donaldson's work, if I 
remember right].

  Re  rhetorical structure theory - many thanks for the intro -  it looks 
interesting. But again this is not an embedded approach:

  RST is intended to describe texts, rather than the processes of creating or 
reading and understanding them

  For example, to understand sentences they quote like

  He tends to be viewed now as a biologist, but in his 5 years on the Beagle 
his main work was geology, and he saw himself as a geologist. His work 
contributed significantly to the field.

  requires a considerable amount of underlying knowledge about Darwin's life, 
and an extraordinary ability to handle timelines - and place events/sentences 
in time.

  I can confidently bet that no one is attempting this type of text/structural 
analysis because no one, as I said, is taking an embedded approach to language. 
[Embedded is to embodied in the analysis of language use and thought as 
environment is to nature in the analysis of behaviour generally].


  Ben,


Cognitive linguistics also lacks a true developmental model of language 
acquisition that goes beyond the first few years of life, and can embrace all 
those several - and, I'm quite sure, absolutely necessary - stages of mastering 
language and building a world picture.


  Tomasello's theory of language acquisition specifically embraces the 
phenomena you describe.  What don't you like about it?

  ben 









Re: [agi] universal logical form for natural language

2008-09-29 Thread Mike Tintner

Abram,

Yes, I'm aware of Schank - and failed to reference him. I think though that 
that approach signally failed. And you give a good reason - it requires too 
much knowledge entry.  And that is part of my point. On the surface, 
language passages can appear to be relatively simple, but actually they 
involve the manipulation of very complex underlying world pictures to fill 
in the gaps and complete them. Building up those world pictures is a 
stage-by-stage developmental, multi-level hierarchical process, which takes 
more than twenty years of education for a developing human.


There are inevitable reasons why you can't simply start your database with sentences like "Daddy hit Susan hard" and immediately add sentences like "The government hit the rebels hard". It takes a lot more knowledge to understand what governments and rebels are than Daddies, and to understand how their hitting differs from Daddy's - vastly more than is contained in dictionary definitions. There are no short-cuts to acquiring this knowledge.


We need a true developmental psychology [that covers the whole of youth] to 
help us understand how a world picture is developed, just as we need a true 
evolutionary psychology [that covers all species and not just human].


P.S. I think also that the passages vs sentences distinction may actually 
be distinctive, because it really demands that you start with a broad range 
of actual texts and try to analyse their structural nature. You need an 
initially scientific and general approach.


My guess is that Schank and AI generally start from a technological POV, 
conceiving of *particular* approaches to texts that they can implement, 
rather than first attempting a *general* overview.


P.P.S. Thanks for the story literature - great page.


Mike,

If your question is directed toward the general AI community (rather
then the people on this list), the answer is a definite YES. It was
some time ago, and as far as I know the line of research has been
dropped, yet the results are to this day quite surprisingly good (I
think). The following site has an example.

http://www.it.uu.se/edu/course/homepage/ai/vt07/SCHANK.HTM

The details of the story can vary fairly significantly and still the
system performs as well as it does here (so long as it is still a
story about traveling to get something to eat, written with the sorts
of grammatical constructs you see in that story). Of course, this is a
result of a fair amount of effort, programming scripts for everyday
events. The approach was dropped because too much knowledge entry
would be required to be practical for reading, say, a random newspaper
story. But that is just what Cyc is for.

Anyway, the point is, understanding passages is not a new field, just
a neglected one.

--Abram


Re: [agi] universal logical form for natural language

2008-09-28 Thread Mike Tintner

[Comment: Aren't logic and common sense *opposed*?]

Discursive [logical, propositional] Knowledge vs Practical [tacit] Knowledge
http://www.polis.leeds.ac.uk/assets/files/research/working-papers/wp24mcanulla.pdf

a) Knowledge: practical and discursive

Most, if not all understandings of tradition stress the way in which 
knowledge and beliefs are transmitted or transferred over time. However, as 
we have seen, different perspectives place varying emphases on the types of 
knowledge and belief being transferred. Some make practical and tacit 
knowledge primary, others make rational and/or intellectual knowledge forms 
of knowledge central. However, in principle there is no reason to assume 
that both types of knowledge are not important to tradition. Yet to maintain 
this necessitates examining to what extent these kind of knowledge are 
distinct and/or compatible. It will be suggested below that we might gain a 
better grasp of traditions by making a clear distinction between the 
different types of knowledge they can transmit. Stompka's unpacking of the 
objects of tradition into material and ideal components is instructive here. 
For this draws our attention to examine not just the relations between the 
different ideas within traditions, but also the relations between people and 
the physical objects relevant to a tradition. Drawing on realist social 
theory, I suggest drawing a distinction between practical and discursive 
forms of knowledge.


Practical knowledge
- Centrally concerns subject-object relations, e.g. someone's skill in using a bottle-opener
- Primarily tacit in content, as it involves engaging with reality through activity and dealings with artifacts (rather than manipulating symbols)
- Cognitive content entails non-verbal theorising and development of skills (rather than enunciation of propositions) (Archer, 2000: 166)


Practical knowledge emerges from our active engagement with the world of 
objects. In this view pre-verbal practical action is the way in which 
infants learn principles of logical reasoning. Learning these principles is 
necessary and prior to discursive socialisation and the acquisition of 
language. However, there is no reason to believe that such non-linguistic 
forms of practical action cease following the learning of language (Archer, 
2000: 153). Indeed the practical skills we develop often do not depend in a 
direct way upon language e.g. our abilities to use a bottle opener, or to 
control car gears through use of a clutch, are something we gain a 'feel' 
for. The best kinds of car-user instruction manual do not of themselves help 
develop many of the practical skills we need for driving. As such practical 
knowledge is regulated by our relations with material culture i.e. the 
objects and artifacts we encounter (ibid. 166) Practical knowledge is thus 
implicit and tacit, gained through activity rather than through engaging 
with linguistic propositions or discursive symbols. When practical knowledge 
is transmitted (e.g. in the form of tradition) it is done so in the form of 
'apprenticeship' where skilled individual e.g. Mastercraftsmen or a 
Professional demonstrates good practice and offers practical criticism and 
evaluation (ibid. 176) Once such skills are acquired, the use of such 
practical knowledge often becomes 'second nature'.


Discursive knowledge
- Centrally concerns subject-subject relations and linguistic communication
- Consists of theories, arguments, social norms and their propositional formulation (Archer, 2000: 173-176)
- Consists of linguistically generated meaning and symbols

Discursive knowledge is developed through our linguistic powers to 
communicate meaningfully and to attribute meanings to our relations. Thus 
discursive knowledge may consist of theories, arguments, social norms and 
the kinds of propositions associated with them (e.g. 'maximum liberty 
requires a minimal state'). The ideas contained within discursive knowledge 
stand in logical relationship to one another and can usually be represented 
in propositional forms. It is through discursive knowledge that we develop 
and maintain ideational commitments to particular doctrines, theories or 
world-views (Archer, 2000: 173-176). Discursive knowledge can act to 
constrain and/or enable our projects as actors in the world. In turn, this 
discursive knowledge can be elaborated or transformed as a result of our 
socio-linguistic interactions. Discursive knowledge is transmitted, or 
handed down (e.g. within tradition) through 'scholarship', the teaching of 
linguistically encoded theories and propositions.


b) The interaction between practical and discursive knowledge
If such a distinction between practical and discursive knowledge is accepted 
then it is clear that traditions may vary in the extent to which they 
consist of each type. For example, a tradition of British farming would 
clearly involve a high element of practical knowledge. Conversely, an 

Re: [agi] Call yourself mathematicians? [O/T]

2008-09-24 Thread Mike Tintner
Thanks, Ben, Dmitri for replies.




[agi] Balancing Body (and Mind)

2008-09-24 Thread Mike Tintner
Piecing through the notice below with my renowned ignorance, it occurs to me 
to ask: does the brain/ cerebellum demonstrate as much general intelligence 
and flexibility  in its movements as in its consciously directed thinking? 
...  In its ability to vary muscle coordination patterns ( structural 
alignments) to achieve the same motions? (there is no one-to-one 
correspondence between a desired movement goal, limb motions, or  muscle 
activity)? (Its capacity, for example, to shift around body weight while 
standing in a given position, in order to ease pressures, or to 
automatically adjust muscular coordination for walking, say, when a foot is 
injured, [without any history of such an injury]). ... In its ability to 
improvise new muscle coordination patterns etc to create new movements, (and 
move so elegantly through unpredictable and  dynamic environments)? (Its 
capacity for example to immediately contort itself strangely to catch a 
plate falling at a strange angle,   or to writhe every which way to squeeze 
out of tight corners).


[Are there any good/standard terms BTW for this flexibility of motor 
patterns? Multimuscularity?]


Do any AGI's demonstrate any comparable flexibility in trying to solve 
problems? This perhaps comes down to Minsky's idea that  an AGI should be 
able to switch between different ways to think - or perhaps one can use 
the word faculties. Are there any systems that can, say, switch flexibly 
between different kinds of logic (PLN/NARS say) when one doesn't work? Or 
between logic, language, visualisation, geometry etc  - to solve the same 
problem?


Shouldn't this be a foundational requirement for an AGI - the ability to 
switch between faculties/ modalities in solving intellectual problems, as 
easily as the body switches between muscle groups in solving motor problems 
(and as the brain itself switches)? The capacity to have its wits about 
it?


[I get almost 0, googling for multimodal AI].


   *** Redwood Seminar - TODAY ***

 Dimensional Reduction in Motor Patterns for Balance Control

   Lena H. Ting
   Department of Biomedical Engineering, Emory University
  and Georgia Institute of Technology,
and Fall 2008 Visiting Miller Professor

   Wednesday, Sept. 24 at 12:00
 508-20 Evans Hall

How do humans and animals move so elegantly through unpredictable and
dynamic environments? And why does this question continue to pose
such a challenge? We have a wealth of data on the action of neurons,
muscles, and limbs during a wide variety of motor behaviors, yet
these data are difficult to interpret, as there is no one-to-one
correspondence between a desired movement goal, limb motions, or
muscle activity. Using combined experimental and computational
approaches, we are teasing apart the neural and biomechanical
influences on muscle coordination of during standing balance control
in cats and humans. Our work demonstrates that variability in motor
patterns both within and across subjects during balance control in
humans and animals can be characterized by a low-dimensional set of
parameters related to abstract, task-level variables. Temporal
patterns of muscle activation across the body can be characterized by
a 4-parameter, delayed-feedback model on center-of-mass kinematic
variables. Changes in muscle activity that occur following large-
fiber sensory-loss in cats, as well as during motor adaptation in
humans, appear to be constrained within the low-dimensional parameter
space defined by the feedback model. Moreover, well-adapted responses
to perturbations are similar to those predicted by an optimal
tradeoff between mechanical stability and energetic expenditure.
Spatial patterns of muscle activation can also be characterized by a
small set of muscle synergies (identified using non-negative matrix
factorization) that are like motor building blocks, defining
characteristic patterns of activation across multiple muscles. We
hypothesize that each muscle synergy performs a task-level function,
thereby providing a mechanism by which task-level motor intentions
are translated into detailed, low-level muscle activation patterns.
We demonstrate that a small set of muscle synergies can account for
trial-by-trial variability in motor patterns across a wide range of
balance conditions. Further, muscle activity and forces during
balance control in novel postural configurations are best predicted
by minimizing the activity of a few muscle synergies rather than the
activity of individual muscles. Muscle synergies may represent a
sparse motor code, organizing muscles to solve an “inverse binding
problem” for motor outputs. We propose that such an organization
facilitates fast motor adaptation while concurrently imposing
constraints on the structure and energetic efficiency of motor
patterns used during motor 
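
[A minimal sketch of the muscle-synergy extraction the notice describes - factoring muscle activity into a small set of synergies with non-negative matrix factorization. Illustration only, assuming NumPy and scikit-learn are available; emg_data is a random stand-in, and names such as n_synergies are invented rather than taken from the lab's code:]

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_samples = 16, 500
emg_data = rng.random((n_muscles, n_samples))    # stand-in for rectified, smoothed EMG

n_synergies = 4                                  # the "small set of muscle synergies"
model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
W = model.fit_transform(emg_data)                # muscles x synergies: spatial patterns
H = model.components_                            # synergies x time: activation coefficients

# How much of the signal the low-dimensional synergy set accounts for.
reconstruction = W @ H
vaf = 1 - np.linalg.norm(emg_data - reconstruction) ** 2 / np.linalg.norm(emg_data) ** 2
print(f"{n_synergies} synergies account for {vaf:.1%} of the variance")

[On real EMG one would typically choose the number of synergies as the smallest set whose reconstruction clears a variance-accounted-for threshold.]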

[agi] Call yourself mathematicians? [O/T]

2008-09-23 Thread Mike Tintner

So can *you* understand credit default swaps?

Here's the scary part of today's testimony everyone seems to have missed: 
SEC chairman Chris Cox's statement that the Credit Default Swap (CDS) market 
is completely unregulated. Its size? Somewhere in the $50 TRILLION 
range.







Re: [agi] Call yourself mathematicians? [O/T]

2008-09-23 Thread Mike Tintner
Ben,

Are CDS significantly complicated then - as an awful lot of professional, 
highly intelligent people are claiming?
  So can *you* understand credit default swaps?

  Yes I can, having a PhD in math and having studied a moderate amount of 
mathematical finance ...






Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-20 Thread Mike Tintner
Steve:
If I were selling a technique like Buzan then I would agree. However, someone 
selling a tool to merge ALL techniques is in a different situation, with a 
knowledge engine to sell.

The difference AFAICT is that Buzan had an *idea* - don't organize your thoughts about a subject in random order, or lists, or tables or other old structures etc.; organize them like a map/tree on a page so that you can oversee them. Not a big idea, but an idea, out of which he's made money, and which clearly appeals to many.

If you have a distinctive idea, which you may well have, I've missed it and you're not repeating it. A tool to merge all techniques is a goal, not an idea. You have to show me that you have an idea - some new insight into general system principles applying to, say, repair. And if you are to do focus groups, you will also have to have a new idea to show them and test on them.





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner


Pei: In a broad sense, formal logic is nothing but
domain-independent and justifiable data manipulation schemes. I
haven't seen any argument for why AI cannot be achieved by
implementing that

Have you provided a single argument as to how logic *can* achieve AI - or 
to be more precise, Artificial General Intelligence, and the crossing of 
domains? [See attached post to Matt]


The line of argument above is classically indirect (and less than logical?). 
It's comparable to:


SHE:  Have you been unfaithful to me?
HE:  Why would I be unfaithful to you?

SHE: You've been unfaithful to me, haven't you?
HE: What possible reason have you for thinking I've been unfaithful?

The task you should by now have achieved is providing a direct argument why 
AGI *can* be achieved by your logic, not expecting others to show that it 
can't be.


(And can you provide an example of a single surprising metaphor or analogy 
that has ever been derived logically? Jiri said he could - but didn't.)








Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner

  Ben: Mike:
  (And can you provide an example of a single surprising metaphor or analogy 
that has ever been derived logically? Jiri said he could - but didn't.)


  It's a bad question -- one could derive surprising metaphors or analogies by 
random search, and that wouldn't prove anything useful about the AGI potential 
of random search ...

  Ben,

  When has random search produced surprising metaphors ? And how did or would 
the system know that it has been done - how would it be able to distinguish 
valid from invalid metaphors, and surprising from unsurprising ones?

  You have just put forward, I suggest, a hypothetical/false and evasive 
argument.

  Your task, as Pei's, is surely to provide an argument, or some evidence, as 
to how the logical system you use can lead in any way to the crossing/ 
connection of previously uncrossed/unconnected domains - the central task and 
problem of  AGI.   Surprising metaphors and analogies are just two examples of 
such crossing of domains. (And jokes another)

  You have effectively tried to argue  via the (I suggest) false random search 
example, that it is impossible to provide such an argument..

  The truth is - I'm betting - that, you're just making excuses -   neither you 
nor Pei have ever actually proposed an argument as to how logic can solve the 
problem of AGI and, after all these years, simply don't have one. If you have 
or do, please link me.

  P.S. The counterargument is v. simple. A connection of domains via 
metaphor/analogy or any other means is surprising if it does not follow from 
any known premises and  rules. There were no known premises and rules for Matt 
to connect altimeters and the measurement of progress, or, if you remember my 
visual pun, for connecting the head of a clarinet and the head of a swan. Logic 
depends on inferences from known premises and rules. Logic is therefore quite 
incapable of - and has always been expressly prohibited from - making 
surprising connections (and therefore solving AGI). It is dedicated to the 
maintenance not the breaking of rules.

  As for Logic, its syllogisms and the majority of its other precepts are of 
avail rather in the communication of what we already know, or... even in 
speaking without judgment of things of which we are ignorant, than in the 
investigation of the unknown.
  Descartes

  If I and Descartes are right - and there is every reason to think so, (incl. 
the odd million, logically inexplicable metaphors not to mention many millions 
of logically inexplicable jokes)  - you surely should be addressing this matter 
urgently, not evading it..

  P.P.S. You should also bear in mind that a vast amount of jokes (which 
involve the surprising crossing of domains) explicitly depend on ILLOGICALITY. 
Take the classic Jewish joke about the woman who, told that her friend's son 
has the psychological problem of an Oedipus Complex, says:
  Oedipus Schmoedipus, what does it matter as long as he loves his mother? 
And your logical explanation is..?




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Ben,

Not one metaphor below works.

You have in effect accepted the task of providing a philosophy and explanation 
of your AGI and your logic - you have produced a great deal of such stuff 
(quite correctly). But none of it includes the slightest explanation of how 
logic can produce AGI - or, to use your favourite metaphor, how the plane will 
take off. I don't know the history of the Wright brothers, but I'll confidently 
bet that they had at least an idea or two, from early on, of how and why their 
contraption would fly. They didn't entirely wing it.

Mike,

I understand that my task is to create an AGI system, and I'm working on it 
...

The fact that my in-development, partial AGI system has not yet demonstrated 
advanced intelligence, does not imply that it will not do so once completed.

No, my AGI system has not yet discovered surprising metaphors, because it is 
still at an early stage of development.  So what.  An airplane not yet fully 
constructed doesn't fly anywhere either.

My point was that asking whether a certain type of software system has ever 
produced a surprising metaphor -- is not a very interesting question.  I am 
quite sure that the chatbot MegaHAL has produced many surprising metaphors.  
For instance, see his utterances on

http://megahal.alioth.debian.org/Classic.html

including

AMAZING GRACE, HOW SWEET THE SOUND OF ONE OR MORE NUMBERS REPRESENTED IN 
DIGITAL FORM. 

HAL IS A CRAZY COW WHEN IT SINGS HALLELUJA 

LIFE'S BUT A GREEN DUCK WITH SOY SAUCE 

CHESS IS A FUN SPORT, WHEN PLAYED WITH SHOT GUNS. 

KEN KESEY WROTE ONE FLEW OVER THE CENTER OF THE CUCKOLDED LIZARD MAN, WHO 
STRAYED FROM HIS MISTAKES WHEN HE IS A MADEUP WORD.

COWS FLY LIKE CLOUDS BUT THEY ARE NEVER COMPLETELY SUCCESSFUL

JESUS IS THE BEST RADIO PRODUCER IN THE BEANS.

MegaHAL is kinda creative and poetic, and he does generate some funky and 
surprising metaphors ...  but alas he is not an AGI...

-- Ben
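
[For concreteness, a much-simplified sketch of the statistical generation behind chatbots of MegaHAL's kind. MegaHAL itself trains higher-order Markov models forward and backward from a keyword; this toy version is only a first-order chain over a tiny corpus built from the utterances quoted above, so any surprising juxtapositions come from nothing more than chains crossing sentences at shared words:]

import random
from collections import defaultdict

corpus = [
    "life is but a green duck with soy sauce",
    "chess is a fun sport when played with shot guns",
    "cows fly like clouds but they are never completely successful",
]

# Bigram table: word -> possible next words (duplicates preserve corpus frequencies).
bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)

def babble(seed, max_len=12):
    # Random-walk the table; juxtapositions arise where chains cross sentences.
    out = [seed]
    while len(out) < max_len and bigrams[out[-1]]:
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

random.seed(2)
print(babble("life"))
print(babble("chess"))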




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Ben, Just to be clear, when I said "no argument re how logic will produce AGI", I meant, of course, as per the previous posts, "how logic will [surprisingly] cross domains" etc. That, for me, is the defining characteristic of AGI. All the rest is narrow AI.




Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-19 Thread Mike Tintner
Steve: question: Why bother writing a book, when a program is a comparable effort that is worth MUCH more?

Well, because when you do just state basic principles - as you constructively 
started to do - I think you'll find that people can't even agree about those - 
any more than they can agree about, say, the principles of self-help. If they 
can - if you can state some general systems principles that gain acceptance -  
then you have the basis for your program, and it'll cost you a helluva lot less 
effort.




[agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner
[You'll note that arguably the single greatest influence on people's thoughts about AGI here is Google - basically Google search - and that still means, to most, text search. However, video search and other kinds of image search [along with online video broadcasting] are already starting to transform the way we think about the world in an equally powerful way - and will completely transform thinking about AGI. This is from the Google blog].

The future of online video 
9/16/2008 06:25:00 AM 
The Internet has had an enormous impact on people's lives around the world in 
the ten years since Google's founding. It has changed politics, entertainment, 
culture, business, health care, the environment and just about every other 
topic you can think of. Which got us to thinking, what's going to happen in the 
next ten years? How will this phenomenal technology evolve, how will we adapt, 
and (more importantly) how will it adapt to us? We asked ten of our top experts 
this very question, and during September (our 10th anniversary month) we are 
presenting their responses. As computer scientist Alan Kay has famously 
observed, the best way to predict the future is to invent it, so we will be 
doing our best to make good on our experts' words every day. - Karen Wickre and 
Alan Eagle, series editors

Ten years ago the world of online video was little more than an idea. It was 
used mostly by professionals like doctors or lawyers in limited and closed 
settings. Connections were slow, bandwidth was limited, and video gear was 
expensive and bulky. There were many false starts and outlandish promises over 
the years about the emergence of online video. It was really the dynamic growth 
of the Internet (in terms of adoption, speed and ubiquity) that helped to spur 
the idea that online video - millions of people around the world shooting it, 
uploading it, viewing it via broadband - was even possible.

Today, there are thousands of different video sites and services. In fact it's 
getting to be unusual not to find a video component on a news, entertainment or 
information website. And in less than three years, YouTube has united hundreds 
of millions of people who create, share, and watch video online. What used to 
be a gap between professional entertainment companies and home movie buffs 
has disappeared. Everyone from major broadcasters and networks to vloggers and 
grandmas are taking to video to capture events, memories, stories, and much 
more in real time.

Today, 13 hours of video are uploaded to YouTube every minute, and we believe 
the volume will continue to grow exponentially. Our goal is to allow every 
person on the planet to participate by making the upload process as simple as 
placing a phone call. This new video content will be available on any screen - 
in your living room, or on your device in your pocket. YouTube and other sites 
will bring together all the diverse media which matters to you, from videos of 
family and friends to news, music, sports, cooking and much, much more.

In ten years, we believe that online video broadcasting will be the most 
ubiquitous and accessible form of communication. The tools for video recording 
will continue to become smaller and more affordable. Personal media devices 
will be universal and interconnected. Even more people will have the 
opportunity to record and share even more video with a small group of friends 
or everyone around the world.

Over the next decade, people will be at the center of their video and media 
experience. More and more consumers will become creators. We will continue to 
help give people unlimited options and access to information, and the world 
will be a smaller place.

Posted by Chad Hurley, CEO and Co-Founder, YouTube




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner

  Mike, Google has had basically no impact on the AGI thinking of myself or 95% 
of the other serious AGI researchers I know..

  When did you start thinking about creating an online virtual AGI?






Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner

  Mike, Google has had basically no impact on the AGI thinking of myself or 95% 
of the other serious AGI researchers I know...


  Ben,

  Come again. Your thinking about a superAGI, and AGI takeoff, is not TOTALLY 
dependent on Google? You would stlll argue that a superAGI is possible WITHOUT 
access to the information resources of Google? 

  I suggest that you have made a blind claim above - and a classic illustration 
of McLuhan's argument that most people, including intellectuals, do tend to be 
blind to how the media they use massively shape their thinking about the world 
- and reshape their nervous system. 




Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-19 Thread Mike Tintner
Steve:
Thanks for wringing my thoughts out. Can you twist a little tighter?!

Steve,

A v. loose practical analogy is mindmaps - it was obviously better for Buzan to 
develop a sub-discipline/technique 1st, and a program later.

What you don't understand, I think, in all your reasoning about repair is that there is probably no principle - however obvious it seems to you - that will not be totally questioned and contradicted, and reasonably so, by someone else. 

The proof is in the pudding. Get yourself a set of principles together, and try 
them out on appropriately interested parties - some of your potential 
audience/customers - *before* you go to the trouble of programming. That's 
obviously good technological/business practice. Do some market research. I 
think you'll learn a lot.




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner
Ben: I would not even know about AI had I never encountered paper, yet the 
properties of paper have really not been inspirational in my AGI design 
efforts...

Your unconscious keeps talking to you. It is precisely paper that mainly shapes 
your thinking about AI. Paper has been the defining medium of literate 
civilisation. And what characterises all literate forms is nice, discrete, 
static, fragmented, crystallised units on the page.  Whether linguistic, 
logical, or mathematical. Words, letters and numbers. That was uni-media 
civilisation.

That's the main reason why you think logic, maths and language are all you 
really need for intelligence - paper.

The defining medium now is the screen. And on a screen, everything either 
changes or is changeable. Fluid. Words can become pictures. And pictures, if 
they're video, can move and talk. And you can see things whole and complicated 
, and not just in simplified,  verbal/symbolic pieces. This is multi-media 
civilisation.

As video becomes as plentiful and cheap as paper over the next 10 years, the literary/paper prejudices that you have inherited from Plato will be dissolved. (Narrow AI is crystallised intelligence, GI is fluid intelligence. Betcha that after fuzzy programming, you will soon see some form of fluid (or bio-logical) programming.)

The slogan for the next decade is - you ain't seen nothing yet.










Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Mike Tintner
Steve: View #2 (mine, stated from your approximate viewpoint) is that simple programs (like Dr. Eliza) have in the past and will in the future do things that people aren't good at. This includes tasks that encroach on intelligence, e.g. modeling complex phenomena and refining designs.

Steve,

In principle, I'm all for the idea that I think you (and perhaps Bryan) have 
expressed of a GI Assistant - some program that could be of general 
assistance to humans dealing with similar problems across many domains. A 
diagnostics expert, perhaps, that could help analyse breakdowns in say, the 
human body, a car or any of many other machines, a building or civil structure, 
etc. etc. And it's certainly an idea worth exploring.

 But I have yet to see any evidence that it is any more viable than a proper 
AGI - because, I suspect, it will run up against the same problems of 
generalizing -  e.g. though breakdowns may be v. similar in many different 
kinds of machines, technological and natural, they will also each have their 
own special character.

If you are serious about any such project, it might be better to develop it first as an intellectual discipline rather than a program, to test its viability - perhaps what it really comes down to is a form of systems thinking or science.







Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner

TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang

ABSTRACT: Case-by-case Problem Solving is an approach in which the
system solves the current occurrence of a problem instance by taking
the available knowledge into consideration, under the restriction of
available resources. It is different from the traditional Algorithmic
Problem Solving in which the system applies a given algorithm to each
problem instance. Case-by-case Problem Solving is suitable for
situations where the system has no applicable algorithm for a problem.
This approach gives the system flexibility, originality, and
scalability, at the cost of predictability. This paper introduces the
basic notion of case-by-case problem solving, as well as its most
recent implementation in NARS, an AGI project.



Philosophically, this is v. interesting and seems to be breaking important 
ground. It's  moving in the direction I've long been urging - get rid of 
algorithms; they just don't apply to GI problems.


But you seem to be reinventing the term for the wheel. There is an extensive literature, including AI stuff, on wicked, ill-structured problems (and even nonprogrammed decisionmaking), which won't, I suggest, be replaced by case-by-case PS. These are well-established terms. You similarly seemed to be unaware of the v. common distinction between convergent and divergent problem-solving.


As usual, you don't give examples of problems that you're applying your method to.


Consequently, it's difficult to know how to interpret:

Do not define a problem as a class and use the same method to solve all of its instances. Instead, treat each problem instance as a problem on its own, and solve it in a case-by-case manner, according to the current (knowledge/resource) situation in the system.

I would argue that you *must* define every problem, however wicked, as a 
class, even if only v. roughly, in order to be able to solve it at all. If, 
for example, the problem is how to physically explore a totally new kind of 
territory, you must know that it involves some kind of exploration/travel. 
But you may then have to radically redefine travel - from say walking to 
swimming/ crawling/ swinging on vines etc. etc. or walking with one foot up, 
one foot on the level.


Typically, some form of creative particular example of the general kind of 
problem-and-solution may be required -  e.g. a strange form of 
walking/crawling. I would v. much like to know  how you propose that logic 
can achieve that. 
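
For concreteness, here is a minimal toy sketch (Python, purely illustrative - emphatically not NARS, and the helper names are my own) of the contrast as I read Pei's abstract: an algorithmic solver applies one fixed procedure to every instance of a problem class, while a case-by-case solver picks what to do for this particular instance from whatever knowledge and resources happen to be available at the moment.

def algorithmic_solve(instance):
    # One fixed procedure for every instance of the class "sort a list".
    return sorted(instance)

def case_by_case_solve(instance, knowledge, budget):
    # No fixed procedure: try whichever known methods seem applicable to this
    # particular instance, stopping when the resource budget runs out.
    for method in knowledge.get("methods", []):
        if budget <= 0:
            break
        budget -= 1
        result = method(instance)
        if result is not None:   # accept the first adequate answer
            return result
    return None                  # may fail, or answer differently next time

knowledge = {"methods": [
    lambda xs: sorted(xs) if all(isinstance(x, int) for x in xs) else None,
    lambda xs: list(xs),         # weak fallback: hand back what we have
]}
print(algorithmic_solve([3, 1, 2]))                          # [1, 2, 3]
print(case_by_case_solve([3, 1, "a"], knowledge, budget=2))  # [3, 1, 'a']

The same call can give different answers as the knowledge and resource budget change - which is the predictability cost the abstract mentions.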







Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Ben,

I'm only saying that CPS seems to be loosely equivalent to wicked, ill-structured problem-solving (the reference to convergent/divergent (or crystallised vs fluid) etc. is merely to point out a common distinction in psychology between two kinds of intelligence that Pei wasn't aware of in the past - which is actually loosely equivalent to the distinction between narrow AI and general AI problem-solving).

In the end, what Pei is/isn't aware of in terms of general knowledge, doesn't 
matter much -  don't you think that his attempt to do without algorithms IS v. 
important? And don't you think any such attempt would be better off  referring 
explicitly to the literature on wicked, ill-structured problems?

I don't think that pointing all this out is silly - this (a non-algorithmic approach to CPS/wicked/whatever) is by far the most important thing currently being discussed here - and potentially, if properly developed, revolutionary. Worth getting excited about, no?

(It would also be helpful BTW to discuss the wicked literature because it 
actually has abundant examples of wicked problems - and those, you must admit, 
are rather hard to come by here ).


Ben: TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang



   


But you seem to be reinventing the term for wheel. There is an extensive 
literature, including AI stuff, on wicked, ill-structured problems,  (and 
even nonprogrammed decisionmaking  which won't, I suggest, be replaced by 
case-by-case PS. These are well-established terms.  You similarly seemed to 
be unaware of the v. common distinction between convergent  divergent 
problem-solving.


  Mike, I have to say I find this mode of discussion fairly silly..

  Pei has a rather comprehensive knowledge of AI and a strong knowledge of 
cog-sci as well.   It is obviously not the case that he is unaware of these 
terms and ideas you are referring to.

  Obviously, what he means by case-by-case problem solving is NOT the same as 
nonprogrammed decisionmaking nor divergent problem-solving.

  In his paper, he is presenting a point of view, not seeking to compare this 
point of view to the whole corpus of literature and ideas that he has absorbed 
during his lifetime.

  I happen not to fully agree with Pei's thinking on these topics (though I like much of it), but I know Pei well enough to know that those places where his thinking diverges from mine are *not* due to ignorance of the literature on his part...






Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Ben,

Ah well, then I'm confused. And you may be right - I would just like 
clarification.

You see, what you have just said is consistent with my understanding of Pei up till now. He explicitly called his approach in the past nonalgorithmic while acknowledging that others wouldn't consider it so. It was only nonalgorithmic in the sense that the algorithm or problem-solving procedure had the potential to keep changing every time - but there was still (as I think we'd both agree) a definite procedure/algorithm each time.

This current paper seems to represent a significant departure from that. There 
doesn't seem to be an algorithm or procedure to start with, and it does seem to 
represent a challenge to your conception of AGI design. But I may have 
misunderstood (which is easy if there are no examples :) ) - and perhaps you 
or, better still, Pei, would care to clarify.

  Ben:

  A key point IMO is that: problem-solving that is non-algorithmic (in Pei's 
sense) at one level (the level of the particular problem being solved) may 
still be algorithmic at a different level (for instance, NARS itself is a set 
of algorithms).  

  So, to me, calling NARS problem-solving non-algorithmic is a bit odd... 
though not incorrect according to the definitions Pei lays out...

  AGI design then **is** about designing algorithms (such as the NARS 
algorithms) that enable an AI system to solve problems in both algorithmic and 
non-algorithmic ways...

  ben


  On Thu, Sep 18, 2008 at 8:51 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Ben,

I'm only saying that CPS seems to be loosely equivalent to wicked, 
ill-structured problem-solving, (the reference to convergent/divergent (or 
crystallised vs fluid) etc is merely to point out a common distinction in 
psychology between two kinds of intelligence that Pei wasn't aware of in the 
past - which is actually loosely equivalent to the distinction between narrow 
AI and general AI problemsolving).

In the end, what Pei is/isn't aware of in terms of general knowledge, 
doesn't matter much -  don't you think that his attempt to do without 
algorithms IS v. important? And don't you think any such attempt would be 
better off  referring explicitly to the literature on wicked, ill-structured 
problems?

I don't think that pointing all this out is silly - this (a non-algorithmic 
approach to CPS/wicked/whatever) is by far the most important thing currently 
being discussed here - and potentially, if properly developed, revolutionary.. 
Worth getting excited about, no?

(It would also be helpful BTW to discuss the wicked literature because it 
actually has abundant examples of wicked problems - and those, you must admit, 
are rather hard to come by here ).


Ben: TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang



   


But you seem to be reinventing the term for wheel. There is an 
extensive literature, including AI stuff, on wicked, ill-structured problems, 
 (and even nonprogrammed decisionmaking  which won't, I suggest, be replaced 
by case-by-case PS. These are well-established terms.  You similarly seemed 
to be unaware of the v. common distinction between convergent  divergent 
problem-solving.


  Mike, I have to say I find this mode of discussion fairly silly..

  Pei has a rather comprehensive knowledge of AI and a strong knowledge of 
cog-sci as well.   It is obviously not the case that he is unaware of these 
terms and ideas you are referring to.

  Obviously, what he means by case-by-case problem solving is NOT the 
same as nonprogrammed decisionmaking nor divergent problem-solving.

  In his paper, he is presenting a point of view, not seeking to compare 
this point of view to the whole corpus of literature and ideas that he has 
absorbed during his lifetime.

  I happen not to fully agree with Pei's thinking on these topics (though I 
like much of it), but I know Pei well enough to know that those. places where 
his thinking diverges from mine, are *not* due to ignorance of the literature 
on his part...








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome  - Dr Samuel Johnson









Re: [agi] Case-by-case Problem Solving PS

2008-09-18 Thread Mike Tintner
Ben,

It's hard to resist my interpretation here - that Pei does sound as if he is 
being truly non-algorithmic. Just look at the opening abstract sentences. 
(However, I have no wish to be pedantic - I'll accept whatever you guys say you 
mean).

  Case-by-case Problem Solving is an approach in which the system solves the current occurrence of a problem instance by taking the available knowledge into consideration, under the restriction of available resources. It is different from the traditional Algorithmic Problem Solving in which the system applies a given algorithm to each problem instance. Case-by-case Problem Solving is suitable for situations where the system has no applicable algorithm for a problem.





Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Ben,

Well then, so is S. Kauffman's language unclear. I'll go with his definition in Chap. 12 of Reinventing the Sacred [all about algorithms and their impossibility for solving a whole string of human problems]:

What is an algorithm? The quick definition is an 'effective procedure to calculate a result.' A computer program is an algorithm, and so is long division.

See his explanation of how he solved the wicked problem of how to hide a 
computer cable - Is there an algorithmic way to bound the frame of the 
features of my table, computer, cord, plug and the rest of the universe, such 
that I could algorithmically find a solution to my problem? No. But solve it I 
did!

Ben, please listen carefully to the following :).  I really suspect that all 
the stuff I'm saying and others are writing about wicked problems is going in 
one ear and out the other. You hear it and know it, perhaps, but you really 
don't register it.

If you did register it, you would know that anyone who deals in psychology with wicked problems OBJECTS to the IQ test as a test of intelligence - as only dealing with convergent problem-solving, and not divergent/wicked/ill-structured problem-solving. It's a major issue. Pei clearly in the past didn't know much about this area of psychology, and I wonder whether you really do. (You don't have to know everything - it's not a crime if you don't - it's just that you would be well advised to familiarise yourself with it all.)

There is no effective procedure, period, for dealing successfully with wicked, 
ill-structured, one-off (case-by-case) problems. There is for IQ tests and 
other examples of narrow AI.

(And what do you think Pei *does* mean?)


  Ben:
  Your language is unclear

  Could you define precisely what you mean by an algorithm

  Also, could you give an example of a computer program, that can be run on a digital computer, that does not embody an algorithm according to your definition?

  thx
  ben



  On Thu, Sep 18, 2008 at 9:15 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Ben,

Ah well, then I'm confused. And you may be right - I would just like 
clarification.

You see,  what you have just said is consistent with my understanding of 
Pei up till now. He explicitly called his approach in the past nonalgorithmic 
while acknowledging that others wouldn't consider it so. It was only 
nonalgorithmic in the sense that the algortihm or problemsolving procedure 
had the potential to keep changing every time - but there was still (as I think 
we'd both agree) a definite procedure/algorithm each time.

This current paper seems to represent a significant departure from that. 
There doesn't seem to be an algorithm or procedure to start with, and it does 
seem to represent a challenge to your conception of AGI design. But I may have 
misunderstood (which is easy if there are no examples :) ) - and perhaps you 
or, better still, Pei, would care to clarify.

  Ben:

  A key point IMO is that: problem-solving that is non-algorithmic (in 
Pei's sense) at one level (the level of the particular problem being solved) 
may still be algorithmic at a different level (for instance, NARS itself is a 
set of algorithms).  

  So, to me, calling NARS problem-solving non-algorithmic is a bit odd... 
though not incorrect according to the definitions Pei lays out...

  AGI design then **is** about designing algorithms (such as the NARS 
algorithms) that enable an AI system to solve problems in both algorithmic and 
non-algorithmic ways...

  ben


  On Thu, Sep 18, 2008 at 8:51 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Ben,

I'm only saying that CPS seems to be loosely equivalent to wicked, 
ill-structured problem-solving, (the reference to convergent/divergent (or 
crystallised vs fluid) etc is merely to point out a common distinction in 
psychology between two kinds of intelligence that Pei wasn't aware of in the 
past - which is actually loosely equivalent to the distinction between narrow 
AI and general AI problemsolving).

In the end, what Pei is/isn't aware of in terms of general knowledge, 
doesn't matter much -  don't you think that his attempt to do without 
algorithms IS v. important? And don't you think any such attempt would be 
better off  referring explicitly to the literature on wicked, ill-structured 
problems?

I don't think that pointing all this out is silly - this (a 
non-algorithmic approach to CPS/wicked/whatever) is by far the most important 
thing currently being discussed here - and potentially, if properly developed, 
revolutionary.. Worth getting excited about, no?

(It would also be helpful BTW to discuss the wicked literature 
because it actually has abundant examples of wicked problems - and those, you 
must admit, are rather hard to come by here ).


Ben: TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang



   


But you

Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Matt,

Thanks for the reference. But it's still somewhat ambiguous. I could somewhat similarly outline a non-procedure procedure which might include steps like 'Think about the problem', then 'Do something, anything - whatever first comes to mind', and 'If that doesn't work, try something else'.

But as I said, I'm only seeking clarification, and a distinction between CPS and explicitly *Algorithmic* PS surely does require clarification.

  Matt:
Actually, CPS doesn't mean solving problems without algorithms. CPS is 
itself an algorithm, as described on pages 7-8 of Pei's paper. However, as I 
mentioned, I would be more convinced if there were some experimental results 
showing that it actually worked.






Re: [agi] self organization

2008-09-15 Thread Mike Tintner


Terren: I send this along because it's a great example of how systems that self-organize can result in structures and dynamics that are more complex and efficient than anything we can purposefully design. The applicability to the realm of designed intelligence is obvious.




Vlad: ... Even if there is no top manager of the design and production process, even if nobody holds the whole process in one mind, it is a result of application of optimization pressure of individual people. I don't see how ability to create economically driven processes fundamentally differs from complicated engineering projects like putting a man on the moon of a Boeing.



The difficulty here - no? - is that we really don't as a culture have the 
appropriate life paradigm yet to think about all this.


For instance, self-organization for living organisms, seems inevitably to 
entail:


1) a self,  which is

2) an integrated brain-body unit, (has organic integrity)

All our machines are basically separate parts yoked together to fit the 
external plan of a designer. They don't have a self, or any real integrity.


I suspect we are going to have to wait for the first artificial organisms to 
really start to understand the differences between living organisms and dead 
machines.


This v. much affects intelligence. In human brains, thinking is very much a 
self-directed process, and that is essential to deal with the kinds of 
problems that characterise GI. 







Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Mike Tintner

Jiri and Matt et al,

I'm getting v. confident about the approach I've just barely begun to 
outline.  Let's call it realistics - the title for a new, foundational 
branch of metacognition, that will oversee all forms of information, incl. 
esp. language, logic, and maths, and also all image forms, and the whole 
sphere of semiotics.


The basic premise:

to understand a piece of information and its information objects (e.g. words) is to realise (or know) how they refer to real objects in the real world (and, ideally, and often necessarily, to be able to point to and engage with those real objects).


- this includes understanding/realising when they are unreal - when they 
do NOT refer directly to real objects, but for example to sur-real or 
metaphorical or abstract or non-existent objects


Realistics recognizes that understanding involves, you could say, 
object-ivity.


Complementarily,

to 'disunderstand' is to fail to see how information objects refer to real objects.


to be confused is not only to fail to see, but to be unsure *which* of the 
information objects in a piece of information do not refer to real objects 
(it's all a bit of a blur)


Bear in mind  that human information-processing involves an ENORMOUS amount 
of disunderstanding and confusion.


And a *major point* of this approach (to be explained on another occasion) 
is precisely that a great deal of the time people do not understand/realise 
*why* they do not understand/ are confused  - *why* they have such 
difficulty understanding genetics, atomic physics, philosophy, logic, maths, 
ethics, neuroscience etc. etc - just about every subject in the curriculum, 
academic or social - because, like virtual AGI-ers they fall into the trap 
of FAILING to refer the information to real objects. They do not try to 
realise what on earth is being talked about. And they even end up concluding 
(completely wrongly) that there is something wrong with their brain and its information-processing capacity, ending up with a totally unnecessary inferiority complex. (There will probably be v. few here, even at this exalted level of intelligence, who are not so affected).


(Realistics should enormously improve human understanding, and holds out the 
promise that no one will ever fail to understand any information/subject 
ever again for want of anything other than time and effort).


Now there is a LOT more to expand here [later]. But for now it immediately 
raises the obvious, and inevitable object-ion to any contradictory, 
unreal /artificial  approach to information and esp language 
processing/NLP such as you and many other AGIers are outlining.


How will you understand, and recognize, when information objects - e.g. language/words - are unreal?


e.g.
Turn yourself inside out.
Turn that block of wood inside out.
Turn around in a straight line.
What's inside is not more beautiful than what's on the outside
Drill down into Steve's logic.
Cars can hover just above the ground
The car flew into the wall.
The wall flew away.
Bush wants to liberalise sexual mores.
Truth and beauty are incompatible.

[all such statements obviously real/unreal/untrue/metaphorical in different 
and sometimes multiple simultaneous ways]


You might also ask yourself how you will, if your approach extends beyond 
language, know that any image or photo is unreal.


IOW how is any unreal approach to information processing (contradictory to 
mine) different from a putative logic that does *not* recognize truth or a 
maths that does *not* recognize equality/equations?




Mike,


The plane flew over the hill
The play is over


Using a formal language can help to avoid many of these issues.

But then the program must be able to tell what is in what or outside, 
what is behind/over etc.


The communication module in my experimental AGI design includes
several specialized editors, one of which is a Space Editor which
allows to use simple objects in a small nD sample-space to define
the meaning of terms like in, outside, above, under etc. The
goal is to define the meaning as simply as possible and the knowledge
can then be used in more complex scenes generated for problem solving
purposes.
Other editors:
Script Editor - for writing stories the system learns from.
Action Concept Editor - for learning about actions/verbs  related
roles/phases/changes.
Category Editor - for general categorization/grouping concepts.
Formula Editor - math stuff.
Interface Mapper - for teaching how to use tools (e.g. external software)
...
Some of those editors (probably including the Space Editor) will be
available only to privileged users. It's all RBAC-based. Only
lightweight 3D imagination - for performance reasons (our brains
cheat too), and no embodiment.. BTW I still have a lot to code
before making the system publicly accessible.
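
(For illustration only: a minimal sketch, under my own assumptions, of how a spatial term like "inside" might be pinned down over a toy sample-space of axis-aligned boxes, in the spirit of the Space Editor described above - this is not Jiri's actual design or code.)

def box(lo, hi):
    # A toy axis-aligned box, given as opposite corner points.
    return (tuple(lo), tuple(hi))

def inside(a, b):
    # "a is inside b" when a's extent lies within b's on every axis.
    (alo, ahi), (blo, bhi) = a, b
    return all(bl <= al and ah <= bh
               for al, ah, bl, bh in zip(alo, ahi, blo, bhi))

# A tiny sample scene acts as the "definition by example" such an editor might store.
scene = {"doll": box((1, 1, 1), (2, 2, 2)), "crate": box((0, 0, 0), (5, 5, 5))}
print(inside(scene["doll"], scene["crate"]))   # True
print(inside(scene["crate"], scene["doll"]))   # False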

To understand is .. in principle, ..to be able to go into the real world 
and point to the real objects/actions being referred to..


Not from my perspective.


Re: [agi] Artificial humor... P.S

2008-09-12 Thread Mike Tintner

Matt,

What are you being so tetchy about? The issue is what it takes for any agent, human or machine, to understand information.


You give me an extremely complicated and ultimately weird test/paper, which 
presupposes that machines, humans and everyone else can only exhibit, and be 
tested on, their thinking and understanding in an essentially Chinese room, 
insulated from the world.


I am questioning, and refuting, the entire assumption behind those extraordinarily woolly ideas of Turing (witness the endlessly convoluted discussions of his test on this group - which clearly people had great difficulty understanding, precisely because it is so woolly, when you try to pin down exactly what it's testing).


An agent understands information and information objects, IMO, if he can point to the real objects referred to in the real world, OUTSIDE any insulated room. (I am taking Searle one step further.) It is on his ability to use language to engage with the real world - fulfil commands/requests like 'where's the key?', 'what food is in the fridge?', 'is the room tidy?' (and progressively more general information objects) - that an agent's understanding must be tested.


That is consistent with every principle that you seem to like to invoke, of 
evolutionary fitness. Language and other forms of information exist 
primarily to enable humans to deal with real objects - and to survive  - in 
the real world,   and not in any virtual world, that academics and AGI-ers 
prefer to inhabit.


My special distinction, I think, is v. useful - the Chinese translator and 
AGI's  comprehend information/language - merely substituting symbols for 
other symbols. The agent who can use that language to deal with real 
objects, truly *understands* it.


This explanation is consistent with how humans actually fail to understand on innumerable occasions, and also how computers and would-be AGIs fail to understand - not just outside in the real world, but *inside* their rooms/virtual worlds. All language understanding collapses without real object/world engagement.


In case you are unaware how academics will go to quite extraordinary mental lengths to stay inside their rooms, see this famous passage which helped give birth to science, re natural philosophers who (with small modifications, like AGI-ers)


having sharp and strong wits, and abundance of leisure, ... as their persons
were shut up in the cells of monasteries and colleges, and knowing little 
history, either of nature or time, did out of no great quantity of matter, 
and infinite agitation of wit spin out unto those laborious webs of learning 
which are extant in their books. For the wit and mind of man, if it work 
upon matter, worketh according to the stuff; but if it work upon itself, as 
the spider worketh his web, then it is endless, and brings forth indeed 
cobwebs of learning, admirable for the fineness of thread and work, but of 
no substance or profit. Francis Bacon, The Advancement of Learning.


.
Matt:



To understand/realise is to be distinguished
from (I would argue) to comprehend statements.


How long are we going to go round and round with this? How do you know if 
a machine comprehends something?


Turing explained why he ducked the question in 1950. Because you really 
can't tell. http://www.loebner.net/Prizef/TuringArticle.html



-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] Artificial humor... P.S

2008-09-12 Thread Mike Tintner


Matt:  How are you going to understand the issues behind programming a 
computer for human intelligence if you have never programmed a computer?


Matt,

We simply have a big difference of opinion. I'm saying there is no way a computer [or agent, period] can understand language if it can't basically identify/*see* (and sense) the real objects - (and therefore doesn't know what) - it's talking about. Hence people say, when they understand at last: 'ah, now I see... now I see what you're talking about... now I get the picture.'


The issue of what faculties are needed to understand language (and be 
intelligent)  is not, *in the first instance,* a matter of programming.  I 
suggest you may have been v. uncharacteristically short in this exchange, 
because you may not like the starkness of the message. It is stark, but I 
believe it's the truth. 







Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner

Jiri,

Quick answer because in rush. Notice your 'if'... Which programs actually do understand any *general* concepts of orientation? SHRDLU, I will gladly bet, didn't... and neither do any others.


The v. word orientation indicates the reality that every picture has a 
point of view, and refers to an observer. And there is no physical way 
around that.


You have been seduced by an illusion - the illusion of the flat, printed 
page, existing in a timeless space. And you have accepted implicitly that 
there really is such a world - flatland - where geometry and geometrical 
operations take place, utterly independent of you the viewer and puppeteer, 
and the solid world of real objects to which they refer. It demonstrably 
isn't true.


Remove your eyes from the page and walk around in the world - your room, 
say. Hey, it's not flat...and neither are any of the objects in it. 
Triangular objects in the world are different from triangles on the page, 
fundamentally different.


But it  is so difficult to shed yourself of this illusion. You  need to look 
at the history of culture and realise that the imposition on the world/ 
environment of first geometrical figures, and then, more than a thousand 
years later,  the fixed point of view and projective geometry,  were - and 
remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They don't exist, 
Jiri. They're just one of many possible frameworks (albeit v useful)  to 
impose on the physical world. Nomadic tribes couldn't conceive of squares 
and enclosed spaces. Future generations will invent new frameworks.


Simple example of how persuasive the illusion is. I didn't understand until 
yesterday what the introduction of a fixed point of view really meant - it 
was that word fixed. What was the big deal? I couldn't understand. Isn't 
it a fact of life, almost?


Then it clicked. Your natural POV is mobile - your head/eyes keep moving - 
even when reading. It is an artificial invention to posit a fixed POV. And 
the geometric POV is doubly artificial, because it is one-eyed, no?, not 
stereoscopic. But once you get used to reading pages/screens you come to 
assume that an artificial fixed POV is *natural*.


[Stan Franklin was interested in a speculative paper suggesting that the evolutionary brain's stabilisation of vision (a software triumph, because organisms are so mobile) may have led to the development of consciousness.]


You have to understand the difference between 1) the page, or medium,  and 
2) the real world it depicts,  and 3) you, the observer, reading/looking at 
the page. Your idea of AGI is just one big page [or screen] that apparently 
exists in splendid self-contained isolation.


It's an illusion, and it just doesn't *work* vis-a-vis programs. Do you want to cling to excessive optimism and a simple POV, or do you want to try and grasp the admittedly complicated and more sophisticated reality?

.

Jiri: If you talk to a program about changing 3D scene and the program then

correctly answers questions about [basic] spatial relationships
between the objects then I would say it understands 3D. Of course the
program needs to work with a queriable 3D representation but it
doesn't need a body. I mean it doesn't need to be a real-world
robot, it doesn't need to associate self with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs to be the 3D-scene-aware and the scene may contain just
a few basic 3D objects (e.g. the Shrdlu stuff).








Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner


Jiri,

Clearly a limited 3D functionality is possible for a program such as you describe - as for SHRDLU. But what we're surely concerned with here is generality. So fine, start with a restricted world of, say, different kinds of kid's blocks and similar. But then the program must be able to tell what is in what or outside, what is behind/over etc. - and also what is moving towards or away from an object (it surely should be a mobile program) - and be able to move objects. My assumption is that even a relatively simple such general program wouldn't work - (I obviously haven't thought about this in any detail). It would be interesting to have the details about how SHRDLU broke down.


Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?


Mike,

Imagine a simple 3D scene with 2 different-size spheres. A simple program allows you to change positions of the spheres and it can answer the question 'Is the smaller sphere inside the bigger sphere?' [Yes|Partly|No]. I can write such a program in no time. Sure, it's extremely simple, but it deals with 3D, it demonstrates a certain level of 3D understanding without embodiment and there is no need to pass the orientation parameter to the query function. Note that the orientation is just a parameter. It doesn't represent a body and it can be added. Of course understanding all the real-world 3D concepts would take a lot more code and data than when playing with 3D toy-worlds, but in principle, it's possible to understand 3D without having a body.

Jiri
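
A minimal sketch of the kind of containment query Jiri describes above - toy Python under my own assumptions (two spheres, pure centre/radius geometry), not his actual program:

import math

def containment(small_centre, small_r, big_centre, big_r):
    # Is the smaller sphere inside the bigger one? Yes / Partly / No.
    d = math.dist(small_centre, big_centre)   # distance between centres
    if d + small_r <= big_r:
        return "Yes"      # entirely contained
    if d >= small_r + big_r:
        return "No"       # the spheres do not even touch
    return "Partly"       # overlap without full containment

print(containment((0, 0, 0), 1.0, (0, 0, 0), 3.0))    # Yes
print(containment((2.5, 0, 0), 1.0, (0, 0, 0), 3.0))  # Partly
print(containment((9, 0, 0), 1.0, (0, 0, 0), 3.0))    # No

As Jiri says, no body and no orientation parameter are needed for this particular query - which is exactly the point in dispute below.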

On Thu, Sep 11, 2008 at 11:24 AM, Mike Tintner [EMAIL PROTECTED] 
wrote:

Jiri,

Quick answer because in rush. Notice your if ... Which programs 
actually

do understand any *general* concepts of orientation? SHRDLU I will gladly
bet, didn't...and neither do any others.

The v. word orientation indicates the reality that every picture has a
point of view, and refers to an observer. And there is no physical way
around that.

You have been seduced by an illusion - the illusion of the flat, printed
page, existing in a timeless space. And you have accepted implicitly that
there really is such a world - flatland - where geometry and 
geometrical
operations take place, utterly independent of you the viewer and 
puppeteer,

and the solid world of real objects to which they refer. It demonstrably
isn't true.

Remove your eyes from the page and walk around in the world - your room,
say. Hey, it's not flat...and neither are any of the objects in it.
Triangular objects in the world are different from triangles on the page,
fundamentally different.

But it  is so difficult to shed yourself of this illusion. You  need to 
look

at the history of culture and realise that the imposition on the world/
environment of first geometrical figures, and then, more than a thousand
years later,  the fixed point of view and projective geometry,  were - 
and

remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They don't exist,
Jiri. They're just one of many possible frameworks (albeit v useful)  to
impose on the physical world. Nomadic tribes couldn't conceive of squares
and enclosed spaces. Future generations will invent new frameworks.

Simple example of how persuasive the illusion is. I didn't understand 
until
yesterday what the introduction of a fixed point of view really meant - 
it
was that word fixed. What was the big deal? I couldn't understand. 
Isn't

it a fact of life, almost?

Then it clicked. Your natural POV is mobile - your head/eyes keep 
moving -
even when reading. It is an artificial invention to posit a fixed POV. 
And
the geometric POV is doubly artificial, because it is one-eyed, no?, 
not

stereoscopic. But once you get used to reading pages/screens you come to
assume that an artificial fixed POV is *natural*.

[Stan Franklin was interested in a speculative paper suggesting that the
evolutionary brain's stabilisation of vision, (a  software triumph 
because
organisms are so mobile) may have led to the development of 
consciousness).


You have to understand the difference between 1) the page, or medium, 
and
2) the real world it depicts,  and 3) you, the observer, reading/looking 
at
the page. Your idea of AGI is just one big page [or screen] that 
apparently

exists in splendid self-contained isolation.

It's an illusion, and it just doesn't *work* vis-a-vis programs.  Do you
want to cling to excessive optimism and a simple POV or do you want to 
try

and grasp the admittedly complicated  more sophisticated reality?
.

Jiri: If you talk to a program about changing 3D scene and the program 
then


correctly answers questions about [basic] spatial relationships
between the objects then I would say it understands 3D. Of course the
program needs to work with a queriable 3D representation but it
doesn't need a body. I mean it doesn't need to be a real-world
robot, it doesn't need to associate self with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs

Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner

Matt,

Jeez, massive question :).

Let me 1st partly dodge it, by giving you an example of the difficulty of understanding, say, 'over', both in NLP terms and ultimately (because it will be the same more or less) in practical object recognition/movement terms - because I suspect none of you have done what I told you (naughty) and looked at Lakoff.


You will note the very different physical movements or positionings involved 
in:


The painting is over the mantle
The plane flew over the hill
Sam walked over the hill
Sam lives over the hill
The wall fell over
Sam turned the page over
She spread the cloth over the table.
The guards stood all over the hill
Look over my page
He went over the horizon
The line stretches over the yard
The board is over the hole

[not to mention]
The play is over
There are over a hundred
Do it over, but don't overdo it.

and there are many more.

See Lakoff for schema illustrations. Nearly all involve very different 
trajectories, physical relationships.
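
To make the polysemy concrete, here is a crude table-plus-check in Python - the schema labels are my own rough hand-assigned tags for a few of the sentences above, not Lakoff's actual inventory - showing why one fixed reading of "over" misses most cases:

OVER_SCHEMAS = {
    "The painting is over the mantle": "static position above",
    "The plane flew over the hill": "path above, no contact",
    "Sam walked over the hill": "path along the surface, with contact",
    "She spread the cloth over the table": "covering",
    "The wall fell over": "rotation about an edge",
    "The play is over": "non-spatial: completion",
}

def naive_over(sentence):
    # A single fixed reading of "over" - plainly wrong for most of the list.
    return "static position above"

for sentence, schema in OVER_SCHEMAS.items():
    mark = "ok  " if naive_over(sentence) == schema else "MISS"
    print(mark, sentence, "->", schema)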


That is why I'm confident that no program can handle that, but yes, Mark, I 
was putting forward a new idea (certainly to me) in the orientation 
framework - and doing no more than presenting a reasoned, but pretty 
ill-informed hypothesis. (And that is what I think this forum is for. And I 
will be delighted if you, or anyone else, will correct my 
overgeneralisations and errors).


Now a brief, rushed but, I suspect, massive, and new answer to your 
question - that I think, takes us, philosophically, way beyond the concept 
of grounding, which a lot of people are currently using for 
understanding.


To understand is to REALISE what [on earth, or in the [real] world] is 
being talked about. It is, in principle, and often in practice, to be able 
to go into the real world and point to the real objects/actions being 
referred to, (or realise that they are unreal/fantastic). So in terms of 
understanding a statement containing how something is over something else, 
it is to be able to go and point to the relevant objects in a scene, or, if 
possible, to recreate the physical events or relationship..


I believe that is actually how we *do* understand, how the brain does work, how a GI *must* work - and, if correct, it automatically moves us beyond virtual AGI. I shall hopefully return to this concept on further occasions - I believe it has enormous ramifications. There are many, many qualifications to be made, which I won't attempt now; nevertheless the basic principle holds - and will hold for the psychology of how humans understand or *don't* understand or get confused.


IOW not only must an AGI or any GI be embodied, it must also be directly and indirectly embedded in the world.


(Grounding is being currently interpreted in practice almost entirely from the embodied or agent's side - as referring to what goes on *inside* the agent's mind. Realisation involves complementarily defining intelligence from the out-side, in terms of its ability to deal with the environment/real world being-referred-to. BIG difference. Like between just using nature/heredity, on the one hand, and, on the other, also using nurture/environment to explain behaviour.)


I hope you realise what I've been saying :).




Matt:
Mike, your argument would be on firmer ground if you could distinguish 
between when a computer understands something and when it just reacts as 
if it understands. What is the test? Otherwise, you could always claim 
that a machine doesn't understand anything because only humans can do 
that.



-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 9/11/08, Mike Tintner [EMAIL PROTECTED] wrote:


From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Artificial humor
To: agi@v2.listbox.com
Date: Thursday, September 11, 2008, 1:31 PM
Jiri,

Clearly a limited 3d functionality is possible for a
program such as you
describe - as for SHRDLU. But what we're surely
concerned with here is
generality. So fine start with a restricted world of say
different kinds of
kid's blocks and similar. But then the program must be
able to tell what is
in what or outside, what is behind/over etc. -
and also what is moving
towards or away from an object, ( it surely should be a
mobile program) -
and be able to move objects. My assumption is that even a
relatively simple
such general program wouldn't work - (I obviously
haven't thought about this
in any detail). It would be interesting to have the details
about how SHRDLU
broke down.

Also - re BillK's useful intro. of DARPA - do those
vehicles work by GPS?

 Mike,

 Imagine a simple 3D scene with 2 different-size
spheres. A simple
 program allows you to change positions of the spheres
and it can
 answer question Is the smaller sphere inside the
bigger sphere?
 [Yes|Partly|No]. I can write such program in no time.
Sure, it's
 extremely simple, but it deals with 3D, it
demonstrates certain level
 of 3D understanding without embodyment and there is no
need to pass
 the orientation parameter to the query function. Note

Re: [agi] Artificial humor

2008-09-11 Thread Mike Tintner




Mike Tintner [EMAIL PROTECTED] wrote:


To understand is to REALISE what [on earth, or
in the [real] world] is being talked about.


Matt: Nice dodge. How do you distinguish between when a computer realizes 
something and when it just reacts as if it realizes it?


Yeah, I know. Turing dodged the question too.



Matt,

I don't understand this objection - maybe I wasn't clear. I said to 
realise is to be able to go and point to the real objects/actions referred 
to, and to make the real actions happen. You understand what a key is if you 
can go and pick one up. You understand what picking up a key is, if you 
can do it. You understand what sex is, if you can point to it, or, better, 
do it, and the scientific observers, or Turing testers, can observe it.


As I said, there are many qualifications and complications - for example to 
understand what mind is, is also to be able to point to one in action, but 
it is a complex business on both sides [both mind and the pointing]  - 
nevertheless if both fruitful scientific and philosophical discussion and 
discovery about the mind are to take place - that real engagement with 
real objects, is exactly what must happen there too. That is the basis of 
science (and technology).


The only obvious places where understanding/ realisation, as defined here, 
*don't* happen  - or *appear* not to happen - are - can you guess? - yes, 
logic and mathematics. And what are the subjects closest to the hearts of 
virtual AGI-ers?


So you are generally intelligent if you can not just have a Turing test 
conversation with me about going and shopping in the supermarket, but 
actually go there and do it, per verbal instructions.


Explain any dodge here.






Re: [agi] Artificial humor... P.S

2008-09-11 Thread Mike Tintner

Matt,

To understand/realise is to be distinguished from (I would argue) to
comprehend statements.

The one is to be able to point to the real objects referred to. The other is
merely to be able to offer or find an alternative or dictionary definition
of the statements. A translation. Like the Chinese room translator. Who is
dealing in words, just words. Mere words.

(I'm open to an alternative title for 'comprehend' - if it in any way grates on you as a term, please say.)






Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
Matt: Humor detection obviously requires a sophisticated language model and 
knowledge of popular culture, current events, and what jokes have been told 
before. Since entertainment is a big sector of the economy, an AGI needs all 
human knowledge, not just knowledge that is work related.


In many ways, it was brave of you to pursue this idea, and the results are fascinating. You see, there is one central thing you need in order to write a joke. (Have you ever tried it? You must presumably in some respect). You can't just logically, formulaically analyse those jokes - the common ingredients of, say, the lightbulb jokes. When you write something - even some logical extension, say, re how many plumbers it takes to change a light bulb - the joke *has* to strike you as funny. You have to laugh. It's the only way to test the joke.


Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


But what makes you laugh? The common ingredient of humour is human error. We 
laugh at humans making mistakes - mistakes that were/are preventable. People 
having their head stuck snootily in the air, and so falling on banana skins. 
Mrs Malaprop mispronouncing, misconstruing big words while trying to look 
clever, and refusing to admit her ignorance. And we laugh because we can 
personally identify, because we've made those kinds of mistakes. They are a fundamental and continuous part of our lives. (How will your AGI identify?)


So are AGI-ers *heroic* figures trying to be/produce giants, or are they 
*comic* figures, like Don Quixote, who are in fact tilting at windmills, and 
refusing even to check whether those windmill arms actually belong to 
giants?


There isn't a purely logicomathematical way to decide that. It takes an 
artistic as well as a scientific mentality involving not just whole 
different parts of your brain, but different faculties and sensibilities - 
all v. real, and not reducible to logic and maths. When you deal with AGI 
problems -  like the problem of AGI itself - you need them.


(You may think this all esoteric, but in fact, you need all those same faculties to understand everything that is precious to you - the universe/ world/ society/ atoms/ genes/ machines - even logic and maths. But more of that another time.)







Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


Actually, many of us do.  And this is why your posts are so problematical. 
You invent what *we* believe and what we intend to do.  And then you 
criticize your total fabrications (a.k.a. mental masturbation).


You/others have plans for an *embodied* computer with the equivalent of an autonomic nervous system and the relevant, attached internal organs? A robot? That's certainly news to me. Please expand.








Re: [agi] Artificial humor

2008-09-10 Thread Mike Tintner
There is no computer or robot that keeps getting physically excited or 
depressed by its computations. (But it would be a good idea).


you don't even realize that laptops (and many other computers -- not to 
mention appliances) currently do precisely what you claim that no computer 
or robot does.


Emotional laptops, huh? Sounds like a great story idea for kids learning to 
love their laptops. Pixar needs you. [It hasn't crashed, it's just v. 
depressed].







[agi] Re Artificial Humor

2008-09-10 Thread Mike Tintner
Emotional laptops? On 2nd thoughts it's like Thomas the Tank Engine... If 
s.o. hasn't done it already, there is big money here. Even bigger than you 
earn, if that's humanly possible. Lenny the Laptop...? A really personal 
computer. Whatddya think? Ideas?  [Shh, darling, Lenny's thinking...]





[agi] Perception Understanding of Space

2008-09-10 Thread Mike Tintner

[n.b. my posts are arriving in a weird order]

Jiri: MT: Without a body, you couldn't understand the joke.

False. Would you also say that without a body, you couldn't understand 3D space?


Jiri,

You have to offer a reason why something is False :). You're saying that 3D space *can* be understood without a body?


Er, false. Because

1. Orientation Framework. Your ability to orient yourself in space - or to understand references to orientation in space - e.g. whether something is up or down, in or out, towards or away, here or there, on top of or underneath or upside down or right side up, near or far, left or right, over or under, or going through or around, somewhere or nowhere - as distinct from say just being a point[s] or line[s] on a surface - all depend on having a body, and your capacity to move that body in different directions (and understand other bodies as doing the same).


(We are talking here about what might be called an orientation framework - 
anyone got better ideas? - that is as fundamental to your navigation 
through, and perception, of space, as Descartes' coordinate axes are to 
geometry - and from which just possibly those axes may have evolved).


2. 3-D Geometry. Similarly, your ability to, and indeed incapacity to do 
otherwise than, see and understand the lines in those classic depth 
illusions as being smaller and nearer than the further ones, (when of 
course they're actually the same size),  depends on the embodiment of that 
process, and your imaginatively, embodied-ly, travelling down the lines. The 
whole of 3-d geometry is similarly embodied -  at a certain depth from you 
the viewer - who continually imaginatively and embodied-ly travel around its 
objects and scenes.


3. Photographs of Physical Scenes and Objects. You cannot look at a physical scene without seeing it as entailing a pov from a viewer. You cannot
understand how the objects within that scene are about to move - whether a 
tipping bucket say is about to fall on someone going under the ladder or 
ascend to heaven  - without embodying those objects. You cannot understand 
the tipping or the falling - or which direction even the objects are 
moving in - without imaginatively, embodiedly projecting their movements 
(despite their actual stillness on the page). You cannot look at a street 
without walking down it.


4. Object-ification. Similarly, your remarkable ability to even conceive of 
objects as you look around in real space, let alone a screen, depends on 
having a body and being able to embody them. You don't actually see whole 
objects  as you experience cups/chairs/pencils etc. You just see v. partial 
facades/surfaces - never the whole object. Objects have to be reconstructed 
in the observing mind - an embodied process - which derives from your 
physically having touched and/or travelled around them. [Hence, I guess, 
touch and movement precede vision in the evolution of species/mind - and 
blind people don't need vision to see and draw the outlines of objects]


5. Late Development of Transcendental Perspective. Your very capacity to conceive of a disembodied mind out of space and time, whether in the form of a computer or a divine entity or, say, some meditative process that steps outside space and time - is a capacity that has to be developed over time through childhood, pre-stage by pre-stage, from a primarily here and now perspective. (Your whole concept of disembodied or functionalism or any other variation *presupposes* being embodied.) See Margaret Donaldson, Human Minds: An Exploration, and the development of the Transcendent Mode, summarised in:


http://www.imprint.co.uk/pdf/Thompson.pdf

Well worth reading whole paper. V. important.

6. Understanding of Number and Operations on Numbers/Objects. Your very 
ability to understand number and numerical operations like adding and 
dividing depends on your ability to embody them. You automatically 
understand, for example, that 1 + 1 does NOT equal 2 if you are adding one 
ice cream to another ice cream. You only came to numbers through your body, 
and through tallying and counting and physically conjoining objects.
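
To make that concrete, a throwaway sketch of my own (Python, purely 
illustrative, not anything from the original post): counting discrete 
objects obeys 1 + 1 = 2, but physically "adding" one scoop of ice cream to 
another just yields one bigger blob - the count stays at 1 while the amount 
grows - and only bodily experience tells you which rule applies:

def add_counts(a, b):
    # abstract arithmetic over discrete counts: 1 + 1 = 2
    return a + b

def add_scoops(grams_a, grams_b):
    # embodied "adding": the scoops merge into one blob with more mass
    return {"blobs": 1, "grams": grams_a + grams_b}

print(add_counts(1, 1))        # 2
print(add_scoops(100, 100))    # {'blobs': 1, 'grams': 200}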


In fact, the whole of rationality - logic and maths and formal languages - 
depends utterly on imagination and embodiment.


In fact, your entire worldview - your ability to conceive of the far 
universe, and the deep interior of atoms and fundamental particles, and the 
distant past of evolution and the big bang, and the distant future of AGI 
or, pace Death Race, "2012: The US Economy Has Collapsed" - depends 
utterly on your capacity for imaginative, embodied projection, and for space 
and time travel, way beyond the very narrow horizons of your immediate 
environment.


Exciting, no?

It's best not to try to fight it, but to go with it, and better understand 
the details.


P.S. What has emerged in this post for me is an interesting, single 
concept - that our understanding of the world and 

Re: [agi] Perception Understanding of Space

2008-09-10 Thread Mike Tintner


You're saying that 3D space *can* be understood without a body?
Er, false.


http://en.wikipedia.org/wiki/SHRDLU

And SHRDLU can generally recognize whether any object is "in" any other 
object - whether a doll is in a box or lying between two walls, whether a 
box is in another box, whether it's open or lidless, or upside down, etc. 
etc.?


"The result was a tremendously successful demonstration of AI. This led 
other AI researchers to excessive optimism which was soon lost when later 
systems attempted to deal with more realistic situations with real-world 
ambiguity and complexity."


"In", "out" etc. are supremely open-ended concepts. Do you think SHRDLU 
understood such concepts? Check out the extensive discussion of "over" in 
Women, Fire and Dangerous Things, and the over 180 meanings of the concept 
(although I'd take a somewhat different approach)... unless you prefer 
"excessive optimism" :)
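
Just to make the gap concrete, here is a minimal sketch of my own (Python, 
nothing to do with SHRDLU's actual code) of the kind of purely geometric 
"in" test a blocks-world program might rely on: axis-aligned box 
containment. It handles the easy geometric case and has nothing to say about 
a doll lying between two walls, an open vs lidless box, or the rest of the 
concept's many senses:

from dataclasses import dataclass

@dataclass
class Box:
    # one corner of the box plus its extents along x, y, z
    x: float
    y: float
    z: float
    w: float
    d: float
    h: float

    def contains(self, other: "Box") -> bool:
        # True only if other's volume fits entirely inside this box's volume
        return (self.x <= other.x and other.x + other.w <= self.x + self.w
                and self.y <= other.y and other.y + other.d <= self.y + self.d
                and self.z <= other.z and other.z + other.h <= self.z + self.h)

box = Box(0, 0, 0, 5, 5, 5)
doll = Box(1, 1, 0, 1, 1, 2)
print(box.contains(doll))   # True - the easy, purely geometric case of "in"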


...







Re: [agi] Philosophy of General Intelligence

2008-09-09 Thread Mike Tintner

Narrow AI: Stereotypical/ Patterned/ Rational

Matt: "Suppose you write a program that inputs jokes or cartoons and outputs 
whether or not they are funny"

AGI: Stereotype-/Pattern-breaking/Creative

"What you rebellin' against?"
"Whatcha got?"

Marlon Brando, The Wild One (1953). On screen, he rebelled against "the 
man"; offscreen, he rebelled against the rebel stereotype imposed on him.







Re: [agi] Artificial humor

2008-09-09 Thread Mike Tintner

Matt,

Humor is dependent not on inductive reasoning by association, reversed or 
otherwise, but on the crossing of whole matrices/spaces/scripts... and on 
that good old AGI standby, "domains". See Koestler esp. for how it's one 
version of all creativity -


http://www.casbs.org/~turner/art/deacon_images/index.html

Solve humor and you solve AGI. 







Re: [agi] draft for comment

2008-09-07 Thread Mike Tintner

Pei: As I said before, you give "symbol" a very narrow meaning, and insist
that it is the only way to use it. In the current discussion,
symbols are not 'X', 'Y', 'Z', but 'table', 'time', 'intelligence'.
BTW, what images do you associate with the latter two?

Since you prefer to use person as example, let me try the same. All of
my experience about 'Mike Tintner' is symbolic, nothing visual, but it
still makes you real enough to me...

I'm sorry if it sounds rude


Pei,

You attribute to symbols far too broad powers that they simply don't have - 
and demonstrably, scientifically, don't have.


For example, you think that your experience of Mike Tintner - "the rude 
guy" - is entirely symbolic. Yes, all your experience of me has been mediated 
entirely via language/symbols - these posts. But by far the most important 
parts of it have actually been images. Ridiculous, huh?


Look at this sentence:

"If you want to hear about it, you'll probably want to know where I was 
born, and what a lousy childhood I had, and how my parents were occupied 
before they had me, and all the David Copperfield crap, but if you want to 
know the truth, I don't really want to get into it."


In 60 words,  one of the great opening sentences of a novel, Salinger has 
created a whole character. How? He did it by creating a voice. He did it by 
what is called prosody (and also diction). No current AGI method has the 
least idea of how to process that prosody. But your brain does. Pei doesn't. 
But his/your brain does.


And your experience of MT has been heavily based, similarly, on processing the 
*sound* images - the voice behind my words. Hence your "I'm sorry if it 
*sounds* rude".


Words, even written words, aren't just symbols, they are sounds. And your 
brain hears those sounds and from their music can tell many, many things, 
including the emotions of the speaker, and whether they're being angry or 
ironic or rude.


Now, if you had had more of a literary/arts education, you would probably be 
alive to that dimension. But, as it is, you've missed it, and you're missing 
all kinds of dimensions of how symbols work.


Similarly, if you had more of a visual education, and also more of a 
psychological developmental background, you wouldn't find "time" and 
"intelligence" so daunting to visualise.


You would realise that it takes a great deal of time and preparatory 
sensory/imaginative experience to build up abstract concepts.


You would realise that it takes time for an infant to come to use that 
word, and still more for a child to understand the word "intelligence". I 
doubt that any child will understand "time" before they've seen a watch or 
clock, and that's what they will probably visualise time as, first. Your 
capacity to abstract time still further will have come from having become 
gradually acquainted with a whole range of time-measuring devices, and from 
seeing the word "time" and associating it with many other kinds of 
measurement, especially in relation to maths and science.


Similarly, a person's concept of intelligence will come from seeing and 
hearing people solving problems in different ways - quickly and slowly, for 
example. It will be deeply grounded in sensory images and experience.


All the most abstract maths and logic that you may think totally abstract 
are similarly and necessarily grounded. Ben, in parallel to you, didn't 
realise that the decimal numeral system is digital - based on the hand - and 
so, a little less obviously, is the Roman numeral system. Numbers and logic 
have to be built up out of experience.


(You might profit, BTW, by looking at Barsalou - many of his papers are 
online - to see how the mind modally simulates concepts, with lots of 
experimental evidence.)


I, as you know, am very ignorant about computers; but you are also very 
ignorant about all kinds of dimensions of how symbols work, and intelligence 
generally, that are absolutely essential for AGI. You can continue to look 
down on me, or you can open your mind, recognize that general intelligence 
can only be achieved by a confluence of disciplines way beyond the reach of 
any single individual, and see that maybe useful exchanges can take place. 







[agi] Philosophy of General Intelligence

2008-09-07 Thread Mike Tintner

Jiri: Mike,

If you think your AGI know-how is superior to the know-how of those
who already built testable thinking machines then why don't you try to
build one yourself?

Jiri,

I don't think I know much at all about machines or software and never claim 
to. I think I know certain, only certain, things about the psychological and 
philosophical aspects of general intelligence - esp., BTW, about the things 
you guys almost never discuss: the kinds of problems that a general 
intelligence must solve.


You may think that your objections to me are entirely personal - about my 
manner. I suggest that there is also a v. deep difference of philosophy 
involved here.


I believe that GI really is about *general* intelligence - a GI (and the 
only serious example we have is human) is, crucially, and must be, able to 
cross domains - ANY domain. That means the whole of our culture and society. 
It means every kind of representation, not just mathematical and logical and 
linguistic, but everything - visual, aural, solid, models, embodied etc. etc. 
There is a vast range. That means also every subject domain - artistic, 
historical, scientific, philosophical, technological, politics, business 
etc. Yes, you have to start somewhere, but there should be no limit to how 
you progress.


And the subject of general intelligence is therefore in no way just the 
property of a small community of programmers, or roboticists - it's the 
property of all the sciences, incl. neuroscience, psychology, semiology, 
developmental psychology, AND the arts and philosophy etc. etc. And it can 
only be a collaborative effort. Some robotics disciplines, I believe, do 
think somewhat along those lines and align themselves with certain sciences. 
Some AI-ers also align themselves broadly with scientists and philosophers.


By definition, too, general intelligence should embrace every kind of 
problem that humans have to deal with - again artistic, practical, 
technological, political, marketing etc. etc.


The idea that general intelligence really could be anything else but truly 
general is, I suggest, if you really think about it, absurd. It's like 
preaching universal brotherhood, and a global society, and then practising 
severe racism.


But that's exactly what's happening in current AGI. You're actually 
practising a highly specialised approach to AGI - only certain kinds of 
representation, only certain kinds of problems are considered - basically 
the ones you were taught and are comfortable with - a very, very narrow 
range - (to a great extent in line with the v. narrow definition of 
intelligence involved in the IQ test).


When I raised other kinds of problems, Pei considered it "not constructive". 
When I recently suggested an in fact brilliant game for producing creative 
metaphors, DZ considered it "childish", because it was visual and 
imaginative, and you guys don't do those things, or barely. (Far from being 
childish, that game produced a rich series of visual/verbal metaphors, where 
AGI has produced nothing.)


If you aren't prepared to use your imagination and recognize the other half 
of the brain, you are, frankly, completely buggered as far as AGI is 
concerned. In over 2000 years, logic and mathematics haven't produced a 
single metaphor or analogy or crossed any domains. They're not meant to - 
that's expressly forbidden. But the arts produce metaphors and analogies on 
a daily basis by the thousands. The grand irony here is that creativity 
really is - from a strictly technical pov - largely what our culture has 
always said it is - imaginative/artistic and not rational. (Many rational 
thinkers are creative - but by using their imagination.) AGI will in fact 
only work if sciences and arts align.


Here, then, is basically why I think you're getting upset over and over by 
me. I'm saying, in many different ways, that general intelligence really 
should be general, and embrace the whole of culture and intelligence, not 
just the very narrow sections you guys espouse. And yes, I think you should 
be delighted to defer to, and learn from, outsiders (if they deserve it), 
just as I'm delighted to learn from you. But you're not - you resent 
outsiders like me telling you about your subject.


I think you should also be prepared to admit your ignorance - and most of 
you, frankly, don't have much of a clue about imaginative/visual/artistic 
intelligence and vast swathes of problemsolving (just as I don't have 
much of a clue re your technology and many kinds of problemsolving... etc). 
But there is v. little willingness to admit ignorance, or to acknowledge the 
value of other disciplines.


In the final analysis, I suggest, that's just sheer cultural prejudice. It 
doesn't belong in the new millennium, when the defining paradigm is global 
(and general) as opposed to the local (and specialist) mentality of the old 
one - recognizing the value and interdependence of ALL parts of society and 
culture. And it doesn't 

Re: [agi] Philosophy of General Intelligence

2008-09-07 Thread Mike Tintner

Terren,

You may be right - in the sense that I would have to just butt out of 
certain conversations, to go away  educate myself.


There's just one thing here though - and again this is a central 
philosophical difference, this time concerning the creative process.


Can you tell me which kind of programming is necessary for which 
end-problem[s] that general intelligence must solve? Which kind of 
programming, IOW, can you *guarantee* me will definitely not be a waste of 
my time (other than by way of general education)? Which kind are you 
*sure* will help solve which unsolved problem of AGI?


P.S. OTOH the idea that in the kind of general community I'm espousing, (and 
is beginning to crop up in other areas), everyone must be proficient in 
everyone else's speciality is actually a non-starter, Terren. It defeats the 
object of the division of labour central to all parts of the economy. If you 
had to spend as much time thinking about those end-problems as I have, I 
suggest you'd have to drop everything. Let's just share expertise instead?



Terren: Good summary. I think your point of view is valuable in the sense of 
helping engineers in AGI to see what they may be missing. And your call for 
technical AI folks to take up the mantle of more artistic modes of 
intelligence is also important.


But it's empty, for you've demonstrated no willingness to cross over to 
engage in technical arguments beyond a certain, quite limited, depth. 
Admitting your ignorance is one thing, and it's laudable, but it only goes 
so far. I think if you're serious about getting folks (like Pei Wang) to 
take you seriously, then you need to also demonstrate your willingness to 
get your hands dirty and do some programming, or in some other way abolish 
your ignorance about technical subjects - exactly what you're asking 
others to do.


Otherwise, you have to admit the folly of trying to compel any such folks 
to move from their hard-earned perspectives, if you're not willing to do 
that yourself.


Terren


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner

Will,

Yes, humans are manifestly a RADICALLY different machine paradigm - if you 
care to stand back and look at the big picture.


Employ a machine of any kind and, in general, you know what you're getting - 
some glitches (esp. with complex programs) etc., sure - but basically, in 
general, it will do its job.


Humans are only human, not a machine. Employ one of those, incl. yourself, 
and, by comparison, you have only a v. limited idea of what you're getting - 
whether they'll do the job at all, to what extent, how well. Employ a 
programmer, a plumber etc. etc. "Can you get a good one these days?"... 
VAST difference.


And that's the negative side of our positive side - the fact that we're 1) 
supremely adaptable, and 2) can tackle those problems that no machine or 
current AGI  - (actually of course, there is no such thing at the mo, only 
pretenders) - can even *begin* to tackle.


Our unreliability ...

That, I suggest, only comes from having no set structure - no computer 
program - no program of action in the first place. (Hey, good idea - who 
needs a program?)


Here's a simple, extreme example.

Will, I want you to take up to an hour, and come up with a dance, called 
"the Keyboard Shuffle". (A very ill-structured problem.)


Hey, you can do that. You can tackle a seriously ill-structured problem. You 
can embark on an activity you've never done before, presumably had no 
training for, have no structure for, yet you will, if cooperative, come up 
with something - cobble together a session of that activity, and an 
end-product, an actual dance. May be shit, but it'll be a dance.


And that's only an extreme example of how you approach EVERY activity. You 
similarly don't have a structure for your next hour[s], if you're writing an 
essay, or a program, or spending time watching TV, flipping channels. You may 
quickly *adopt* or *form* certain structures/routines. But they only go 
part way, and you do have to adopt and/or create them.


Now, I assert,  that's what an AGI is - a machine that has no programs, (no 
preset, complete structures for any activities), designed to tackle 
ill-structured problems by creating and adopting structures, not 
automatically following ones that have been laboured over for ridiculous 
amounts of time by human programmers offstage.


And that in parallel, though in an obviously more constrained way, is what 
every living organism is - an extraordinary machine that builds itself 
adaptively and flexibly, as it goes along  -  Dawkins' famous plane that 
builds itself in mid-air. Just as we construct our activities in mid-air. 
Also a very different machine paradigm to any we have at the mo  (although 
obviously lots of people are currently trying to design/understand such 
self-building machines).


P.S. The irony is that scientists and rational philosophers, faced with the 
extreme nature of human imperfection - our extreme fallibility (in the sense 
described above - i.e. liable to fail/give up/procrastinate at any given 
activity at any point in a myriad of ways) - have dismissed it as, 
essentially, down to bugs in the system. Things that can be fixed.


AGI-ers have the capacity like no one else to see and truly appreciate that 
such fallibility = highly desirable adaptability and that humans/animals 
really are fundamentally different machines.


P.P.S.  BTW that's the proper analogy for constructing an AGI - not 
inventing the plane (easy-peasy), but inventing the plane that builds itself 
in mid-air, (whole new paradigm of machine- and mind- invention).


Will: MT: "By contrast, all deterministic/programmed machines and computers 
are guaranteed to complete any task they begin."


Will: If only such could be guaranteed! We would never have system hangs, 
deadlocks. Even if it could be made so, computer systems would not 
always want to do so.

Will,

That's a legalistic, not a valid, objection (although heartfelt!). In the 
above case, the computer is guaranteed to hang - and it does, strictly, 
complete its task.


Not necessarily - the task could be interrupted, or that process stopped 
or paused indefinitely.


What's happened is that you have had imperfect knowledge of the program's
operations. Had you known more, you would have known that it would hang.


If it hung because of multi-process issues, you would need perfect 
knowledge of the environment to know the possible timing issues as 
well.


Were your computer like a human mind, it would have been able to say (as 
you/we all do) - "well, if that part of the problem is going to be 
difficult, I'll ignore it", or "I'll just make up an answer...", or "by God, 
I'll keep trying other ways until I do solve this", or... or... 
Computers, currently, aren't free thinkers.



Computers aren't free thinkers, but it does not follow from an
inability to switch, cancel, pause and restart or modify tasks. All
of which they can do admirably. They just don't tend to do so, because
they aren't smart enough (and cannot change 

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner

Sorry - the para "Our unreliability ..." should have continued:

Our unreliability is the negative flip-side of our positive ability to stop 
an activity at any point, incl. the beginning, and completely change 
tack/course or whole approach, incl. the task itself, and even completely 
contradict ourselves.







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner
DZ: AGI researchers do not think of intelligence as what you think of as a 
computer program -- some rigid sequence of logical operations programmed by a 
designer to mimic intelligent behavior.

1. Sequence/Structure. The concept I've been using is not that a program is a 
sequence of operations but a structure - including, as per NARS (as I've 
read Pei), a structure that may change more or less continuously. Techno-idiot 
that I am, I am fairly aware that many modern programs are extremely 
sophisticated and complex structures. I take into account, for example, 
Minsky's idea of a possible "society of mind", with many different parts 
perhaps competing - not obviously realised in program form yet. 

But programs are nevertheless manifestly structures. Would you dispute that?

And a central point I've been making is that human life and activities are 
manifestly *unstructured* - that in just about everything we do, we struggle to 
impose structure on our activities - to impose order and organization, 
planning, focus etc.

Especially in AGI's central challenge - creativity. Creative activities are 
outstanding examples of unstructured activities, in which structures have to be 
created - painting scenes, writing stories, designing new machines, writing 
music/pop songs - often starting from an entirely blank page. (What's the 
program equivalent?)

2. A Programmer on Programs. "I am persuaded on multiple grounds that the 
human mind is not always algorithmic, nor merely computational in the syntactic 
sense of computational."
- S. Kauffman, Reinventing the Sacred

Try Chap. 12. Computationally, he trumps most AGI-ers by the standards of most 
AI departments - incl. complexity, bioinformatics and general standing - no? 
Read the whole book in fact - it can be read as being entirely about the 
creative problem/challenge of AGI. You liked Barsalou; you'll like this.






Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Mike Tintner
Er sorry - my question is answered in the interesting Slashdot thread 
(thanks again):


"Past studies have shown how many neurons are involved in a single, simple 
memory. Researchers might be able to isolate a few single neurons in the 
process of summoning a memory, but that is like saying that they have 
isolated a few water molecules in the runoff of a giant hydroelectric dam. 
The practical utility of this is highly questionable." (and much more... 
good thread)







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner

"OK, I'll bite: what's nondeterministic programming if not a contradiction?"

Again - v. briefly - it's a reality - nondeterministic programming is a 
reality, so there's no material, mechanistic, software problem in getting a 
machine to decide either way. The only problem is a logical one of doing it 
for sensible reasons. And that's the long part - there is a continuous 
stream of sensible reasons, as there are for current nondeterministic 
computer choices.


Yes, strictly, a nondeterministic *program* can be regarded as a 
contradiction - i.e. a structured *series* of instructions to decide freely. 
The way the human mind is programmed is that we are not only free to, and 
have to, *decide* either way about certain decisions, but we are also free 
to *think* about it - i.e. to decide metacognitively whether and how we 
decide at all - we continually decide, for example, to put off the 
decision till later.
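
For what it's worth, here is a toy sketch of my own (Python, not anyone's 
proposed AGI design) of "nondeterministic programming" in this mundane 
sense: a briefed rather than programmed task, where at every step the 
machine is free to proceed, defer, or abandon. The choice below is just 
random - the hard part, as above, is making such choices for sensible 
reasons:

import random

def briefed_task(steps):
    # Attempt a list of steps, deciding freely at each one whether to
    # proceed, put it off, or jack the whole thing in.
    pending = list(steps)
    while pending:
        choice = random.choice(["proceed", "defer", "abandon"])
        if choice == "proceed":
            print("doing:", pending.pop(0))
        elif choice == "defer":
            pending.append(pending.pop(0))   # procrastinate: push it to the back
            print("putting off:", pending[-1])
        else:
            print("giving up with", len(pending), "step(s) left")
            return False
    return True

briefed_task(["open editor", "draft post", "revise", "send"])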


So the simple reality of being as free to decide and think as you are is 
that when you sit down to engage in any task - like writing a post or essay, 
or having a conversation, or almost literally anything - there is no guarantee 
that you will start, or continue to the 2nd, 3rd, 4th step, let alone 
complete it. You may jack in your post more or less immediately. This is at 
once the bane and the blessing of your life, and why you have such 
extraordinary problems finishing so many things. Procrastination.


By contrast, all deterministic/programmed machines and computers are 
guaranteed to complete any task they begin. (Zero procrastination or 
deviation). Very different kinds of machines to us. Very different paradigm. 
(No?)


I would say, then, that the human mind is strictly not so much 
nondeterministically programmed as "briefed". And that's how an AGI will 
have to function. 







<    1   2   3   4   5   6   7   8   >