Re: [singularity] pattern definition

2008-05-05 Thread Mike Tintner

Xav,

Really interesting post.

A pattern is an underlying structure of  things - from a herringbone pattern 
that could be variously implemented on different cloths to a structure 
underlying different series of numbers..


It is not a "representation."

Nor is a program a representation. A program that produces a circle, is not 
a representation of a circle. It is essentially a recipe - a set of 
instructions to CONSTRUCT a circle.


To confuse pattern and program with representations is to confuse the recipe 
with the actual dish of food.
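To make the "recipe" point concrete, here is a minimal sketch of my own (a
hypothetical illustration, not anything from Ben's or Looks et al.'s work): a
few lines of Python are enough to construct a circle as hundreds of explicit
points. The program is a far shorter description than the drawing it produces -
which is the only sense in which a pattern/program is on a "simpler scale"
than the thing itself.

    import math

    # A short recipe that CONSTRUCTS a circle as n explicit (x, y) points.
    def circle_points(radius=1.0, n=360):
        return [(radius * math.cos(2 * math.pi * k / n),
                 radius * math.sin(2 * math.pi * k / n))
                for k in range(n)]

    points = circle_points()
    # ~6 lines of instructions yield 360 coordinate pairs: the recipe is
    # far more compact than the "dish" it cooks up.
    print(len(points))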


If you start with a photo or detailed image of an object, and from that do 
an outline drawing or schematic image, you then have a representation on a 
simpler scale.



Hello

I am writing a literature review on AGI and I am mentioning the 
definition of pattern as explained by Ben in his work.


"A pattern is a representation of an object on a simpler scale. For 
example, a pattern in a drawing of a mathematical curve could be a 
program that can compute the curve from a formula (Looks et al. 2004).  My 
supervisor told me that "she doesn?t see how this can be simpler  than the 
actual drawing".


Any other definition I could use in the same context to explain to a 
non-technical audience?


thanks

xav






Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner

Jean-Paul,

More or less yes to your points. (I was only tossing off something quickly). 
Actually I think there's a common core to 2)-7) and will be setting out 
something about that soon. But I don't think it's recognizing patterns - on 
the contrary, the common problem is partly that there ISN'T a pattern to be 
recognized. If you have to understand the metaphor, the "dancing towers," 
there's no common pattern between human dancers and the skyscrapers referred 
to.


I also think that while there's a common core, each problem has its own 
complications. Maybe Hawkins is right that all the senses process inputs in 
basically the same hierarchical fashion - and any mechanical AGI's senses 
will have to do the same - but if you think about it, the senses evolved 
gradually, so there must be different reasons for that.


(And I would add another unsolved (& unrecognized) problem for AGI:

9)Common Sense Processing - being able to process an event in multiple 
sensory modalities, and switch between them to solve problems - for example, 
to be able to touch an object blindfolded, and then draw its outlines 
visually.  )



Jean-Paul: Your "1" consists of two separate challenges: (1) reasoning 
& (2) learning.
IMHO your 3 to 6 can be classified under (3) pattern recognition. I think 
perhaps even your 2 may flow out of pattern recognition.
Of course, the real challenge is to find an algorithmic way (or 
architecture) to do the above without bumping into exponential explosion, 
i.e. move the problem out of the NP-complete arena. (Else an AGI will never 
exceed human intelligence by a real margin.)




"Mike Tintner" <[EMAIL PROTECTED]> wrote:

Your comments are irresponsible.  Many problems of AGI have been solved.
If you disagree with that, specify exactly what you mean by a "problem of
AGI", and let us list them.

1.General Problem Solving and Learning (independently learning/solving
problems in a new domain)

2.Conceptualisation [Invariant Representation] - forming a concept of Madonna 
which can embrace a rich variety of different faces/photos of her

3.Visual Object Recognition

4.Aural "Object" Recognition [dunno the proper term here - being able to
recognize the same melody played in any form]

5.Analogy

6.Metaphor

7.Creativity

8.Narrative Visualisation - being able to imagine and create a visual
scenario (a movie)   [just made this problem up - but it's a good one]







Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner
Ben: So why is it worth repeating the point? Similarly, up till the moment 
when the first astronauts walked on the moon, you could have run around 
yelping that "no one has solved the problem of how to make a person walk on 
the moon, all they've done is propose methods that seem to have promise."

I repeated the details because I was challenged. (And unlike Richard, I do 
answer challenges). The original point - a valid one, I think - is that until 
you've solved one AGI problem, you can't make any reasonable prediction as 
to WHEN the rest will be solved and how much it will cost in resources. And 
it's not worth much discussion.


AGI is different from moonwalking - that WAS successfully predicted by JFK 
because they did indeed have technology reasonably likely to bring it about.


I would compare AGI predictions with predicting when we will have a 
mind-reading machine, (except that personally, I think AGI is much harder). 
Yes, you can have a bit of interesting discussion about that to begin with, 
but then the subject, i.e. making predictions,  exhausts itself, because 
there are too many unknowns. Ditto here. No? 





Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner



Mike,

Your comments are irresponsible.  Many problems of AGI have been solved. 
If you disagree with that, specify exactly what you mean by a "problem of 
AGI", and let us list them.


1.General Problem Solving and Learning (independently learning/solving 
problems in a new domain)


2.Conceptualisation [Invariant Representation] - forming a concept of Madonna 
which can embrace a rich variety of different faces/photos of her


3.Visual Object Recognition

4.Aural "Object" Recognition [dunno the proper term here - being able to 
recognize the same melody played in any form]


5.Analogy

6.Metaphor

7.Creativity

8.Narrative Visualisation - being able to imagine and create a visual 
scenario (a movie)   [just made this problem up - but it's a good one]


[By all means let's identify some more unsolved problems BTW..]

I think Ben & I more or less agreed that if he had really solved 1) - if his 
pet could really independently learn to play hide-and-seek after having been 
taught to fetch, it would constitute a major breakthrough, worthy of 
announcement to the world. And you can be sure it would be provoking a great 
deal of discussion.


As for your "discoveries," fine, have all the self-confidence you want, but 
they have had neither public recognition nor, as I understand, publication 
or identification. Nor do you have a working machine. And if you're going to 
claim anyone in AI, like Hofstadter, has solved 5 or 6...puh-lease.


I don't think any reasonable person in AI or AGI will claim any of these 
have been solved. They may want to claim their method has promise, but not 
that it has actually solved any of them.


Which of the above, or any problem of AGI, period, do you claim to have been 
solved?





Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner
Samantha: From what you said above $50M will do the entire job. If that is all
that is standing between us and AGI then surely we can get on with it in
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it is 
also a framework to end discussions for the moment.


1)  Given our general ignorance, everyone is, strictly, entitled to their 
opinions about the future of AGI. Ben is entitled to his view that it will 
only take $50M or thereabouts.


BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing differently? And until you've solved one, you can hardly make 
*reasonable* predictions about how long it will take to solve the rest - 
predictions that anyone, including yourself, should take seriously - 
especially if you've got any sense, any awareness of AI's long, ridiculous 
and incorrigible record of crazy predictions here, (and that's Minsky & 
Simon as well as lesser lights) - by people also making predictions 
without having solved any of AGI's problems. All investors beware. Massive 
health & wealth warnings.


MEANWHILE

3)Others - and I'm not the only one here - take a view more like: the human 
brain/body is the most awesomely complex machine in the known universe, the 
product of billions of years of evolution.  To emulate it, or parallel its 
powers, is going to take not just trillions but "zillions" of dollars - many 
times global output, many, many Microsofts. Right now that's a reasonable 
POV too.


But until you've solved one, just a measly one of AGI's problems, there's 
not a lot of point in further discussion, is there? Nobody's really gaining 
from it, are they? It's just masturbation, isn't it? 





[singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Mike Tintner
My point was how do you test the *truth* of items of knowledge. Google tests 
the *popularity* of items. Not the same thing at all. And it won't work.


Science and scientists gain knowledge not just by passing info. about things 
around, (as you propose),  but by continually testing and expanding that 
info through interdependent and continuous physical observation of those 
things, physical experiment on those things,  physical discovery/ 
dissections of parts of those things, and physical invention of new sensors 
to see new dimensions of those things AND several other processes.


So, basically, do infants - and so does Matt M.

An AGI s/trapped in a box is the equivalent of Plato's cave-dwellers chained 
in a dark cave. It won't be able to get v. far knowledge-wise or anywise. 
Poor thing. How could you? You obviously need an A.S.P.C,A.G.I.  alongside 
your A.S.P.C.A. Enough of these egocentric concerns with what an un/friendly 
AGI will do to *you*. Think about what you're doing to it. How would you 
like it?


Matt: --- Mike Tintner wrote:

How do you resolve disagreements?


This is a problem for all large databases and multiuser AI systems.  In my
design, messages are identified by source (not necessarily a person) and a
timestamp.  The network economy rewards those sources that provide the most
useful (correct) information. There is an incentive to produce reputation
managers which rank other sources and forward messages from highly ranked
sources, because those managers themselves become highly ranked.

Google handles this problem by using its PageRank algorithm, although I
believe that better (not perfect) solutions are possible in a distributed,
competitive environment.  I believe that these solutions will be deployed
early and be the subject of intense research because it is such a large
problem.  The network I described is vulnerable to spammers and hackers
deliberately injecting false or forged information.  The protocol can only do
so much.  I designed it to minimize these risks.  Thus, there is no procedure
to delete or alter messages once they are posted.  Message recipients are
responsible for verifying the identity and timestamps of senders and for
filtering spam and malicious messages at risk of having their own reputations
lowered if they fail.
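A rough sketch of the kind of bookkeeping described above (the class names,
field names and scoring rule are my own hypothetical illustration, not the
actual protocol): messages carry a source and timestamp and are never altered
once posted; a recipient keeps a per-source score that rises when a source's
information proves useful and falls when it proves false or forged, and only
messages from sufficiently ranked sources get forwarded on.

    from dataclasses import dataclass, field
    from time import time

    @dataclass(frozen=True)          # frozen: posted messages cannot be altered
    class Message:
        source: str                  # not necessarily a person
        timestamp: float
        content: str

    @dataclass
    class ReputationManager:
        scores: dict = field(default_factory=dict)   # source -> rank

        def record(self, msg: Message, useful: bool):
            # Reward sources whose information proves useful (correct);
            # penalise those that inject spam or forged data.
            delta = 1.0 if useful else -1.0
            self.scores[msg.source] = self.scores.get(msg.source, 0.0) + delta

        def forward_worthy(self, msg: Message, threshold: float = 0.0) -> bool:
            # Forward only messages from sufficiently ranked sources.
            return self.scores.get(msg.source, 0.0) >= threshold

    rm = ReputationManager()
    m = Message(source="sensor-42", timestamp=time(), content="observation ...")
    rm.record(m, useful=True)
    print(rm.forward_worthy(m))      # True once the source has a non-negative rank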


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Mike Tintner
Matt: Which are these areas of science, technology, arts, or indeed any area of
human activity, period, where the experts all agree and are NOT in deep
conflict?

MT: And if that's too hard a question, which are the areas of AI or AGI, where
the experts all agree and are not in deep conflict?

Matt: I don't expect the experts to agree.

That's the deadly serious criticism (among many others) of the fantasy of a 
mushrooming database of knowledge. How do you test the supposed "facts" 
resulting from your data mining?


How do you resolve disagreements? How do you know when to disagree with the 
experts? How do you know what is truth and what fantasy? What would your 
superAGI make from these archives about the future of AGI, & the many 
problems of AGI? How would it deal with 40 or however many participants on 
this forum, with their 80 plus opinions on everything? How would it resolve 
the many thousands of different opinions on the Internet on issues like free 
will/determinism or global warming or Iraq or how to seduce women  or the 
role of DNA or where to invest right now?


This - how you test knowledge - is a totally unsolved problem, just as every 
other problem in AGI is totally unsolved.


Until there's the merest glimpse of a solution of just one problem, 
fantasising about what shape a superAGI will or should take is not serious, 
but a total waste of precious time that could be spent trying to solve those 
problems.





Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Mike Tintner

Matt: a super-google will answer these questions by routing them to
experts on these topics that will use natural language in their narrow
domains of expertise.

Another interesting question here is: on how many occasions are the majority 
of experts in any given field wrong? I don't begin to know how to start 
assessing that. But there's a basic truth - which is that they are often 
wrong, and in crucial areas - like politics, economics, investment, medicine, 
etc.


You guys don't seem to have understood one of the basic functions of Google, 
which is precisely to enable you to get a 2nd, 3rd etc opinion - and NOT 
have to rely on the experts! 





Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Mike Tintner

Matt: a super-google will answer these questions by routing them to
experts on these topics that will use natural language in their narrow
domains of expertise.

And Santa will answer every child's request, and we'll all live happily ever 
after.  Amen.


Which are these areas of science, technology, arts, or indeed any area of 
human activity, period, where the experts all agree and are NOT in deep 
conflict?


And if that's too hard a question, which are the areas of AI or AGI, where 
the experts all agree and are not in deep conflict?





Re: [singularity] Vista/AGI

2008-04-07 Thread Mike Tintner

J.A.R.: Like I stated at the beginning, *most* models are at least
theoretically valid.

1. VALID MODELS/IDEAS. I am not aware of ONE model that has one valid or 
even interesting idea about how to produce "general intelligence" - how to 
get an agent to independently learn, or solve problems in, a new domain - to 
cross domains.


Which ones & which ideas are you thinking of?

1a. I am only aware of ONE thinker/systembuilder who has even ADDRESSED the 
problem in any shape or form directly - & IMO poorly - Baum, in a recent 
paper, in which he defines general intelligence practically as moving 
independently from one level of a computer game to another. But at least he 
made an attempt to address the problem. (The recent Swedish ACS robotic 
effort talks about the problem, but the robot only appears to tackle one 
task, rather than moving on from one to another).


Are you aware of any others?

2. FLEDGED INVENTORS/ INNOVATORS

Are there any people in this discussion/group who have any proven record of 
inventing or innovating - e.g. creating a marketed new kind of program? 
Clearly there are many with an extensive professional background, but that's 
different.


IMO while these groups are v. constructive, helpful & friendly, they 
strikingly lack a true CREATIVE culture. Witness the number of people who 
insist that no great/revolutionary, creative ideas are needed for AGI. (In 
fact, I can't think of any AGI leader who doesn't take this position). You 
guys want to be Frankensteins - to create life - one of the greatest 
creative challenges of all time - a task that IMO requires at least a few 
Da Vincis/Turings & an army of Michelangelos/Edisons - but according to 
you guys doesn't even require one big idea! (Does Steve Grand BTW take this 
position?)

That truly makes me weep & want to start pounding my head on the table.

But it might explain why would-be investors aren't excited?

I would strongly urge people to associate more with - and/or seek the 
opinions here of - fledged creatives like Hawkins. 





Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Mike Tintner


Stathis: Sure, you can't interact with the raindrop computation, but that
doesn't mean it isn't conscious

Perhaps this conversation helps define something of consciousness - i.e. to 
be conscious, you have to be able to form and HOLD a 
representation/impression of the world around you, which could be just the 
simplest direct sense impression, as in simple one-celled organisms - and 
might not involve any REFLECTION (the power to recall images/sensory 
impressions later). But you have to be able to hold representations - KEEP 
LOOKING at something - IF you are to seek goals and get to that food. The 
bacteria have to keep zeroed in on that food they're "flagellating" towards.


Now that's what inanimate objects can't do - including raindrops. They may 
have continuously fleeting impressions of the world around, but those 
impressions do keep fleeting. Inanimate objects can't hold on to them. 
(Perhaps - though I can't give any reasonable explanation for this - brains 
evolved in part in order to retain impressions).


Them's my first thoughts. Welcome responses. 





[singularity] Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Mike Tintner

Matt: I realize that a
full (Turing test) model can only be learned by having a full range of human
experiences in a human body.

Pray expand. I thought v. few here think that. Your definition seems to 
imply AGI must inevitably be embodied.  It also implies an evolutionary 
model of embodied AGI - a lower-intelligence animal-level model will have 
to have a proportionately lower-agility animal body. It also prompts the v. 
interesting speculation - (and has it ever been discussed on either 
forum?) - of what kind of superbody a superAGI would have to have?  (I would 
personally find *that* area of future speculation interesting if not super). 
Thoughts there too? No superhero fans around?





Re: [singularity] Wrong focus?

2008-01-31 Thread Mike Tintner


Stathis: The fact is, you are already living in a virtual environment. Your
brain creates a picture of the world based on sensory data. You can't
*really* know what a table is, or even that there is a table there in
front of you at all. All you can know is that you have particular
table-like experiences, which seem to be consistently generated by
what you come to think of as the external object "table". There is no
way to be certain that the picture in your head - including the
picture you have of your own body - is generated by a real external
environment rather than by a computer sending appropriately high
resolution signals to fool your brain:

http://en.wikipedia.org/wiki/Brain_in_a_vat



Stathis,

So when you see and touch your penis, you have no idea whether it's really 
there? And you cannot be certain that it's your penis and not someone 
else's? Despite having touched it...how many times?


The philosophical conceit that we do not really know that there is a table 
(or a penis) in front of us, is just that - a fanciful conceit. It shows 
what happens when you rely on words and symbols as your sole medium of 
intellectual thought - as philosophers mainly do.


In reality, you have no problem knowing and being sure of those objects and 
the world around you  - except in exceptional circumstances. Why? Two 
reasons.


First, all sensations/perceptions are continually being unconsciously tested 
for their reality -  a process which I would have thought every AI/robotics 
person would take for granted. Hence your brain occasionally thinks: "was 
that really so-and-so I saw?"...or: "where exactly in my foot *is* that 
pain?" Your unconscious brain has had problems checking some perception.


Secondly, your brain works by *common sense* perception and testing. We are 
continually testing our perceptions with all our senses and our whole body. 
You don't just look at things, you reach out and touch them, smell them, 
taste them, and confirm over and over that your perceptions are valid. (Also 
it's worth pointing out that since you are continually moving in relation to 
objects, your many different-angle "shots" of them are continually tested 
against each other for consistency). Like a good journalist, you check more 
than one source. Your perceptions are continually tested in a deeply 
embodied way - and in general v. much "in touch" with reality.


But when you and philosophers come to think intellectually about perception, 
because you then rely solely on words and symbols - and cease to test your 
ideas about, as distinct from actual, perceptions in an embodied way - you 
come up with literally non-sense. (Hence it is that philosophers are the 
common butt of "how do you know you're really here?" jokes at parties). And 
disembodied AGI seems to me a loosely similar disembodied conceit..








Re: [singularity] Wrong focus?

2008-01-31 Thread Mike Tintner

Stathis,

I think the question should reverse - I and every (most?) creature can 
distinguish between a real and a virtual environment. How on earth can a 
virtual creature make the same distinction? How can it have a body, or a 
continuous sense of a body? How can it have a continuous map of the world, 
with a continuous physical sense of up/down, forward/back, 
heaviness/lightness?  And a fairly continuous sense of time passing? How can 
it have a self? How can it have continuous (conflicting) emotions coursing 
through its body? How can it also have a continuous sense of its energy and 
muscles - of zest/apathy, strength/weakness, awakeness/tiredness? How can it 
have a sense of its posture, and muscles tight or loose?


How can it continually use these complex body maps and models to map and 
model other creatures - other humans, animals - other shapes and things - 
and get a sense of their solidity/ speed/ etc? How can it have continuous 
empathy for creatures around it and sense their moods and body states?


How BTW do computer visual systems visually interpret any ongoing physical 
scene? My understanding is they still can't identify basic physical shapes 
with any reasonable success rate. So how can they understand that a certain 
object is falling to the floor, & not being pulled - or floating upwards, & 
not being lifted? And so on and on for every physical interaction? My 
(pretty ignorant) understanding is they can do none of these - and without a 
body, I would think it unlikely that they will.


As I write all this, a central distinction re embodiment becomes clear - 
yes, computers can have a "society of mind". What they standalone can't have 
is a "society of body". And of course that society of body is even greater 
and inseparable from the society of mind - a simply vast (ahem 
"mind-blowing") organization.


BTW I'd be grateful if you & others would constructively & not just 
defensively engage with these points. I'm obviously just starting to reach 
for a systematic statement of the essential need for embodiment.


One part of that essential need, as I said,  is that your body and your 
sense of your self in that body provides your continuous set of maps and 
models of the world around - which is what mirror neurons help establish. 
So trying to exclude your body from your (or your AGI's) intelligence of the 
world is like trying to do *science without geometry* - quite impossible. 
And trying to exclude your body is also like trying to do *science without 
art* -  to see Stathis or let's say Brad Pitt/ Jennifer Aniston purely as a 
set of formulae, mathematical formulae and verbal descriptions - and not the 
living breathing complexly emotional and psychological body/creature that is 
progressively evident in a photo, a movie, a theatrical stage, the living 
flesh in front of you.


The *grand illusion* here - shared by all of you - and this is crucial - is 
that of the power of words and symbols. You are all religious believers 
here - you basically are inheritors of:


"In the beginning was the Word, and the Word was with God, and the Word was 
God..."


That's what's going on here. The grand illusion is that because words and 
other symbols can POINT at everything, and NAME everything, they can 
therefore conjure up everything - they ARE the things they point to. They ARE 
God. And young children can literally have these illusions, whereas they are 
merely implicit in AGI-ers' thinking.


Actually no - the word or name is not the object it refers to, the map is 
not the territory. You have to have the real object to really know it. A 
real flower, the real Stathis, the real Brad Pitt etc.  The real world 
around you.  And your own real body in it.


And you need a real body not only to *know* that other body - but to *take 
it to pieces* in all physical senses - to touch and feel how heavy, how 
rough, how solid - and to do physical, scientific experiments on.


So a disembodied intelligence is like a *science that is theory without 
experiment* - again an impossibility. That's how science *started* thanks to 
Bacon  - by insisting on the need for experiment. If you want to know 
anything, you have to in some sense do scientific or quasi-scientific 
physical experiments on it - your body acting on its body. That's how you 
became proficient in AI - by doing physical embodied "experiments" with 
computers. Had you tried to do it all from books, you would never have got 
anywhere. That's how infants begin by fiddling and sucking etc with things 
and only later naming and numbering them.


Sorry, disembodied AGI is simply an enormous illusion like any of the 
hundreds of illusions in psychological experiments. Initially compelling but 
look closely and its unreality becomes clear. Get real. Get physical.




Stathis quoting MT: The latter. I'm arguing that a disembodied AGI has as much 
chance of getting to know, understand and be intelligent about the world as 
Tommy - a deaf, dumb and blind ...

Re: [singularity] Wrong focus?

2008-01-28 Thread Mike Tintner

Tom:"embodied cognitive science" gets 5,310 hits on Google. "cognitive
science" gets 2,730,000 hits. Please back up your statements,
especially ones which talk about "revolutions" in any field.

Check out the wiki article - look at the figures at the bottom such as 
Lakoff & co & Google them.  Check out Pfeiffer. Note how many recent books 
in philosophy, psychology and cognitive science are focussing on embodiment 
in one way or other. Check out the Berkeley/ California configuration of 
these guys. Check out morphological computation - and the relevant 
conference. Check out Ramachandran:


"Without a doubt it is one of the most important discoveries ever made about 
the brain, Mirror neurons will do for psychology what DNA did for biology. 
They will provide a unifying framework and help explain a host of mental 
abilities that have hitherto remained mysterious..."


Read Sandra Blakeslee - The Body Has a Mind of Its Own - also just out. [She 
did Jeff Hawkins before].


Even someone like Ben, if you track his development - he can correct me - is 
using "embodied" more and more - and promoting "virtually embodied AIs."


Unlike most mainstream cog. sci., the embodied version, you'll find, 
really is scientific and has a commitment to scientific experiment and 
testing of its ideas.


It's, as I said, an untrumpeted revolution but if you think about it, it's 
inevitable.  Just try thinking without sensation, emotion and movement. 
Brains in a vat are fine for philosophers but they just haven't worked for 
any kind of AGI, or any of the faculties that AGI needs. (And stay cutting 
edge.)








Re: [singularity] Wrong focus?

2008-01-28 Thread Mike Tintner

X:Of course this is a variation on "the grounding problem" in AI.  But
do you think some sort of **absolute** grounding is relevant to
effective interaction between individual agents (assuming you think
any such ultimate grounding could even perform a function within a
limited system), or might it be that systems interact effectively to
the extent their dynamics are based on **relevant** models, regardless
of even proximate grounding in any functional sense?

Er.. my body couldn't make any sense of this :). Could you be clearer, giving 
examples of the agents/systems and what you mean by absolute/proximate 
grounding?





Re: [singularity] Wrong focus?

2008-01-28 Thread Mike Tintner


Stathis: Are you simply arguing that an embodied AI that can interact with the
real world will find it easier to learn and develop, or are you
arguing that there is a fundamental reason why an AI can't develop in
a purely virtual environment?


The latter. I'm arguing that a disembodied AGI has as much chance of getting 
to know, understand and be intelligent about the world as Tommy - a deaf, 
dumb and blind and generally sense-less kid, that's totally autistic, can't 
play any physical game let alone a mean pinball, and has a seriously 
impaired sense of self, (what's the name for that condition?) - and all 
that is even if the AGI *has* sensors. Think of a disembodied AGI as very 
severely mentally and physically disabled from birth - you wouldn't do that 
to a child, why do it to a computer?  It might be able to spout an 
encyclopaedia, show you a zillion photographs, and calculate up a storm but it 
wouldn't understand, or be able to imagine/ reimagine, anything. As I 
indicated, a proper, formal argument for this needs to be made - and I and 
many others are thinking about it - and it shouldn't be long forthcoming, 
backed with solid scientific evidence. There is already a lot of evidence 
via mirror neurons that you do think with your body, and it just keeps 
mounting. 





Re: [singularity] Wrong focus?..P.S.

2008-01-28 Thread Mike Tintner

Gudrun cont..

Actually I got that wrong - a classic example of the old linguistic biases 
and traps - it's more like:

Cog Sci is the idea that thought is a program.

Embodied Cog Sci is the idea that there is no thought without sensation, 
emotion and movement.


("no mentation without re-presentation"..?  hmm... still an idea in 
progress)


We need to find ways of reconnecting the pieces that language has dissected. 
Hey, you're an artist.. do me a photo or model :). 





Re: [singularity] Wrong focus?

2008-01-28 Thread Mike Tintner

Gudrun: I think this is not about
intelligence, but it is about our mind being inter-dependent (also via
evolution) with senses and body.

Sorry, I've lost a subsequent post in which you went on to say that the very 
terms "mind" and "body" in this context were splitting up something that 
can't be split up. Would you (or anyone else) like to discourse - riff - on 
that? However casually...


The background for me is this:  there is a great, untrumpeted revolution 
going on, which is called Embodied Cognitive Science. See Wiki. That is all 
founded on the idea of the "embodied mind". Cognitive science is based on 
the idea that thought is a program - which can in principle be instantiated 
on any computational machine - and is a science founded on AI/ computers. 
Embodied cog sci is Cog Sci Stage 2 and is based on the idea that thought is 
a brain-and-body affair - and cannot take place without both - and is a 
science founded on robotics.


But the whole terminology of this new science - "embodied mind" - is still 
lopsided, still unduly deferential - and needs to be replaced. So I'm 
interested in any thoughts related to this, however rough.





Re: [singularity] Wrong focus?

2008-01-27 Thread Mike Tintner

Samantha quoting MT: You've been fooled by the puppet. It doesn't work without 
the puppeteer.
Samantha: What's that, élan vital, a "soul", a "consciousness" that is 
independent of the puppet?

It's significant that you make quite the wrong assumption. You too are 
fooled. The puppeteer is the human operator/programmer. V. simple and 
obvious. Computers do not actually EXIST. All that exists here are lumps of 
metal - until human beings come along - and give them life and meaning. 
Without humans they lie there, dead.


All your thinking, I suggest,  is predicated on an obvious falsehood - that 
computers exist in their own right and are not just tools/extensions of 
human beings.  And it is still a very large set of unsolved problems as to 
what will be required to make a robot or computer exist in its own right.


If you are serious either as scientist or technologist, you have to start 
from the fact of those unsolved problems, and not just wishfully assume that 
they have all been magically answered. You can be sure that the answers, 
whatever they are, will transform your current thinking.


Re why only a moving body can think, it is still a large philosophical and 
scientific problem, and I'm just in the middle of working it out ! (But I'm 
increasingly confident it is soluble and soon). The basic biological 
evidence for that assertion though is obvious -  the only self-sufficient 
"in-their-own-right" entities that can actually think are indeed moving 
organisms/ animals. And the classic fact is that when a sea squirt stops 
moving, it immediately devours its own brain.


The other basic fact here is again obvious if you look at the whole and not 
just a part. Above it was a case of: don't just look at the computer, look at 
everything that happens with and around it, like the human operator. Here 
it's a sub-case of that - don't just look at the thoughts/ideas - the print, 
say, on the screen, or the writing on the page - look at how they are 
produced.  And - hey - they don't happen without movement - someone 
typing/writing them.


As Daniel Wolpert says:

"Movement is the only way we have of interacting with the world, whether 
foraging for food or attracting a waiter's attention. Indeed, all 
communication, including speech, sign language, gestures and writing, is 
mediated via the motor system. Taking this viewpoint, the purpose of the 
human brain is to use sensory signals to determine future actions. The goal 
of our lab is to understand the computational principles underlying human 
sensorimotor control"


http://learning.eng.cam.ac.uk/wolpert/

(But that still doesn't solve the problem I referred to which is to explain 
why thinking generally is predicated in its very content on moving bodies - 
and not just produced by moving bodies). 





Re: [singularity] Wrong focus?

2008-01-27 Thread Mike Tintner

Ben quoting MT: Venter has changed everything today - including the paradigms 
that govern both science and AI..

Ben: Let's not overblow things -- please note that Venter's team has not yet 
synthesized an artificial organism.


Here's why I think Venter's so important - to quote a post of mine to an 
evo-psych group [I also recommend here BTW Denis Noble's "The Music of 
Life" - re "genetic keyboard"]:


"Over and above its immediate, technological significance for Artificial 
Life, I see this as the end of an era in science. I think the defining 
scientific paradigm of the last 50 years - the genetic code, or program, and 
with it the idea that we are determined by our genes - is now dead (or in 
its death throes).  [I would define genetic determinism BTW as ALLOWING for, 
and in no way excluding, environmental influences].


I think the replacement for that paradigm is now clear, even if it hasn't 
been exactly defined, and that is - the genetic keyboard. That might not be 
immediately obvious. But if you think about it, what has happened - Craig 
Venter & co creating a new genome - is an example of the genetic keyboard 
playing on itself, i.e. one genome [Craig Venter] has played with another 
genome and will eventually and inevitably play with itself. Clearly it is in 
the nature of the genome to recreate itself - and not just to execute a 
program. (And indeed, had the computational paradigm been properly thought 
through, it would have been noted that it is in the nature of programs - as 
actually produced and existing on computers - that they are NOT stable 
entities but  are normally,  and more or less demand to be,  endlessly 
reprogrammed - by the use, as it happens, of a keyboard).


Craig Venter  has disavowed genetic determinism: "There are two fallacies to 
be avoided," Dr Venter's team write in the journal Science.
"Determinism, the idea that all characteristics of a person are 'hard-wired' 
by the genome; and reductionism, that now the human sequence is completely 
known, it is just a matter of time before our understanding of gene 
functions and interactions will provide a complete causal description of 
human variability."


More significantly for EP, Venter has also disavowed natural selection:

"The key problem is that far from being the simple computer code we once 
thought it was, DNA is fabulously complex. When I last interviewed Venter a 
decade ago, he said our DNA was too complex to be designed by man and 
probably even too complex for natural selection. The problem has worsened: 
"With the publication now of the full genome, it's clearly more complicated 
than ever.


"All our data from the environment and other places is telling us there are 
different components to our personalities. Certainly step by step everything's 
just a point mutation and things change. But I don't think that can explain 
everything. People have this simplistic view of Darwinian evolution as 
random point mutations in the genetic code followed by natural selection. 
No, I don't think that would have got us out of our genome."


http://www.timesonline.co.uk/tol/news/uk/science/article2752196.ece

P.S. I would acknowledge that there is still philosophical/scientific work 
to be done - the case for the changeover of paradigms has not been fully 
made. But it is now inevitable.


P.P.S. The full new paradigm is something like - "the self-driving/ 
self-conducting machine" - it is actually the self, that is, the rest of the 
body and brain, that interactively plays upon, and is played by, the genome, 
(rather than the genome literally playing upon itself). And just as science 
generally has left the self out of its paradigms, so cog sci has left the 
indispensable human programmer/operator out of its computational paradigms.


To bring in the Gudrun discussion, you could say that science is about to 
tell us that what you - your self - do with your body (as distinct from how 
it works)  is not science but art (and, let's not forget, technology).  The 
idea that you are deterministically destined to play only one kind of music 
on your keyboard is quite mad - a keyboard, by definition, like your body 
and brain, offers you an infinite range of possibilities.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=90331155-b349d9


Re: [singularity] Wrong focus?

2008-01-26 Thread Mike Tintner

Tom: A computer is not "disembodied" any more than you are. Silicon, as a
substrate, is fully equivalent to biological neurons in terms of
theoretical problem-solving ability.

You've been fooled by the puppet. It doesn't work without the puppeteer.

And contrary to Eliezer:
"A transhuman is a transhuman mind; anything else is a side issue."

the evidence of billions of years of evolution says that the mind doesn't 
work without the body. No body, no mind. No physics, no psychology, no AI. 
If it can't move, it can't think.  (And I think, thanks in part to mirror 
neurons, that we are now on the verge of finally pinning down why. But 
that's another post).





Re: [singularity] Wrong focus?

2008-01-26 Thread Mike Tintner

Ben,

Thanks for reply. I think though that Samantha may be more representative - 
i.e. most here simply aren't interested in non-computer alternatives. Which 
is fine.


I joined mainly to learn  - about future possibilities generally. It's not 
an area I've thought about much, other than in relation to the future of 
human society.


I can't recall, though, a single superAGI discussion that struck me as other 
than pure fantasy, or gave me anything to conjure with - whereas your brief 
discussion of pathogens immediately gives me something to think about. (I 
guess the immediate response to your spectre is that if they can produce 
more deadly pathogens, they will be able to engineer some form of 
bio-resistance - which evokes the prospect of artificial life arms races - 
although you might get a nuclear-comparable situation, where every state 
would be too scared to use them, for fear of being counter-attacked).


I certainly would like to see discussion of how species generally may be 
artificially altered, (including how brains and therefore intelligence may 
be altered) - and I'm disappointed, more particularly, that Natasha and any 
other transhumanists haven't put forward some half-way reasonable 
possibilities here.  But perhaps Samantha & others would regard such matters 
as offlimits?


It's a pity though because I do think that Venter has changed everything 
today - including the paradigms that govern both science and AI.



Ben: Hi,



Why does discussion never (unless I've missed something - in which case
apologies) focus on the more realistic future "threats"/possibilities -
future artificial species as opposed to future computer simulations?


While I don't agree that AGI is less realistic than artificial
biological species,
I agree the latter are also interesting.

What do you have to say about them, though?  ;-)

One thing that seems clear to me is that engineering artificial pathogens
is an easier problem than engineering artificial antibodies.

The reason biowarfare has failed so far is mostly a lack of good delivery
mechanisms: there are loads of pathogens that will kill people, but no one
has yet figured out how to deliver them effectively ... they die in the sun,
disperse in the wind, drown in the water, whatever

If advanced genetic engineering solves these problems, then what happens?
Are we totally screwed?

Or will we be protected by the same sociopsychological dynamics that have
kept DC from being nuked so far: the intersection of folks with a terrorist
mindset and folks with scientific chops is surprisingly teeny...

Thoughts?







Re: [singularity] The Extropian Creed by Ben

2008-01-26 Thread Mike Tintner

Gudrun: I am an artist who is interested in science, in utopia and seemingly 
impossible projects. I also came across a lot of artists with OC traits. ...
The OCAP, actually the obsessive compulsive 'artificial' project ..
These new OCA entities ... are afraid, and bound to rituals and unwanted 
thoughts (and actions).

Some odd thoughts:

I'd wondered whether you might be interested in the reality - rather than the 
science-fiction - of the connection between OCD and real scientists and 
technologists. Ben's article arguably raises interesting questions about 
their psychology generally and not just that of Extropians, (and has the 
elements, if not the story, for a good movie).


(BTW after his highlighting of one Extropian suicide, up comes an article on 
two suicides closer to AI home - those of Singh & McKinstry (both 
Minsky-related!):


http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

with a note that: "MIT has attracted headlines for its high suicide rate in 
the past.")


The connection between the scientific, systemising personality and autism - 
the ultimate in an obsessive need to control and also in a rejection of 
humanity - has obviously been expounded by Simon Baron-Cohen:


http://news.bbc.co.uk/1/hi/health/4661402.stm

And you don't say, but aren't artists - whatever their philosophical 
position - fundamentally opposed to science's current worldview? Science 
still sees human beings as automata in an automatic process - fundamentally 
totally controlled,  - (and v. few AI-ers disagree) -  while the arts see 
us, in the shape of a million or so dramatic works, as heroes in a heroic 
drama - fundamentally unpredictable and suspenseful.  (Even robots in the 
arts tend to be more or less heroic).








[singularity] Wrong focus?

2008-01-26 Thread Mike Tintner
Correct me  - my impression of discussions here is that this group seems to be 
focussed exclusively on the future development of a superAGI - and that is 
always considered to be a *computer*.

However, there is still no sign of that ever happening - of a disembodied 
computer achieving true intelligence - (or even how such a thing, were it 
possible, could avoid being simply switched off).

What however now seems extremely probable  is that Artificial Life will happen. 
Today the first artificial genome was announced:

http://www.jcvi.org/cms/research/projects/synthetic-bacterial-genome/press-release/

(any significance in the choice of Mycoplasma genitalium?)

Awed hush, please. Soon there will be an artificial cell.

It also seems highly probable now with Darpa that we will have robots freely 
roaming the earth eventually - however long it may take.

(These are the areas where all the serious money is going).

Why does discussion never (unless I've missed something - in which case 
apologies) focus on the more realistic future "threats"/possibilities -   
future artificial species as opposed to future computer simulations? 


Re: [singularity] The Extropian Creed by Ben

2008-01-24 Thread Mike Tintner

Gudrun:The obsessive compulsive 'artificial' project.

Can I ask what your thesis is about?



Re: [singularity] The Extropian Creed by Ben

2008-01-21 Thread Mike Tintner

Talking about suicidal -

http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

& Minsky links both. (What's with MIT's "high suicide rate"?). 





Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Mike Tintner
Oh you tease. All right then... May I herewith extend a formal invitation to 
you to reply to my/subsequent posts, and give us the benefit of your opinions 
and extensive experience in these matters.  Hoping you will reply soon,

RSVP
  Natasha: Mike Tintner wrote:

  Sorry if you've all read this:

  http://www.goertzel.org/benzine/extropians.htm

  But I found it a v. well written sympathetic critique of extropianism & 
highly recommend it. What do people think of its call for a "humanist 
transhumanism"? 

  I found Ben's essay to contain a certain bias which detracts from its 
substance.  If Ben would like to debate key assumptions his essay claims, I am 
available. Otherwise, if anyone is interested in key points which I believe are 
narrowly-focused and/or misleading, I'll post them.

  Natasha 


[singularity] The Extropian Creed by Ben

2008-01-20 Thread Mike Tintner

Sorry if you've all read this:

http://www.goertzel.org/benzine/extropians.htm

But I found it a v. well written sympathetic critique of extropianism & 
highly recommend it. What do people think of its call for a "humanist 
transhumanism"? 





Re: [singularity] Are you guys anti-human?

2007-11-26 Thread Mike Tintner

Ollie: it is true that many transhumanists
don't believe that the present version of homo sapiens is the paragon
of perfection (if that turn of phrase isn't too completely
tautological)

Thanks - a more reasoned rather than literal reply. The article drew a pic 
of PayPal types as workaholic, highly mathematical nerds, (sub-Google 
PhDs), withdrawn from the social world, not getting laid too much - a type 
who might have reasons for disliking humanity.


The obvious question that follows from your reply - & that I may have missed 
being discussed - is what do you and others see as wrong with human beings 
and needing correction? I think we can take it for granted that it would be 
a good idea if we had more powerful bodies and brains - I for example 
immediately think of how nice it would be if we weren't subject to fairly 
continuous muscle fatigue of varying degrees when running and thinking hard. 
But what do people see as wrong or defective - as distinct from weak - with 
human nature? 





Re: [singularity] Why SuperAGI's ..P.S.

2007-10-30 Thread Mike Tintner
Oh well, my apologies!  But my science/ technology distinction stands, if 
tentatively - not because I want to score points but because it's important in 
itself. Your conference was about the technology of collective intelligence. I 
am and was aware that there has been a good deal of AI work here. But a science 
of collective or social intelligence is a very different though related ball 
game. It probably should be "social intelligence" - Surowiecki insists that the 
wisdom of crowds depends to a great extent upon the individuals being 
decentralised and not part of a collective, hierarchical organization.

P.S. I think your Wiki ref was interesting in underlining what is now a general 
principle: "Try and think of something new, and it's almost bound to be in 
Wikipedia already." (But that doesn't mean it's been done properly or fully).

  Ben: Mike, you've got me all wrong, in this particular regard!!

  My practical plan for creating AGI does in fact involve creating a society of 
AGI's, living in online virtual worlds like Second Life and Metaplace ... 

  (Although, these AGI's will be able to share thoughts with each other, in a 
kind of collective memory and learning substrate, which is something that 
humans can't do ... so they'll really be what I've called a Mindplex rather 
than a society 

  http://www.goertzel.org/dynapsyc/2003/mindplex.htm
  )

  However, in principle, I do not agree with you that a society of AGI's is 
necessary for creating AGI's.  Even though my practical plan is in fact to 
create a society of AGI's... 

  Also, collective intelligence has been under study for decades in the systems 
theory world.  In 2001 Francis Heylighen and I ran a conference in Brussels 
called Global Brain 0, which was focused on the notion of an emerging, 
increasingly cohesive global collective intelligence.  I made a blog post on a 
related theme a couple days ago... 

  http://www.singinst.org/blog/2007/10/29/on-becoming-a-neuron/

  -- Ben G



  On Oct 30, 2007 5:00 PM, Mike Tintner < [EMAIL PROTECTED]> wrote:

There is a certain irony, considering how much you guys have agonized 
(perfectly reasonably) about open-sourcing or "collectivising" the building of 
an AGI, that you should instantly dismiss the "collectivising" or distributing 
of the AGI itself (or themselves).  (That Wisdom of Crowds book was only last 
year - I really think this is all still fairly virgin territory).

Well put. (BTW as perspective here, I should point out that what I've raised 
calls for a whole new branch/dimension of social psychology - the study of 
collective intelligence.

  Not new to everyone ;-)

  http://en.wikipedia.org/wiki/Collective_intelligence

--
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&; 


--





This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;


--
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;


--



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=59279476-733df7

Re: [singularity] Why SuperAGI's ..P.S.

2007-10-30 Thread Mike Tintner
There is a certain irony, considering how much you guys have agonized 
(perfectly reasonably) about open-sourcing or "collectivising" the building of 
an AGI, that you should instantly dismiss the "collectivising" or distributing 
of the AGI itself (or themselves).  (That Wisdom of Crowds book was only last 
year - I really think this is all still fairly virgin territory).

Well put. (BTW as perspective here, I should point out that what I've raised 
calls for a whole new branch/dimension of social psychology - the study of 
collective intelligence.

  Not new to everyone ;-)

  http://en.wikipedia.org/wiki/Collective_intelligence

--
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;


--



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=59250202-592e28

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Mike Tintner
Well, partly my mistake and ignorance - and thanks for pointing it out. But not 
entirely, I think. There seems to be an unresolved mix in the article (which is 
v. interesting), and perhaps in all the efforts referred to there, between 
collective intelligence as technology (which covers things as diverse as robot 
swarms and the open-source movement) and the scientific study of collective 
intelligence. In fact I suspect I'm right - and I stand to be corrected again - 
that there isn't a scientific field devoted to it as such - and if not, there 
certainly could and should be. I note that the main theorists referred to are 
nearly all from the last 15 years.

And if this whole area is not newish to you, how come your arguments showed no 
awareness of it? Obviously the "social criticism" of AI has been touched upon 
before -

"Skeptics, especially those critical of artificial intelligence and more 
inclined to believe that risk of bodily harm and bodily action are the basis of 
all unity between people, are more likely to emphasize the capacity of a group 
to take action and withstand harm as one fluid mass mobilization, shrugging off 
harms the way a body shrugs off the loss of a few cells."

P.S. That popular recent work was The Wisdom of Crowds.




Ben: Well put. (BTW as perspective here, I should point out that what I've raised 
calls for a whole new branch/dimension of social psychology - the study of 
collective intelligence.

  Not new to everyone ;-)

  http://en.wikipedia.org/wiki/Collective_intelligence

--

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=59208723-fcd8a0

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Mike Tintner
Charles H:I'm not sure that "Therefore we need a bunch of AGIs  rather than 
one"

is a valid conclusion, but it *is* a defensible one.  I have a suspicion
that a part of the reason for the success of humans as problem solvers
is that not only do we start from a large number of initial positions,
but also we tend to have differing goals, so when one is blocked,
another may not be.  But I see no reason why, intrinsically, the same
characteristics couldn't be given to a single AGI...though the variation
in goals might be difficult.

Well put. (BTW as perspective here, I should point out that what I've raised 
calls for a whole new branch/dimension of social psychology - the study of 
collective intelligence. We really have a v. poor to nonexistent 
understanding of how that works. We have a v. poor picture for example of 
how science (i.e. groups of scientists) builds up its collective pictures/ 
bodies of knowledge. What we are discussing here are some of the possible 
elements of a new science. It would also cover collective animal 
intelligence such as the extraordinary collective activities of termites).


But your valiant defense of a superAGI possibly splitting itself up into 
parts to mimic a society won't work - as you immediately start to realise. 
Any individual agent must establish sets of priorities among its goals - and 
among its various activities (and even "subselves"), because they are all 
competing for its limited time, energies, and resources. Different 
individuals can establish totally different priorities. And each human 
individual does indeed have an (often frustratingly) different POV and 
philosophy to one's own (which is of course always the right one).  So a 
whole set of separate intelligences can be aware of far more factors/ 
evidence, contribute far more different personal experiences, generate far 
more ideas and subdivide far more labour than an individual intelligence.
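
The statistical core of the "wisdom of crowds" claim is easy to demonstrate in 
a few lines, under the assumption - essentially Surowiecki's decentralisation 
condition - that the individuals' errors are independent. A toy simulation:

    import random

    random.seed(0)
    TRUE_VALUE = 100.0                      # e.g. the weight of the ox at the fair

    def guess():
        # each individual is unbiased but noisy; independence is the key assumption
        return TRUE_VALUE + random.gauss(0, 20)

    individuals = [guess() for _ in range(10000)]
    crowd_estimate = sum(individuals) / len(individuals)

    typical_error = sum(abs(g - TRUE_VALUE) for g in individuals) / len(individuals)
    print("typical individual error:", round(typical_error, 1))                     # around 16
    print("crowd error:", round(abs(crowd_estimate - TRUE_VALUE), 2))               # typically well under 1

Averaging thousands of noisy, independent guesses cuts the error by roughly the 
square root of the crowd size; let the guesses become correlated - a 
hierarchical, conformist collective - and the advantage largely disappears.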


(What's the popular science work that came out recently about the 
superiority of collective intelligence?)


Another thought here is that language and maths - which would surely be 
vital for a superAGI - probably only evolved when you had the beginnings of 
truly complex social activities - where you didn't just have, say, group 
hunting (which animals of course do) but hunting with sets of tools and 
maps, pictures and planning, and, similarly, complex construction 
activities - where you needed complex coordination of individuals. Language 
and math, IOW, exist to finely coordinate separate minds.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=59181610-de9996


Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Mike Tintner
Ben:  MT:To be clear: I'm saying - no society and culture, no individual 
intelligence. The individual is part of a complex - & in the human case - VAST 
social web. (How ironic, Ben, that you could be asserting your position while 
totally embedded in the greatest social web ever - the Net. Your whole work 
depends on the Web and speaks to it).
To me that's like saying "How ironic that you can assert the possibility of 
rolling on wheels, when you yourself walk with legs." Human intelligence is 
tied to society and culture.  Not all intelligence must be.

Can't resist a comeback. The wheels don't get to roll, if the legs don't get in 
the car. (Ditto the computer doesn't calculate if the hand doesn't switch it 
on). The one depends on the other & vice versa. The individual depends on the 
society, & vice versa. But it's easy intellectually to fail to see the vital 
connection.

Try & find a single example of any form of intelligence that has ever existed 
in splendid individual isolation. That is so wrong an idea - like perpetual 
motion - & so fundamental to the question of superAGI's. (It's also a 
fascinating philosophical issue).


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=58999594-6509b3

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Mike Tintner
Yes, I thought we disagreed.

To be clear: I'm saying - no society and culture, no individual intelligence. 
The individual is part of a complex - & in the human case - VAST social web. 
(How ironic, Ben, that you could be asserting your position while totally 
embedded in the greatest social web ever - the Net. Your whole work depends on 
the Web and speaks to it).

Tom McCabe expresses another dimension of the "isolated individual" position.  
He can sit down and work out prime nos. from 300-400 with pencil/paper all by 
himself apparently - only it's with a system of maths that took thousands of 
years for our society to develop, and millions if not billions of years for 
human/animal society to initiate/evolve, and a pencil and paper that are also 
the products of millions of years of human society, on a desk and in a room 
that are provided to him and continually supported and heated, lighted etc and 
with a body that is fed and watered by an extremely complex society. But no, 
he, you are truly isolated, individuals. "Get over yourself" guys.

(And of course, all our acts of intelligence, whether we are directly aware of 
it or not, are acts of social communication and exchange. You, Ben, are doing 
AGI because you think it will help as well as sell to society and only able to 
practice with the aid of teams of other people).

And Tom cues me in perfectly with his reference to Evolutionary Psychology. 
That is the perfect example of totally skewed, "isolated individual" thinking. 
Scientific, evolutionary thinking has been parallel to your AI/AGI bias. It 
thought/thinks that a self-interested individual would be selfish and not 
altruistic. Animal and human altruism could only be explained by an appeal to 
the interest of their genes in their self-preservation and -evolution. 
Actually, extreme selfishness is not smart at all, precisely because all of us 
individual animals depend for our survival on our relationships with our 
society -   reciprocity &  fairness of exchange together with cooperation are 
very sensible, rewarding and essential behaviour. And altruism is just as deep 
and fundamental an instinct as egotism - as anyone other than near-autistic 
scientists should be able to see. "No man is an island.")..

POINT 2:  Our equally fundamental disagreement is about the "nature of the 
reality" that any AGI or any human or any animal must deal with. Let me define 
it - since I rather than you am really asserting the opposite position here - 
it isn't so much "chaotic" as "crazy, and mixed up" as opposed to "rational and 
consistent." 

Narrow AI deals with straightforward problems - rational, consistent problems 
that can be solved in rational, consistent ways, even though they may involve 
degrees of uncertainty and demand cycling (algorithmically/systematically) 
through different approaches.

AGI must deal with problematic problems - crazy (i.e. non-rational), mixed-up 
problems that can only be solved in crazy, mixed-up ways, where you are not 
just uncertain but fundamentally confused (and should be so lucky as to have a 
neat algorithm), and have to patch together solutions by "groping", often 
blindly, for ideas.

(The "crazy, (non-rational), mixed up" nature of the world - the fact that 
Richard can be friendly one day, & aggressive the next, & neither you nor he 
know when he will be which, or quite how to deal with him  - - is as deep and 
fundamental an attribute as "chaos"/complexity).

You can only assert the possibility of an essentially rational AGI because, I 
suggest, you are living in a virtual, structured world. The real, 
ill-structured world - along with every single activity humans and animals 
engage in - isn't like that.


  Ben:

MT:No AGI or agent can truly survive and thrive in the real world, if it is 
not similarly part of a collective society and a collective science and 
technology - and that is because the problems we face are so-o-o problematic. 
Correct me, but my impression of all the discussion here is that it assumes 
some variation of the classic science fiction scenario, pace 2001/ The Power 
etc where an individual computer takes power, if not takes off by itself. Ain't 
gonna happen - no isolated individual can truly be intelligent.


  Just to be clear -- I don't agree with this ... I think it's an undue 
projection of the particular nature of human intelligence onto the domain of 
nonhuman minds. 

  A superhuman AI could be in essence a "culture unto itself", not requiring a 
society to maintain a culture as humans do.  

  This certainly doesn't require that said AI be able to predict the weather 
and otherwise get around the chaotic, unpredictable nature of physical 
reality... 

  -- Ben G

--
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;


--

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-29 Thread Mike Tintner
Ben,

Yes, this is the general type of thing I was referring to in calling for a 
mathematical expression of problematic problems. Of course, you're focussing in 
your paper on the impossibility of guaranteeing friendliness. It's the actual 
nature of the problems we - and therefore any superAGI and actually any 
ordinary AGI will have to - face, that I'm concerned with. Part of their 
problematicity is that there are often many more factors than you can possibly 
know about. Another is that the known factors are unstable and even potentially 
contradictory - the person or people who loved you yesterday, may hate you 
today through no action of yours.  Another part of the problematicity is the 
amount of evidence that can be gathered - how much evidence should you gather 
if you're defending OJ/a murderer, or writing an essay on AGI, or want to bet 
on a stockmarket movement? Ideally, an infinite amount. The only limit is 
practicality rather than reason. (Shouldn't be too hard to put all that into a 
formula?!)

A separate point: the EMBEDDEDNESS of intelligence. I went through your paper v 
quickly so I may have missed something on this. Ironically, I had just come to 
a similar idea before I saw your expression. I'm not sure how much you are 
thinking on similar lines to me.

The idea is: here we are talking about "intelligence" as if it were the 
property of an individual (human/animal/AGI). Actually, human intelligence - 
the finest example we know of - is the property of individuals working within a 
society with a v. complex culture (including science - collective knowledge 
about the world - and technology - collective know-how about how to deal with 
the world).  Our individual intelligence is extremely dependent on that of our 
society -  we each stand, pace Newton, on the shoulders of a vast pyramid of 
other people - and also dependent on a vast collection of artefacts and 
machines.

No AGI or agent can truly survive and thrive in the real world, if it is not 
similarly part of a collective society and a collective science and technology 
- and that is because the problems we face are so-o-o problematic. Correct me, 
but my impression of all the discussion here is that it assumes some variation 
of the classic science fiction scenario, pace 2001/ The Power etc where an 
individual computer takes power, if not takes off by itself. Ain't gonna happen 
- no isolated individual can truly be intelligent. 


Ben:

  Please check out an essay I wrote a couple years ago,

  http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf

  which is related to the issues you mention.  As I note there 

  "
  My goal in this essay is to explore some particular aspects of the difficulty 
of 
  creating Friendly AI, which ensue not from the subtleties of AI design but 
rather from the 
  complexity of the notion of Friendliness itself, and the complexity of the 
world in which 
  both humans and AI's are embedded.

  ... 

  ... the basic arguments I present here regarding Friendliness are as 
  follows:  
   
  • Creating accurate formalizations of current human notions of action-based 
  Friendliness, while perhaps possible in the future with very significant 
effort, is 
  unlikely to lead to notions of action-based Friendliness that will be robust 
with 
  respect to future developments in the world and in humanity itself
  • The world appears to be sufficiently complex that it is essentially 
impossible for 
  seriously resource-bounded systems like humans to guarantee that any system's 
  actions are going to have beneficent outcomes.  I.e., guaranteeing (or coming 
  anywhere near to guaranteeing) outcome-based Friendliness is effectively 
  impossible.  And this conclusion holds for basically any highly specific 
property, 
  not just for Friendliness as conventionally defined.  (What is meant by a 
"highly 
  specific property" will be defined below.)  

  "

  I don't conclude that the complexity of the world means AGI is impossible
  though.  I just conclude that it means that creating very powerful AGI's with 
  predictable effects is quite possibly not possible ;-)

  -- Ben G



  On 10/29/07, Mike Tintner < [EMAIL PROTECTED]> wrote:
Check out


http://environment.newscientist.com/article/dn12833-climate-is-too-complex-for-accurate-predictions.html
 

which argues:

"Climate change models, no matter how powerful, can never give a precise 
prediction of how greenhouse gases will warm the Earth, according to a new 
study."

What's that got to do with superAGI's? This: the whole idea of a superAGI 
"taking off" rests on the assumption that the problems we face in life are 
soluble if only we - or superAGI's - have more brainpower.

The reality is that the problems we face are actually infinite or 
"practically endless."  Problems 

Re: [singularity] CONJECTURE OR TRUTH

2007-10-26 Thread Mike Tintner

BillK:I believe the Great Invisible Flying Spaghetti Monster created the
Universe and rules every incident in our lives according to his
unfathomable purposes.

Brilliant. [Except on this board, you never write "I believe" - you just 
state your fantasies as facts. "Future AGI's will have spaghetti emotions & 
spaghetti consciousness & be friendly to spaghetti - I can guarantee it."]


But seriously, this spaghetti monster - presumably it's a string theorist? 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57809699-823495


Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Mike Tintner
Candice: My sentiments exactly...which is why in the first place I said we 
should be achieving Super Intelligence on an individual level

Er, that's interesting - because there doesn't seem to be much interest here in 
the future of human nature as distinct from AGI's. I personally think we'll see 
remarkable leaps forward in general levels of human intelligence and especially 
creativity, long before we see remarkable AGI's. But it's a pity that we don't 
also have some speculation here about future human possibilities - e.g. 
brain-machine interfaces, the potential and future of open-source organizations 
on the net, and all that transhuman stuff... Computers are, and the first AGI's 
almost certainly will be, EXTENSIONS of human beings, not independent entities. 
Humans have always adapted to new machines/ extensions of themselves, and, we 
can be confident, will also adapt in major ways to truly autonomous mobile 
robots walking around the place.  

P.S. Why DON'T we hear more from the transhumanists here? Or did I miss earlier 
conversations?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57224198-65f30a

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Mike Tintner
If you look at what I actually wrote, you'll see that I don't claim 
(natural) evolution has any role in AGI/robotic evolution. My point is that 
you wouldn't dream of speculating so baselessly about the future of natural 
evolution - so why speculate baselessly about AGI evolution?


I should explain that I am not against all speculation, and I am certainly 
excited about the future.


But I would contrast the speculation that has gone on here with that of the 
guy who mainly started it - Kurzweil. His argument for the Singularity is 
grounded in reality - the relentless growth of computing power, which he 
documents. And broadly I buy his argument that that growth in power will 
continue much as he outlines. I don't buy his conclusion about the timing of 
the Singularity, because building an artificial brain with as much power and 
as many PARTS as the human brain or much greater, and building a SYSTEM of 
mind (creating and integrating possibly thousands of cognitive 
departments ), are two different things. Nevertheless he is pointing out 
something real and important even if his conclusions are wrong - and it's 
certainly worth thinking about a Singularity.
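
For what it's worth, the arithmetic behind that documented growth is easy to 
state; a small sketch, where the doubling times are illustrative assumptions 
rather than Kurzweil's exact figures:

    # growth factor after a given number of years, for a given doubling time
    def growth(years, doubling_time_years):
        return 2 ** (years / doubling_time_years)

    print(round(growth(10, 1.5)))   # ~100x per decade at an 18-month doubling
    print(round(growth(10, 2.0)))   # ~32x per decade at a 2-year doubling

Raw capacity compounding like that is exactly the real and important trend 
referred to above; whether a system of mind follows from it is the separate 
question.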


When you and others speculate about the future emotional systems of AGI's 
though - that is not in any way based on any comparable reality. There are 
no machines with functioning emotional systems at the moment on which you 
can base predictions.


And when Ben speculates about it being possible to build a working AGI with 
a team of ten or so programmers, that too is not based on any reality. 
There's no assessment of the nature and the size of the task, and no 
comparison with any actual comparable tasks that have already been achieved. 
It's just tossing figures out in the air.


And when people speculate about the speed of an AGI take-off, that too is 
not based on any real, comparable take-off's of any kind.


You guys are much too intelligent to be engaging in basically pointless 
exercises like that. (Of course, Ben seems to be publicly committed now to 
proving me wrong by the end of next year with some kind of functional AGI, 
but even if he were to succeed - probably the first AI person ever to fulfil 
such a commitment - it still wouldn't make his prediction, as presented, any 
more grounded.)


P.S. Re your preaching friendly, non-aggressive AGI's,  may I recommend a 
brilliant article by Adam Gopnik - mainly on Popper but featuring other 
thinkers too. Here's a link to the Popper part:


http://www.sguez.com/cgi-bin/ceilidh/peacewar/?C392b45cc200A-4506-483-00.htm

But you may find a fuller version elsewhere. The gist of the article is:

"The Law of the Mental Mirror Image. We write what we are not. It is not 
merely that we fail to live up to our best ideas but that our best ideas, 
and the tone that goes with them, tend to be the opposite of our natural 
temperament" --Adam Gopnik on Popper in The New Yorker


It's worth thinking about.






Richard: You could start by noticing that I already pointed out that evolution 
cannot play any possible role.

I rather suspect that the things that you call "speculation" and "fantasy" 
are only seeming that way to you because you have not understood them, 
since, in fact, you have not addressed any of the specifics of those 
proposals ... and when people do not address the specifics, but 
immediately start to slander the whole idea as "fantasy", they usually do 
this because they cannot follow the arguments.


Sorry to put it so bluntly, but I just talked so *very* clearly about why 
evolution cannot play a role, and you ignored every single word of that 
explanation and instead stated, baldly, that evolution was the most 
important aspect of it.  I would not criticise your remarks so much if you 
had not just demonstrated such a clear inability to pay any attention to 
what is going on in this discussion.



Richard Loosemore





Mike Tintner wrote:
Every speculation on this board about the nature of future AGI's has been 
pure fantasy. Even those which try to dress themselves up in some 
semblance of scientific reasoning. All this speculation, for example, 
about the friendliness and emotions of future AGI's has been non-sense - 
and often from surprisingly intelligent people.


Why? Because until we have a machine that even begins to qualify as an 
AGI - that has the LEAST higher adaptivity - until IOW AGI's EXIST- we 
can't begin seriously to predict how they will evolve, let alone whether 
they will "take off." And until we've seen a machine that actually has 
functioning emotions and what purpose they serve, ditto we can't predict 
their future emotions.


So how can you cure yourself if you have this apparently incorrigible 
need to produce speculative fantasies with no scientific basis in reality 
whatsoever?


I suggest : first speculate about the followi

[singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Mike Tintner
Every speculation on this board about the nature of future AGI's has been 
pure fantasy. Even those which try to dress themselves up in some semblance 
of scientific reasoning. All this speculation, for example, about the 
friendliness and emotions of future AGI's has been non-sense - and often 
from surprisingly intelligent people.


Why? Because until we have a machine that even begins to qualify as an AGI - 
that has the LEAST higher adaptivity - until IOW AGI's EXIST- we can't begin 
seriously to predict how they will evolve, let alone whether they will "take 
off." And until we've seen a machine that actually has functioning emotions 
and what purpose they serve, ditto we can't predict their future emotions.


So how can you cure yourself if you have this apparently incorrigible need 
to produce speculative fantasies with no scientific basis in reality 
whatsoever?


I suggest : first speculate about the following:

what will be the next stage of HUMAN evolution? What will be the next 
significant advance in the form of the human species - as significant, say, 
as the advance from apes, or - ok - some earlier form like Neanderthals?


Hey, if you are prepared to speculate about fabulous future AGI's, 
predicting that relatively small evolutionary advance shouldn't be too hard. 
But I suggest that if you do think about future human evolution your mind 
will start clamming up. Why? Because you will have a sense of physical/ 
evolutionary constraints (unlike AGI where people seem to have zero sense of 
technological constraints) - an implicit recognition that any future human 
form will have to evolve from the present form - and to make predictions, 
you will have to explain how. And you will know that anything you say may 
only serve to make an ass of yourself. So any prediction you make will have 
to have SOME basis in reality and not just in science fiction. The same 
should be true here.






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56939651-d701b8


Re: [singularity] AI is almost here (2/2)

2007-08-02 Thread Mike Tintner



BillK:> There are collaborative whiteboard systems available on the net for 
free.

(The paid versions have more enhancements).

Sort of like wikis plus editable diagrams plus comments. Skrbl is one example.

Video-conferencing systems can be similar, but free versions are
usually limited to a few participants.



BillK,


This is a VERY constructive suggestion -  although after a quick try, I'm 
still wrestling with skrbl!


I've often thought that many discussions on these boards are hopelessly 
limited by being purely print. It would be great to be able to easily 
incorporate photos/ vids and especially this whiteboard for 
graphics/drawings plus text.


And it would be great also to have a live purely audio or visual 
seminar/discussion if people want to participate. I think Skype does offer 
that facility already - although I haven't tried it. Certainly Skype video 
is v. easy.


The big problem is making the enormous shift in mindset necessary to start 
using these extra facilities - and also to identify the occasions when 
they're worth using. Some kind of conventions would need to be established.


It's particularly important, I believe, for AI/AGI - I personally am 
convinced that vast numbers of people here are wasting their lives on 
variations of symbolic GOFAI and almost no one seems fully to realise that 
rationality is based on imagination (the capacity to form images), not the 
other way round. Learning to incorporate a whiteboard plus other visuals 
would help people, I believe, to literally see that truth.


These are just my first reactions. But it would be interesting to hear 
further thoughts & suggestions. (This does need tossing around & 
brainstorming).


Also, this discussion is probably much more appropriate to the AGI board. My 
impression is that few people here have more than an extremely vague IMAGE 
in their minds of what they mean by some superAGI - and, in fairness, it may 
well be extremely difficult-to-impossible to form any image.








-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=27896773-e1df2c


Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Mike Tintner

MT: There isn't an AGI system that has shown, in even the most modest way, 
higher adaptivity - the capacity, in any given activity, to find new kinds of 
paths, or take new kinds of steps, to its goals - which are, by definition, 
not derived from its original programming. The capacity, say, to find a new 
kind of path through a maze or forest.


Tom McCabe: Pathfinding programs, to my knowledge, are actually quite advanced 
(due primarily to commercial investment). Open up a copy of Warcraft III or 
any other modern computer game and click to make a character go from one end 
of the map to the other. How does it find a correct route? Pathfinding AIs.
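
For readers who want to see what a game "pathfinding AI" actually amounts to, 
here is a minimal sketch - breadth-first search on a toy grid, not Warcraft's 
actual implementation (commercial engines typically use A* over a navigation 
mesh):

    from collections import deque

    def bfs_path(grid, start, goal):
        """Shortest path on a grid of 0 (free) / 1 (wall); returns a list of cells or None."""
        rows, cols = len(grid), len(grid[0])
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:          # walk back to the start
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                        and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(bfs_path(grid, (0, 0), (2, 0)))   # a route around the walls

Note that this is exactly the "iterative" pathfinding contrasted with creative 
pathfinding in the reply below: the program can only search the map it is 
given.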

I said an AGI must have the capacity to find a "new kind" of path - as 
animals have done throughout evolution. Just finding your way from one end 
of a map to another doesn't qualify.  We don't call people "pathfinders" if 
that's all they do.


But this failure to distinguish between basic adaptivity and higher 
adaptivity - or, if you like, iterative and creative pathfinding - runs 
right through AGI, to my mind.


Tom:By the time the AGI has enough intelligence to say
"Hi!", I'm betting that at least 50% of the work will
already be done.

Er, when your AGI says "Hi" to someone, somehow I don't think the world is 
going to say "Hallelujah, AGI has arrived." If you and others can even think 
for a second that's 50% of the work - no wonder people are so extremely 
casual in estimating AGI's arrival.


P.S. For an example of simple but creative pathfinding, take the UK birds 
who recently decided to switch from their normal dead-reckoning flight path 
for long journeys to following the road highways instead. You've got to be 
able to go off the map to put AGI on the map. (That last sentence is also a 
form of higher adaptivity - if your AGI could say something like that rather 
than "Hi", it would certainly have arrived.)





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=27068450-9d3434


Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Mike Tintner
Here is another famous AGI prediction &, in fairness, Minsky's denial. But 
either way, it's an awful warning about such predictions, which - as 
incorrigible AGI-ers seem incapable of realising - are almost guaranteed to 
be wrong:


> In three to eight years we will have a machine with the general
> intelligence of an average human being ... The machine will begin
> to educate itself with fantastic speed.  In a few months it will be
> at genius level and a few months after that its powers will be
> incalculable ...
> -- Marvin Minsky, LIFE Magazine, November 20, 1970


MM:I was angry when that article came out, because it was filled with
misquotations from an interview.  I'm not sure where the interviewer
got this "quote"; perhaps I said '3 to 8 decades' or I was making a
joke, or I was describing the scenario from D. F. Jones's SF novel
entitled "Colossus." (It became the movie, The Forbin Project.)
Anyway, I sent an angry rebuttal to Life Magazine, but they declined
to publish it.

However, it does seem likely that a modern computer could develop very
rapidly - once it learns the right kinds of things to learn.
(That's the problem we call "credit assignment.")   However, the
earlier versions will all have serious bugs, so we'll surely need to
reprogram them many times before they will work well on their own.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=26869618-8a6995


Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Mike Tintner

AG: The mid-point of the singularity
window could be as close as 2009. A ridiculously pessimistic prediction
would put it around 2012. That would have to assume that some large
external event has caused massive social disruption and the people who
actually work on these algorithms are utter blockheads who can't see
what it can truly do. On the short side, we could literally be hours
away from a hard takeoff (yes, I realize it's almost August of 2007).

Nuts. And ditto all such predictions or, er, fantasies.

Here are AG and others predicting when an AGI will, so to speak, leap tall 
buildings and fly, and AGI hasn't got to square one - can't even crawl, let 
alone stand up or take a first step. There isn't an AGI system that has 
shown, in even the most modest way, higher adaptivity - the capacity, in any 
given activity,  to find new kinds of paths, or take new kinds of steps, to 
its goals - which are, by definition, not derived from its original 
programming. The capacity, say, to find a new kind of path through a maze or 
forest. From that will come the capacity to learn new kinds of activities, 
without preprogramming. (Put that another way - there isn't a single system 
that can transcend its programming).


Make a reasoned prediction about AGI taking that first step or crawl - point 
me to who's going to do it and roughly how and when. And then maybe we can 
talk about when superAGI's will fly. Otherwise it's all mathematical 
masturbation.





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=26850983-802465


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-17 Thread Mike Tintner

Kaj: I'm not sure if I'd agree with a sentiment saying that it is
always /impossible/ to control an agent's interpretations. Obviously,
if you merely create a reasoning system and then use natural language
to feed it goals you're bound to be screwed. But I wonder if one
couldn't take the very networks of the agent that /create/ the
interpretations and tune them so that they work in the intended
fashion.

If you want your agent to have any conceptual system like language - which 
is adaptive and adapted to the real world - then yes, it is impossible - and 
you don't WANT it to be possible.


I started to explain this in my reply to Vladimir - & I'm not sure whether 
anyone has offered this explanation before - I'd welcome comments.


A real-world conceptual system (like language) has to be dynamic and 
evolving in its meanings and senses, in order to a) capture a dynamic and 
evolving world, and b) order our dynamic, evolving knowledge of that 
world.


Concepts exist primarily to capture 1) individuals and groups in the real 
world - from inanimate objects to living organisms - and 2) the movements and 
behaviour of those individuals and groups. Those individuals and groups and 
their behaviour are liable to keep changing, and, even if they don't, the more 
I know of them, the more my generalisations about them are liable to keep 
changing.


"Kaj Sotala" like every other human being keeps changing in physique, 
personality and many other respects. So do "Russians" and "Americans" and 
"houses" and "computers" and every artefact and machine. So does "the 
weather" ... and you get the idea.


So does every kind of behaviour ... "sex," "communication."

And, of course, our knowledge itself of even stable entities keeps 
changing - I & everyone may think Kaj is a bastard, & then we learn he 
contributes billions to charity... & so on.


Conceptual systems like language are in fact evolved to be open-ended not 
closed-ended in meaning and reference. Both AI and linguistic purists who 
want their meanings to be precise, and who complain when meanings change, 
are, intellectually, not living in the real world.


What you are expressing in the above, what Vladimir was expressing - what 
everyone concerned in any way with AGI is experiencing - is what you could 
call the "AGI identity crisis."


You still want a machine that can be controlled, however subtly, and is 
basically predictable. Classical AI is about machines that produce 
controlled, predictable results. (Even if the computations are so complex, 
that the human minders have no or little idea how they will turn out, they 
can still be described as controlled and basically predictable).


The main point of an AGI machine is that it is going to be fundamentally 
surprising and unpredictable. What we really want practically is a machine 
that can - like an intelligent human being - be given a general 
instruction - "order my office," say - and come up with a new, surprising 
interpretation - a new filing system - that is as good as, or better than 
any we have thought of.  That kind of adaptivity depends on having a 
conceptual system which is open-ended, in which "order," for example, can 
continually be interpreted in new ways.


Higher adaptivity - the essential requirement for AGI - is by definition 
surprising and unpredictable.


The inevitable price of that adaptivity is that that machine will be able to 
interpret concepts in new ways that you don't like and turn against you - 
just as every human employee can turn against you. That doesn't stop us 
employing humans - the positive potentials outweigh the negative ones. (Can 
you think of a guaranteed way of controlling human employees' behaviour?)


The "AGI identity crisis" is that everyone currently in AGI - AFAIK - 
including Ben, Pei, Marvin et al - is still caught between the 
"psychological totalitarianism" of classical AI, with its need for total 
control, and the "psychological freedom & democracy" that is necessary for 
true, successful AGI - & that includes an acceptance of the open-ended 
nature of language & conceptual systems.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=22820468-32e245


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-14 Thread Mike Tintner

AG: My contention is that the computer systems we have now are unacceptable,
and that all visible trends strongly indicate that they are getting
_WORSE_ at a breakneck speed. How could this not be the greatest concern
of the list?

The main flaws of current computer systems are...?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=22131659-ae3ca7


Re: Symbol grounding (was [singularity] ESSAY: Why care about artificial intelligence?)

2007-07-12 Thread Mike Tintner


Vlad: > Problem is twofold: you should be able to guarantee properties of this
perceptual process (so it can't be an arbitrary emergent one), and you
should be able to figure out the essence of your own perception of
these concepts.


No you shouldn't be able to guarantee either language or the nature of the 
perceptual process.


The whole point is that they are both MEANT (or evolved)  to be general, 
abstract and open-ended and therefore emergent.


To perceive "Vladimir" my brain must recognize him from any angle - and, if 
I keep seeing him, keep adding new angles, new perspectives. The "Vladimir" 
invariant (or as it's called, "Jennifer Aniston")  cell in my brain - or any 
adaptive brain dealing with the real world - is an ever-growing tree with 
more and more particular roots. Or, to put it another way, if I am an agent 
with language, then the "Vladimir" symbol in my brain must become ever more 
complexly grounded.


Concepts/ symbols/ percepts  have to be general and open-ended and emergent 
to deal with a complex, dynamic, changing and emergent world.


Our definitions and connotations of "human" have to keep changing - to deal 
with new social and cultural realities like these strange transhumanists and 
genetic engineering and cloning etc.


Our definitions and connotations of "Russian" have to keep changing - to 
deal with the fact that they are no longer politically enemies etc.


Our concepts and most common mental images of "Vladimir" have to keep 
changing to deal with the changing physical man.


P.S. An interesting tangential thought occurs. I have talked in the main agi 
group of the brain being a picture tree - an idea loosely related to, 
though also in some ways more complex than, Hawkins' idea of the brain as a 
hierarchical processor. Actually "tree" is a better concept here, I think, 
than either "hierarchy" or "grounding" - because it expresses the truth that 
the brain's multilevel networks of signs (and therefore levels of meaning 
and sense) are continually growing - and its symbolic trees continually 
developing new roots. (And neural networks do look like trees).
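
That "growing tree" picture can be made concrete with a toy data structure (an 
invented sketch, not a model of real neurons or of Hawkins' architecture): a 
concept node that simply keeps accumulating new groundings and sub-concepts as 
new perspectives arrive.

    class Concept:
        """A symbol as an open-ended, growing tree of groundings (toy sketch)."""
        def __init__(self, name):
            self.name = name
            self.groundings = []      # particular percepts: images, episodes, angles...
            self.children = {}        # finer-grained sub-concepts

        def ground(self, percept):
            self.groundings.append(percept)      # the tree grows a new root

        def refine(self, subname):
            return self.children.setdefault(subname, Concept(subname))

    vladimir = Concept("Vladimir")
    vladimir.ground("profile view, 2007")
    vladimir.ground("frontal view, poor light")
    vladimir.refine("Vladimir-the-correspondent").ground("email exchange, July")
    print(vladimir.name, len(vladimir.groundings), list(vladimir.children))

Nothing in the structure is ever closed off: every new encounter just adds 
another grounding or another branch, which is the open-endedness being argued 
for here.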







-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=20691193-d8080c


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-12 Thread Mike Tintner
Paul:It is my understanding that the basic problem in Friendly AI is that it is 
possible for the AI to interpret the command "help humanity" etc wrong, and 
then destroy humanity (what we don't want it to do). The whole problem is to 
find some way to make it more probable to not destroy us all. It is correct 
that a simple sentence can be interpreted to mean something that we don't 
really mean, even though the interpretation is logical for the AI. 

Yes - and essentially this is a replay of the problem that has plagued 
philosophy and linguistics for hundreds if not thousands of years - the dream 
of producing a language with precise meanings - the "perfect language." (Has 
this not been discussed here?) Eco wrote a book about it. I think it's now 
generally recognised that it is a pure fantasy.

I'm not so sure though whether it has been fully recognised that the whole 
function of language and any symbolic system is to be general and abstract and 
NOT pin down meaning or reference precisely. You obviously don't want numbers, 
for example, like "1" to refer to only one particular object. But you don't 
even want apparently particular names, like "Paul Horsmalahti" to refer to one 
particular object at one particular point in time. There are many "Paul 
Horsmalahti's", for every human has a rich, varied and developing personality - 
and, usually, physique.

 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=20469632-016a92

Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-12 Thread Mike Tintner

CW: A problem is that I do not quite grasp the concept of "general
conceptual goals".

Yes, this is interesting, because I suddenly wondered whether any AI systems 
currently can be said to have goals in the true sense.


Goals are general - examples are: "eat/ get food", "drink," "sleep," 
"kill," "build a shelter."


A goal can be variously and open-endedly instantiated/particularised  - so 
an agent can only have a goal of "food" if it is capable of eating various 
kinds of food. A car or any machine that can only use one kind of fuel can't 
have such a goal. Ditto an agent has general goals, if it can drink, sleep, 
kill, build a shelter etc in various ways.
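
The distinction can be put in programming terms with a toy sketch (invented 
names, not a claim about any existing AI system): a fixed-fuel machine 
hard-codes a single satisfier, whereas a general goal is an open set of ways 
of satisfying it that can keep being extended.

    class GeneralGoal:
        """A goal like 'get food': an open-ended set of ways of satisfying it."""
        def __init__(self, name):
            self.name = name
            self.ways = []                 # open-ended list of instantiations

        def add_way(self, way):
            self.ways.append(way)

        def pursue(self, available):
            # take the first known way that the current situation allows
            for way in self.ways:
                if way in available:
                    return way
            return None                    # blocked - a truly adaptive agent would invent a new way here

    food = GeneralGoal("get food")
    for way in ["hunt", "forage", "fish"]:
        food.add_way(way)
    print(food.pursue({"fish", "trade"}))   # 'fish'
    food.add_way("trade")                   # a new kind of path, added later
    print(food.pursue({"trade"}))           # 'trade'

Of course, in this sketch it is still the programmer adding the new way; the 
argument here is precisely that a genuinely adaptive agent would have to add 
it itself.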


All animals have general goals. The open-ended generality of their goals is 
crucial to their being adaptive and able to find new and different ways of 
satisfying those goals - and developing altogether new goals - to their 
finding new kinds of food, drink, ways of resting and attacking or killing, 
and developing new habitats - both reactively when current paths are blocked 
and proactively by way of curious exploration.


What we are interested here surely is in the development of an AGI that has 
"General" intelligence and can pursue general goals and be adaptive.


I'm saying that if you want an agent with general goals and adaptivity, then 
you won't be able to control it deterministically - as you can with AI 
programs. You will be able to constrain it heavily, but it will (and must if 
it is to survive) have the capacity to break or amend any rules you may give 
it, and also develop altogether new ones that may not be to your liking - 
just like all animals and human beings.


If you disagree, and think deterministically controllable, general goals are 
possible, you must give an example.. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=20433705-717c77


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-12 Thread Mike Tintner

Cenny,

If you're saying that you can give an agent general, conceptual goals  and 
control its interpretation of them in every circumstance, please give an 
example of one such goal or set of goals. The entire legal profession and 
all philosophers of language are waiting to hear from you.



- Original Message - 
From: "Cenny Wenner" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, July 12, 2007 3:41 PM
Subject: Re: [singularity] ESSAY: Why care about artificial intelligence?



On 7/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

>> 2."An AI programmed only to help humanity will only help humanity."
>> Really?
>> George Bush along with every other leader is programmed to "help
>> humanity."
>
> George Bush is human. He has plenty of other goals in _addition_ to
> "helping humanity" - if he even has that goal at all...

Comment: You and others seem to be missing the point, which obviously 
needs

spelling out. There is no way of endowing any agent with conceptual goals
that cannot be interpreted in ways opposite to the designer's 
intentions -

that is in the general, abstract nature of language & symbolic systems.

For example, the general, abstract goal of "helping humanity" can
legitimately in particular, concrete situations be interpreted as wiping 
out
the entire human race (bar, say, two) - for the sake of future 
generations.




Any agent's actions may be described by a (possibly indeterministic)
policy or norm. If this scheme adheres to some property and the agent
was constructed through programming, the agent has been programmed to
adhere to that property. The problem of your example is the
interpretation of the ambiguous and vague statement. It is the
behaviour and not its representation in an arbitrary system that is
relevant. If an agent is given a natural language goal which needs to
be interpreted we need to take the interpretation into account - how
should the two goals be weighted? If we may fulfil one but not the
other, what would be the prefered action? How should one deal with
inconsistencies? These are not enigmas of the policy.
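
One standard, if crude, answer to those questions in practice is a 
weighted-sum policy over candidate actions. A minimal sketch, assuming we 
already have numeric scores for how well each action serves each goal - which 
is, of course, exactly the step in dispute for concepts like "help humanity":

    # scores[action][goal]: how well each candidate action serves each goal (assumed given)
    scores = {
        "intervene":   {"help humanity": 0.9, "don't kill": 0.2},
        "stand aside": {"help humanity": 0.3, "don't kill": 1.0},
    }
    weights = {"help humanity": 0.6, "don't kill": 0.4}

    def choose(scores, weights):
        # pick the action with the highest weighted sum over goals
        return max(scores, key=lambda a: sum(weights[g] * s for g, s in scores[a].items()))

    print(choose(scores, weights))   # 'intervene' with these weights; shift the weights and it flips

The formalism resolves nothing by itself: all the real content is hidden in 
where the scores and weights come from.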


And there is no reasonable way to circumvent this. You couldn't, say,
instruct an agent... "help humanity but don't kill any human beings..."
because what if some humans (like, say Bush) are threatening to kill 
vastly
greater numbers of other humans...wouldn't you want the agent to 
intervene?
And if you decided that even so, you would instruct the agent not to 
kill,

it could still as good as kill by rendering humans vegetables while still
alive.

So many people here and everywhere are unaware of the general and
deliberately imprecise nature of language - much stressed by Derrida and
exemplified in the practice of law.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



--






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=19927706-30cc2e


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-12 Thread Mike Tintner


Kaj,

If I look closely at what you write, you are somewhat close to me - but you 
are in fact not saying explicitly and precisely what I am.


I am saying upfront: language and symbol systems are general and abstract 
and open to infinite, particular, concrete interpretations, including the 
opposite of any interpretation you might prefer. It is therefore impossible 
when programming an agent in general language or concepts, to control its 
particular interpretations of those concepts - whether those concepts are 
"help humanity" or "make humans happy" etc.


You don't say this upfront, and you do seem to imply that it might be 
possible to control the agent sometimes, if not at others.


If you basically agree with my statement, then both your exposition and, I'm 
sure, mine can be improved.



On 7/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
Comment: You and others seem to be missing the point, which obviously 
needs

spelling out. There is no way of endowing any agent with conceptual goals
that cannot be interpreted in ways opposite to the designer's 
intentions -

that is in the general, abstract nature of language & symbolic systems.

For example, the general, abstract goal of "helping humanity" can
legitimately in particular, concrete situations be interpreted as wiping 
out
the entire human race (bar, say, two) - for the sake of future 
generations.


And there is no reasonable way to circumvent this. You couldn't, say,
instruct an agent... "help humanity but don't kill any human beings..."
because what if some humans (like, say Bush) are threatening to kill 
vastly
greater numbers of other humans...wouldn't you want the agent to 
intervene?
And if you decided that even so, you would instruct the agent not to 
kill,

it could still as good as kill by rendering humans vegetables while still
alive.

So many people here and everywhere are unaware of the general and
deliberately imprecise nature of language - much stressed by Derrida and
exemplified in the practice of law.


I am confused, now. The sentence from my essay that you quoted was
from a section of it that was *talking about the very same thing as
you are talking about now*. In fact, had you not cut out the rest of
the sentence, it would've been apparent that it was talking exactly
*about* how "helping humanity" is too vague and ill-defined to be
useful:

"" An AI programmed only to help humanity will only help humanity, but
in what way? Were it programmed only to make all humans happy, it
might wirehead us - place us into constant states of pure,
artificially-induced states of orgasmic joy that preclude all other
thought and feeling. While that would be a happy state, many humans
would prefer not to end up in one - but even humans can easily argue
that pure happiness is more important than the satisfaction of desires
(in fact, I have, though I'm unsure of my argument's soundness), so
"forcibly wireheading is a bad thing" is not an obvious conclusion for
a mind."

There are many, many things that we hold valuable, most of which feel
so obvious that we never think about them. An AI would have to be
built to preserve many of them - but it shouldn't preserve them
absolutely, since our values might change over time. Defining the
values in question might also be difficult: producing an exact
definition for any complex, even slightly vague concept often tends to
be next to impossible. We might need to give the AI a somewhat vague
definition and demonstrate by examples what we mean - just as we
humans have learnt them - and then try to make sure that the engine
the AI uses to draw inferences works the same way as ours, so that it
understands the concepts the same way as we do. ""

Isn't this just what you're saying? The /entire section/ was talking
about this very issue.




--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



--






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=19912796-c3a35e


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-12 Thread Mike Tintner
2."An AI programmed only to help humanity will only help humanity." 
Really?
George Bush along with every other leader is programmed to "help 
humanity."


George Bush is human. He has plenty of other goals in _addition_ to
"helping humanity" - if he even has that goal at all...


Comment: You and others seem to be missing the point, which obviously needs 
spelling out. There is no way of endowing any agent with conceptual goals 
that cannot be interpreted in ways opposite to the designer's intentions - 
that is in the general, abstract nature of language & symbolic systems.


For example, the general, abstract goal of "helping humanity" can 
legitimately in particular, concrete situations be interpreted as wiping out 
the entire human race (bar, say, two) - for the sake of future generations.


And there is no reasonable way to circumvent this. You couldn't, say, 
instruct an agent... "help humanity but don't kill any human beings..." 
because what if some humans (like, say Bush) are threatening to kill vastly 
greater numbers of other humans...wouldn't you want the agent to intervene? 
And if you decided that even so, you would instruct the agent not to kill, 
it could still as good as kill by rendering humans vegetables while still 
alive.


So many people here and everywhere are unaware of the general and 
deliberately imprecise nature of language - much stressed by Derrida and 
exemplified in the practice of law. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=19804784-a4887a


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-11 Thread Mike Tintner
1. Black on white is much easier to read - David Ogilvy increased the take 
of a charity ad 50% by that change alone


2."An AI programmed only to help humanity will only help humanity."  Really? 
George Bush along with every other leader is programmed to "help humanity." 





Re: [singularity] What form will superAGI take?

2007-06-17 Thread Mike Tintner
Lucio: Given the ever-distributed nature of processing power, I would 
suspect

that a superAGI would have no physical form, in the sense that it
would be distributed across many processing nodes around the world.
(And those could be computer clusters, single personal computers, and
so on - if you want to stick to physical forms, probably we are
talking about zillions of boxes of many sizes and shapes.) And despite
being "formless" it would be omnipresent, in the sense that it would
be able to access zillions of sensors (and possibly actuators), from
street surveillance cameras to radiotelescopes


That's interesting - a massive extension of Hal, in a sense.

But the immediate problem with that is how could it have a sense of self? 
That's crucial, surely, if you are to distinguish between what "I think" and 
others think - all the opinions that you, either superAGI or human, are 
continually being immersed in. 





Re: [singularity] What form will superAGI take?

2007-06-17 Thread Mike Tintner
Lucio: Given the ever-distributed nature of processing power, I would 
suspect

that a superAGI would have no physical form,


One of the interesting 2nd thoughts this provoked in me is the idea: what 
would it be like if you could wake each day with a new body - a totally 
different body? In principle a superAGI could perhaps adopt many changing 
different physical forms - like the villain in Terminator 2 (although not 
presumably using the same chemistry). 





Re: [singularity] What form will superAGI take?

2007-06-17 Thread Mike Tintner

Joel:A large spherical room with a huge blue fluorescing set of tubes in
the center with jacob ladder effects between them. The tubes are
suspended in the mid point of the sphere and the sphere itself is
lined with regularly spaced fairy lights which serve no obvious
purpose. There's a walkway running towards the tubes, and at the end
of the walkway there is a solitary terminal through which a lone
researcher asks deep ponderous questions. ... obviously this is not 
sensible, but you did ask


Thank you. I did ask - & want to know. Any 2nd thoughts about this or 
questions that follow? 





Re: [singularity] Getting ready for takeoff

2007-06-16 Thread Mike Tintner

http://tech.monstersandcritics.com/news/article_1317541.php/Internet_inventor_honoured_by_the_Queen

According to recent figures offered up by Internet World Stats, there are an 
estimated 1.133 billion people around the world engaging in regular use of 
the Internet. And with more than 110 million different destinations 
available through its virtual pages, the Internet has grown into a 
groundbreaking, and seemingly limitless, communications tool since its very 
first online Web site appeared back in 1991.


[And, to repeat, this is somewhat more important than crashing large objects 
into the earth, fun as that obviously is for some]. 





[singularity] What form will superAGI take?

2007-06-16 Thread Mike Tintner
Perhaps you've been through this - but I'd like to know people's ideas about 
what exact physical form a Singularitarian or near-Singularity AGI will take. And 
I'd like to know people's automatic associations even if they don't have 
thought-through ideas - just what does a superAGI conjure up in your mind, 
regardless of whether you're sure about it or not, or it's sensible?


The obvious alternatives, it seems to me (but please comment), are either, pace 
the movie 2001, a desk-bound supercomputer like Hal, with perhaps 
extended sensors all over the place, even around the world - although that 
supercomputer, I guess, could presumably occupy an ever smaller space as 
miniaturisation improves.


Or some robot - or society of robots - that moves around the world.

Are there any other major categories of alternatives?

The first type of alternative obviously raises the question of whether 
disembodied intelligence is possible. Discussions here seem to me to assume 
that it is, but I may be misreading.






Re: [singularity] Getting ready for takeoff

2007-06-12 Thread Mike Tintner
MT: Er, the point surely is - one billion computer USERS. If true, that is a 
v. big deal.


EL: Why? They're not scientists; not even programmers. Every texting
kid with a smartphone is a computer user. I'm mostly interested in online 
hardware which can be easily 0wn3d for bootstrap purposes.

This is a forum, as I understand it, about the relationship between computers 
(& robots), present & future, and society. The above 
remark exemplifies the narrow-mindedness that characterises a lot of 
discussions here.   Computers like all machines evolve to meet human 
requirements (incl. needs, demands, desires, dreams & capacities). There's 
a symbiotic, interdependent relationship.


If you want to have some idea of how computers will and can evolve, you have 
to have some idea of how human society will and can evolve. And it's 
currently changing at an extraordinary rate, faster than ever before.


Computers change people in dramatic ways. Something as apparently simple as 
Google changes everyone's nervous system. Everyone starts searching and 
millions of religious people become religious "seekers."


A much more computer literate society and world will have a v. different 
relationship to AI -  it's much more likely, for example,  to be impatient 
with, than worried about, AI's rate of progress.


Speculations about future AI MINUS future society (which seems to be the 
rule here) are pointless. 





Re: [singularity] Getting ready for takeoff

2007-06-12 Thread Mike Tintner

  A report released Monday by the market research firm Forrester predicts
  that by 2008, 1 billion personal computers will be in use


Fiddlesticks.


Er, the point surely is - one billion computer USERS. If true, that is a v. 
big deal. 





Re: [singularity] Bootstrapping AI

2007-06-05 Thread Mike Tintner

Eugen: "Drill for oil?  You mean drill into the ground to try and find oil?
You're crazy." - Drillers who Edwin L. Drake tried to enlist to his
project to drill for oil in 1859.
etc.

Nice list. Just one prob. - my guess is that for every such naysaying 
prophecy you could find five from overhopeful inventors prophesying that 
their invention would conquer the world within the next n years. Turing? 
Minsky? Lenat? ... Not trying to be negative - just pointing out that life's 
complicated. 





[singularity] Aggressively scruffy with bemes

2007-05-30 Thread Mike Tintner
[Sorry if already posted - but no ref in archives]


http://entertainment.timesonline.co.uk/tol/arts_and_entertainment/books/book_extracts/article1300783.ece.

How to Live Forever or Die Trying by Bryan Appleyard
Bryan Appleyard explores how science may soon make us able to increase life 
expectancies to well over a hundred, or even a thousand
by Bryan Appleyard 


Bruce Klein founded The Immortality Institute (Imminst) in 2002 as a non-profit 
organisation with the aim of 'conquering the blight of involuntary death'. 
Klein was brought up in the town of Americus, 'a jewel of Georgia', in Bible 
Belt America, the deep south. 'Yeah, I'm a southern redneck!' he jokes. His 
family was not especially religious, though he did observe the Catholicism of 
his mother until the age of eleven when he took a phone call from their priest. 
'I said to him I didn't believe any more. He got kind of upset and I hung up 
the phone. It was some kind of visceral thing.' 


Klein was thirty-one when I met him at Imminst's conference at the Georgia Tech 
Conference Center, Atlanta, in November 2005. The conference turned out to be a 
snapshot of the immortalist front line. It is a movement that is part cult and 
part serious science. But all were united by the fervency of their belief in 
the rightness of the project of extending life and by their vehement rejection 
of deathism and scepticism. The participants saw themselves as visionaries and 
frequently beleaguered pioneers of the only new frontier left to mankind. Klein 
is a groomed, fit-looking man. His wife and 'wonderful friend', Susan 
Fonseca-Klein, co-founder and director of the institute, is round-faced and 
pretty. Together, they have the air not of a threateningly glamorous but of a 
consolingly ideal couple - young, healthy, good-natured, extravagantly 
friendly, ambitious, optimistic, glowing. One could imagine them in an 
advertisement for breakfast cereal. 


Most of their work is involved with running Imminst, though Klein does say he 
manages some property and investments. His degree from the University of 
Georgia is in finance. He had just moved from Atlanta to Bethesda, Maryland. He 
is also president of Bethesda-based Novamente, a small firm devoted to the 
construction and commercialisation of the Novamente AI Engine, an 'artificial 
general intelligence oriented software system', and he wished to be closer to 
that project and its presiding thinker Ben Goertzel. 

Goertzel, who was also at the conference, is aggressively scruffy with tangled, 
heavy metal hair and jeans barely clinging to his hips. As he queued to ask a 
question of one of the speakers, I took him for a bum who had wandered in off 
the empty downtown streets and was preparing myself for an embarrassing 
incident culminating in his ejection from the hall. In fact, he was himself a 
speaker and a maths professor, though whatever normality that implies is 
swiftly detonated by the discovery that his first son is named Zarathustra 
Amadeus and his second Zebulon Ulysses. The more restrained Klein is, in spite 
of his wife's protests, putting off having children until he has made the world 
'a safer place', ideally by banishing death. 

Along with increasing numbers of people in the immortality field, Klein 
believes artificial intelligence may be the best way forward, hence his new 
partnership with Goertzel. There are two possibilities arising from AI. Either 
a super-intelligent computer could master the medical problems of human ageing 
that currently baffle us or, more speculatively, we could back-up our 
personalities by downloading them on to such a machine. 

Imminst has been highly successful. It is primarily web-based - you can find it 
at www.imminst.org - and the quality and responsiveness of its site is 
extremely high. The moment I joined, some months before the conference, I was 
(electronically) welcomed by Klein and invited to host a web chat, which I did 
rather sleepily between one and two in the morning. Atlanta is five hours 
behind my house in Stiffkey, Norfolk. ..cont.






Re: [singularity] Why do you think your AGI design will work?

2007-04-28 Thread Mike Tintner
the specific test example[s] can be generalised - it doesn't have to be 
the only test


I just did it - tho v. crudely with the jigsaw example..

the rearranging of pieces in the jigsaw can be likened to the rearranging of 
events required to tell a story of how something came about, or items to 
pack in a suitcase, or sets of numbers that have to add up to a given total 
in a maths/IQ prob.


but there have to be specific examples - the human mind simply can't 
understand what it means by different kinds of intelligence without them


[what's happening here is what's happening with Ben and everyone else - this 
general reluctance to look for examples - you know why? - it's HARD work - 
that's the only real reason...


thinking about intelligence/ problemsolving is much harder than most kinds 
of generalisation - the reason is you can't just shuttle back and forth from 
generalisation to particularisation as you do in many areas...  if you want 
to think about intelligence you have to make your generalisation, then 
SWITCH ENTIRELY TO THE PARTICULAR - I.E. you test it by doing particular 
kinds of problems, mathematical, linguistic, packing suitcases, telling 
stories, whatever - & observe yourself doing the problem ...  and you 
usually need to do more than one in fact several problems.. which all takes 
time...  and only  THEN can you go back to the general level and modify your 
generalisations...


most people are too impatient to do this.. but it's the only way to make 
progress here]



- Original Message - 
From: "Charles D Hixson" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, April 28, 2007 5:44 PM
Subject: Re: [singularity] Why do you think your AGI design will work?



Mike Tintner wrote:

yes, but that's not precise enough.

you have to have a task example that focusses what is going on 
adaptively... you're not specifying what kinds of essays/ maths etc


what challenge does the problem pose to the solver's existing rules for 
reaching a goal?

how does the solver adapt their rules to solve it?
...
- Original Message - From: "Charles D Hixson" 
<[EMAIL PROTECTED]>

To: 
Sent: Saturday, April 28, 2007 6:23 AM
Subject: Re: [singularity] Why do you think your AGI design will work?



Mike Tintner wrote:

Hi,
 I strongly disagree - there is a need to provide a definition of AGI - 
not necessarily the right or optimal definition, but one that poses 
concrete challenges and focusses the mind - even if it's only a 
starting-point. The reason the Turing Test has been such a successful/ 
popular idea is that it focusses the mind.

...

OK.  A program is an AGI if it can do a high school kid's homework and 
get him good grades for 1 week (during which there aren't any 
pop-quizzes, mid-terms, or other in-school and closed-book exams).


That's not an optimal definition, but if you can handle essays and story 
problems and math and biology as expressed by a teacher, then you've got 
a pretty good AGI.


-
...

...
But the point is, a precise definition is useless.  Turing's test was 
established so that (paraphrase)"If a program could do this, then you 
would have to agree that it was intelligent.", it wasn't intended as a 
practical test that some future program would pass.  If we were to start 
passing laws about the rights and privileges of intelligent programs, then 
a "necessary & sufficient" test would be needed.  To do development work 
it may be more of a handicap than an assist.  (I.e., it would tend to 
focus effort on meeting the definition rather on where the program should 
logically next be developed.)


P.S.:  I meant an arbitrary week.  If it can only handle certain weeks, 
then it is clearly either not that intelligent, or has been poorly 
educated.   (However, I have a rather lower opinion than many of the 
amount of intelligence exhibited by humans, tending more toward a belief 
that they operate largely on reflexes and evolved rather than chosen 
goals.  Consider, e.g., not the number of people who start to believe in 
astrology, but rather the number who continue to believe in it for years.  A simple 
examination of predictions will demonstrate that nothing significant was 
predicted in advance, but only explained afterwards.  [OTOH, it *was* once 
useful for determining when to plant which crops.])




Re: [singularity] Why do you think your AGI design will work?

2007-04-28 Thread Mike Tintner

yes, but that's not precise enough.

you have to have a task example that focusses what is going on adaptively... 
you're not specifying what kinds of essays/ maths etc


what challenge does the problem pose to the solver's existing rules for 
reaching a goal?

how does the solver adapt their rules to solve it?

the example must CRYSTALLISE/ DISSECT what the solver has to do...

here's an example that DOES do more of what I'm talking about - but still may 
only be a stepping stone:

solving a jigsaw problem...

that's a simple, but obvious adaptive problem - and requires obvious 
adaptive action...


to reach the goal (a united picture) you have to put the pieces together 
every which way... (jigsaw-solving grasps one of the essential adaptive 
features of the human/ animal mind - COMPOSITIONALITY - (I may be stretching 
the normal use of the term) - the capacity to put the pieces of a solution 
together in  any order - the capacity to pack a suitcase, arrange a room, 
paint the elements of a scene, tell a story in any order)


you have flexible jigsaw  rules... start with the outside bits / fit 
together obviously related shapes .. etc
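[Illustrative aside, not anyone's actual system: one way to make "flexible
jigsaw rules" concrete is a solver that keeps an adjustable, open-ended list of
heuristics rather than a fixed procedure - it can re-rank the rules as some
start working better, and new rules can be bolted on. The heuristic functions
below are stubs; the point is the re-ranking and rule-adding, not the jigsaw
logic itself:

class FlexibleSolver:
    def __init__(self, heuristics):
        # each heuristic: (name, fn) where fn(state) returns a move or None
        self.heuristics = list(heuristics)
        self.scores = {name: 0 for name, _ in heuristics}

    def step(self, state):
        # try the rules in their current priority order
        for name, fn in self.heuristics:
            move = fn(state)
            if move is not None:
                self.scores[name] += 1      # this rule worked - credit it
                self._rerank()
                return move
        return None  # no rule applies: the cue to invent or import a new rule

    def add_rule(self, name, fn):
        # "adding new steps and rules" - the rule set is open-ended
        self.heuristics.append((name, fn))
        self.scores[name] = 0

    def _rerank(self):
        # rules that have recently succeeded float to the top
        self.heuristics.sort(key=lambda h: -self.scores[h[0]])

The same skeleton would cover the suitcase-packing or storytelling cases: only
the heuristic functions change, not the machinery for reshuffling them.]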


jigsaws are a fairly basic example - because we quickly establish set rules 
for doing them -  a more dramatic example highlighting the rule-breaking, 
rule-adjusting as well as the re-composing necessary to solve a problem, 
would be still better


try & think of some more examples...! --   there ARE solutions to the "beat 
the Turing Test" challenge...


P.S. COMPOSITIONALITY is EXTREMELY important - central to the brain's 
adaptive capacity


Compositionality flows, I think, from having a mobile body in a mobile 
world... if you have a nervous system, that controls a complex set of body 
parts, then you can combine those body parts and their movements in any 
order... you can and will scan a scene in many different patterns... you 
will twitch and twist your body muscles in different orders/ dances... it's 
natural, you'll do it with or without a problem to solve...


and thence you have the essential capacity to rearrange the paths/ routes 
you take to any goal - and find ways round any obstacles...


(Still thinking aloud here)


- Original Message - 
From: "Charles D Hixson" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, April 28, 2007 6:23 AM
Subject: Re: [singularity] Why do you think your AGI design will work?



Mike Tintner wrote:

Hi,
 I strongly disagree - there is a need to provide a definition of AGI - 
not necessarily the right or optimal definition, but one that poses 
concrete challenges and focusses the mind - even if it's only a 
starting-point. The reason the Turing Test has been such a successful/ 
popular idea is that it focusses the mind.

...

OK.  A program is an AGI if it can do a high school kid's homework and get 
him good grades for 1 week (during which there aren't any pop-quizzes, 
mid-terms, or other in-school and closed-book exams).


That's not an optimal definition, but if you can handle essays and story 
problems and math and biology as expressed by a teacher, then you've got a 
pretty good AGI.




Re: [singularity] News bit: Carnegie Mellon unveils Internet-controlled robots anyone can build

2007-04-26 Thread Mike Tintner
Could these robots be connected up to a network of Net computers so as to 
massively extend their mental capabilities?
  - Original Message - 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Thursday, April 26, 2007 12:32 PM
  Subject: [singularity] News bit: Carnegie Mellon unveils Internet-controlled 
robots anyone can build



  Cool stuff indeed ... commentary from those w/ robotics expertise would be 
appreciated...



  http://www.eurekalert.org/pub_releases/2007-04/cmu-cmu042407.php 


  Carnegie Mellon University researchers have developed a new series of robots 
that are simple enough for almost anyone to build with off-the-shelf parts, but 
are sophisticated machines that wirelessly connect to the Internet. 
  The robots can take many forms, from a three-wheeled model with a mounted 
camera to a flower loaded with infrared sensors. They can be easily customized 
and their ability to wirelessly link to the Internet allows users to control 
and monitor their robots' actions from any Internet-connected computer in the 
world. 


  ...






Re: [singularity] Re: Why do you think your AGI design will work?

2007-04-25 Thread Mike Tintner
Ben, Thanks. Good to see developmental dimension. Still would like to see 
strong examples of tests and targets you hope to meet.

One thought occurs though - if you could get an AGI machine to demonstrate a 
series of adaptations, however simple - like forming new words (such as the ape 
combining "water" and "bird" signs to denote "duck"), or new sentence 
structures, or finding new routes round and over furniture etc - you would 
probably convince people that you were at least on your way to true AGI, even 
if the machine were otherwise very imperfect. BUT the only way in such 
situations to prove those adaptations would be by demonstrating that the 
machine had actually changed its rules and reprogrammed itself, no? The 
behavioural adaptations in themselves would not be enough - they could have 
been produced by some tricksy, but not really adaptive, program.
  - Original Message - 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Wednesday, April 25, 2007 6:28 PM
  Subject: Re: [singularity] Re: Why do you think your AGI design will work?



For example, here nearly everyone seems to be talking about plunging in and
creating a sophisticated intellectual mind more or less straight-off, but 
it 
takes the human brain roughly 13-20 years to develop physically and mentally
to where it is able to intellectualise - to handle concepts like "society"
and "development" and "philosophy." 


  Agreed, and I have written a paper on "Computational Developmental Psychology"
  aimed at exploring how this development process may take place in certain 
kinds
  of AI systems... see 

  www.novamente.net/file/WCCI06_Stages.pdf


Re: [singularity] Re: Why do you think your AGI design will work?

2007-04-25 Thread Mike Tintner
Very good, Richard. Agree to great extent. Yes the human mind is a complex, 
interdependent system of subsystems, and you can't chop them off.


[Yes BTW to the "insanity", i.e. literally out-of-the-human-mind, nature of 
sci. psychology. First, no mind - behaviourism. Then, yes there's a mind, 
but only an unconscious mind. Then, 1990's, oh we do have a conscious mind 
too. And still we only study consciousness, as a set of faculties, and not 
Thought - the conscious mind's actual streams of debate - the geology, if 
you like, but not the geography of human thought.].


But what you seem to be missing out is the evolutionary (& developmental) 
standpoint. The human mind evolved. And it also has to develop in stages 
through childhood, which to a limited extent recapitulates evolution.


So you have to understand why the human system had to evolve and has to 
develop in those ways. You can't just attempt to recreate, say, an 
already-developed adult human mind by a super-Manhattan project. We're 
nowhere near ready for that yet.


(An interesting thought BTW here is that adaptivity itself adapts, becomes 
more sophisticated through life - and evolution evolves).


Sure, Ben, AGI does not have to copy the evolution of mind exactly, but 
there are basic principles there of constructing a mind that I think do have 
to be adhered to, just as there were basic principles of flight..


For example, here nearly everyone seems to be talking about plunging in and 
creating a sophisticated intellectual mind more or less straight-off, but it 
takes the human brain roughly 13-20 years to develop physically and mentally 
to where it is able to intellectualise - to handle concepts like "society" 
and "development" and "philosophy." Why? I would argue because those powers 
of abstraction have been grounded in gradually building up a picture tree of 
underlying images and graphics, of great depth, with extraordinary CGI 
powers of manipulating them. An abstract concept, for example,  like 
"society", I'm suggesting, is based on a lot of images in the brain - and 
you have to have them to handle it - as you do all such abstract concepts..



- Original Message - 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, April 25, 2007 4:59 PM
Subject: [singularity] Re: Why do you think your AGI design will work?



Joshua Fox wrote:
Ben has confidently stated that he believes Novamente will work ( 
http://www.kurzweilai.net/meme/frame.html?m=3 
 and others).


AGI builders, what evidence do you have that your design will work?

This is an oft-repeated question, but I'd like to focus on two possible 
bases for saying that an invention will work before it does.
1. A clear, simple, mathematical theory, verified by experiment. The 
experiments can be "pure science" rather than technology tests.

2. Functional tests of component parts or of crude prototypes.

Maybe I am missing something in the articles I have read, but do 
contemporary AGI builders have a verified theory and/or verified 
components and prototypes?


Joshua,

I happen to think your question is a very important one.  I am writing a 
paper on something very close to that question right now, so I want to 
summarize what I have said there.


First of all, I think a lot of the replies to your post went off at a 
tangent:  inventing a test means nothing (no matter how much fun it is) if 
the justification for the test is nonexistent.  It doesn't matter how many 
tests people pull out of thin air, the whole point of your question was 
WHY should we believe this or that test, or WHY should we believe this or 
that definition of intelligence, or WHY should we believe this or that 
design for an AGI is better than any other.


What we need is the BASIS that anyone might have for asserting the 
superiority of one answer over another  except personal judgment.


But:

This 'basis' is completely missing from all of AI research.  AI is just 
one great big free-for-all exploration, based on personal judgements that 
are often kept away from the limelight, to build something that works as 
well as human intelligence.  There are no principled approaches, there are 
only hidden assumptions/preconceptions/guesses, on top of which are 
layered various kinds of formalism that are designed to make it look more 
scientific.  (And if it seems outrageous to say that so many people are 
being so self-deceptive, take a quick look at the history of behaviorism 
in psychology: very similar story, same conclusion).


The above is meant to be a position statement:  I believe that I can 
justify it by means of a long essay, with lots of evidence, but let's just 
take it for granted right now, so I can move on to the next step.


Here is what I think is happening.

1) Everyone is actually borrowing crucial ideas from the design of the 
human cognitive system, including those people who say they are not.


I say this beca

Re: [singularity] Why do you think your AGI design will work?

2007-04-25 Thread Mike Tintner
No. Algorithms can produce deterministic adaptivity - essentially "when you 
make a mistake, correct as specified.."


But only free adaptivity will work. When exactly have you made a mistake? If 
no one answers the door, do you keep ringing, or look round the back of the 
house. You have to be free to adapt or not - to change your ways, or to 
persist longer with your existing ways. It's difficult to know most of the 
time when you have made a mistake - ok your stocks are plummeting in value, 
but does that mean you sell now, or wait a little longer, when they may soar 
back into profit?


How else - other than our being freely, non-deterministically, 
non-algorithmically programmed to adapt - can you explain the massive human 
resistance to adapting and changing our ways in all activities? We can and 
do adapt, but we are also highly resistant to change. That is consistent 
with being freely, not deterministically, adaptive.


Also algorithms by definition severely limit the options open to you, unless 
they say "try any of all options open to you" in which case they cease to be 
algorithms.


And you still haven't answered how algorithms can pre-specify how an 
environment will change in unpredictable ways - that's what they have to do 
to tell you how to adapt successfully. Like squaring the circle.



--- Mike Tintner <[EMAIL PROTECTED]> wrote:

Screw the algorithms. Why not try some nondeterministic programming,  and
let the agent work things out through trial and error? The way we actually
are programmed?


Isn't learning from trial and error a type of algorithm?


-- Matt Mahoney, [EMAIL PROTECTED]
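[Illustrative aside on Matt Mahoney's question above, not a proposal from
either correspondent: a trial-and-error learner can itself be written as a
short algorithm, even though the choices it makes are not fixed in advance.
A minimal epsilon-greedy sketch, with made-up options and payoffs:

import random

def trial_and_error(options, payoff, trials=1000, epsilon=0.1):
    # try options, keep rough value estimates, mostly exploit the best-so-far,
    # sometimes explore something else
    estimates = {o: 0.0 for o in options}
    counts = {o: 0 for o in options}
    for _ in range(trials):
        if random.random() < epsilon:
            choice = random.choice(options)                      # explore
        else:
            choice = max(options, key=lambda o: estimates[o])    # exploit
        reward = payoff(choice)
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return estimates

# hypothetical doorstep dilemma: "ring again" vs "look round the back"
chances = {"ring again": 0.2, "look round back": 0.6}
print(trial_and_error(list(chances), lambda o: random.random() < chances[o]))

Whether this counts as "free" adaptivity or just another algorithm is exactly
the disagreement in the thread; the sketch only shows that learning by trial
and error is itself specifiable as a procedure.]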





Re: [singularity] Why do you think your AGI design will work?

2007-04-25 Thread Mike Tintner
Hi Ben,

Thanks for all replies. Just a quick wrap-up of my main point (& no doubt we 
can/ will re-engage/ redevelop ideas on other threads). I think it's simply that 
AGI must be "flexi-principle". That it can and does, in effect, say: "well, 
those are my assumptions on this activity, but I could be wrong.." rather like 
this man does:

"We don't have any solid **proof** that Novamente will "work" in the sense of 
leading to powerful AGI.

We do have a set of mathematical conjectures that look highly plausible and 
that, if true, would imply that Novamente will work (if properly implemented 
and a bunch of details are gotten right, etc.).   But we have not proved these 
conjectures and are not currently focusing on proving them, as that is a big 
hard job in itself  We have decided to seek proof via practical 
construction and experimentation rather than proof via formal mathematics."

Rather like even the simplest animals do - extensive research, often linked to 
biorobotics,  has now  shown that they all use flexible navigational strategies.

All forms of life are scientists/ technologists.

P.S. My point re language, extremely succinctly, is that the brain processes 
all info. simultaneously on at least 3 levels - as a 'picture tree' - as 
symbols, 'outline' graphics AND detailed images,  supplying & checking on all 3 
levels right now in your brain, even as you are apparently just processing on 
the one level of symbols/words. And that picture tree, I believe, will also be 
essential for AGI. No need to develop this here - but do you also understand 
something like that?

Best


  - Original Message - 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Wednesday, April 25, 2007 3:38 AM
  Subject: Re: [singularity] Why do you think your AGI design will work?





  On 4/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
Well we agree where we disagree.

I'm very confident that AGI can't be achieved but by following crudely 
evolutionary and developmental paths. The broad reason is that brain, body, 
intelligence  and the set, or psychoeconomy of activities of the animal evolve 
in interrelationship with each other. All the activities that animals undertake 
are extremely problematic,  and became ever more complex and problematic as 
they evolved - and require ever more complex physical and mental structures to 
tackle them.


  Yes, that is how things evolved in nature.  That doesn't mean it's the only 
way things can be.

  Airplanes don't fly like birds, etc. etc. 



You seem to be making a more sophisticated version of the GOFAI mistake of 
thinking intelligence could be just symbolic and rational - and you can jump 
straight to the top of evolved intelligence.


  No, I absolutely don't think that intelligence can be "just symbolic" -- and 
I don't think that given plausible computational resources, intelligence can be 
"just rational." 

  "Purely symbolic/rational" versus "animal-like" are not the only ways to 
approach AGI...

   



But I take away from this one personal challenge, which is that it clearly 
needs to be properly explained that a) language rests at the top of a giant 
picture tree of sign systems in the mind  - without the rest of which language 
does not  "make sense" and you "can't see what you are talking about" (& 
there's no choice about that - that's the way the human mind works - and any 
equally successful mind will have to work), and b) language also rests on a 
complex set of physical motor and manipulative systems - and you can't grasp 
the sense of language, if you can't physically grasp the world. Does this last 
area - the multilevelled nature of language - interest you?


  I already understand all those points and have done so for a long time.  They 
are statements about human psychology.  Why do you think that closely humanlike 
intelilgence is the only kind? 

  As it happens my own AGI project does include embodiment (albeit, at the 
moment, simulated embodiment in a 3D sim world) and aims to ground language in 
perceptions and actions.  However, it doesn't aim to do so in a slavishly 
humanlike way, and also has room for more explicit logic-like representations. 

  "There are more approaches to AGI than are dreamt of in your philosophy"  ;-)

  -- Ben G





Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Mike Tintner

Sure and there's a deterministic formula that explains everything.

What's the algorithm that tells me what to do with my investment portfolio 
on the stockmarket right now - Buy, Sell or Hold? What's the algorithm that 
tells you how much time to devote to your work tonight, and how much to your 
kids?


And if there is one or a set, why do you keep changing your mind, and 
oscillating about that and so many other decisions? And what's the algorithm 
that tells the character in Closer how many times to oscillate when offered 
a cigarette..."No... yes... no... fuckit no... Ok... no I'm giving them 
up...  Yes.." Five, six, seven times?


What's the algorithm that can tell any agent how to deal with a situation 
that it has never encountered before, and the algorithm couldn't possibly 
have known about beforehand? Like a new form of investment - in landbank 
funds, say? Or how to deal with a new form of geopolitics like the war on 
terror? Or the millions of new forms that our dynamic social and natural 
environment are continually taking, and were totally unpredictable 
beforehand?


Screw the algorithms. Why not try some nondeterministic programming,  and 
let the agent work things out through trial and error? The way we actually 
are programmed? Is Wei Pang into something loosely like that?



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, April 25, 2007 3:02 AM
Subject: Re: [singularity] Why do you think your AGI design will work?



--- Mike Tintner <[EMAIL PROTECTED]> wrote:


There is a difference between your version: "achieving goals" which can be
done, if I understand you, by algorithms - and my goal-SEEKING, which is
done by all animals, and can't be done by algorithms alone. It involves
finding your way as distinct from just following the way set by programmed
rules.


There is an algorithm.  We just don't know what it is.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Mike Tintner
Well we agree where we disagree.

I'm very confident that AGI can't be achieved but by following crudely 
evolutionary and developmental paths. The broad reason is that brain, body, 
intelligence  and the set, or psychoeconomy of activities of the animal evolve 
in interrelationship with each other. All the activities that animals undertake 
are extremely problematic,  and became ever more complex and problematic as 
they evolved - and require ever more complex physical and mental structures to 
tackle them.

You seem to be making a more sophisticated version of the GOFAI mistake of 
thinking intelligence could be just symbolic and rational - and you can jump 
straight to the top of evolved intelligence. 

A sense of history - of the truth that we are now moving into a new stage of 
civilisation that represents an even more drastic change than the end of 
feudalism with the beginning of the print era - should warn you. Now it's the 
internet era, and the beginning of a multimedia as opposed to a literate 
society. And right through our culture you can see the marks of that change, 
which involve an end of the old splits, the  reuniting of mind and body,  
rationality and imagination, symbols and images, reason and emotion, 
intelligence and creativity, print, photo and video  - recognizing their 
multi-levelled interdependence and rejecting the illusions of their 
independence.  The new age of flight is not the age of AGI on a computer - it 
was symbolised neatly on time by the new age of the autonomous mobile robot, in 
the Darpa race. Embodied intelligence, however primitive. You can't cut 
corners. There are too many of them.

But I take away from this one personal challenge, which is that it clearly 
needs to be properly explained that a) language rests at the top of a giant 
picture tree of sign systems in the mind  - without the rest of which language 
does not  "make sense" and you "can't see what you are talking about" (& 
there's no choice about that - that's the way the human mind works - and any 
equally successful mind will have to work), and b) language also rests on a 
complex set of physical motor and manipulative systems - and you can't grasp 
the sense of language, if you can't physically grasp the world. Does this last 
area - the multilevelled nature of language - interest you?
  - Original Message - 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Wednesday, April 25, 2007 2:18 AM
  Subject: Re: [singularity] Why do you think your AGI design will work?



  You seem to be mixing two things up...

  1) the definition of the goal of "human level AGI"

  2) the right incremental path to get there

  I consider these as rather different, separate issues... 

  In my prior reply to you I was discussing only Point 1, not Point 2

  I don't really accept your distinction between "achieving goals" and "seeking 
goals."
  Even a system that is able to reprogram its own top-level goals, can still be 
  judged according to how effectively it can achieve goals...

  Of course I agree that to achieve powerful AGI a system will need to be able 
to
  formulate lots of its own rules rather than just following explicit 
high-level 
cognitive rules.  (Whether that AGI system is still "following rules" at some 
low level,
  in the manner that humans follow the rules of physics or neurology, is 
  another question.)

  I don't agree that the only viable path to human-level AGI is to recapitulate 
  evolution and work on animal-level intelligence first.  That is **a** viable 
path
  but IMO not the only one.

  -- Ben G


  On 4/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
But there is a difference & I think it's crucial re the goals being set for 
AGI.

There is a difference between your version: "achieving goals" which can be 
done, if I understand you, by algorithms - and my goal-SEEKING, which is done 
by all animals, and can't be done by algorithms alone. It involves finding your 
way as distinct from just following the way set by programmed rules.

As I'm defining AGI, one of the central goals will be to provide a set of 
rules and principles that allow for themselves to be radically changed and 
broken, so that the AGI machine can find its way . Such a set of rules would 
allow birds as they did recently in the UK, to switch from flying magnetically 
north to their ultimate destination (or whatever they did) to flying along the 
central road highways instead (obviously an easier way to fly). Such rules 
would among other things allow our agent, whatever it is, to freely experiment.

Now birds clearly must have such rule-breaking rules - but it strikes me 
that they still present a challenge to modern programmers, no?  (And perhaps 
travel by flight might be a good test activity for AGI because it's

Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Mike Tintner
But there is a difference & I think it's crucial re the goals being set for AGI.

There is a difference between your version: "achieving goals" which can be 
done, if I understand you, by algorithms - and my goal-SEEKING, which is done 
by all animals, and can't be done by algorithms alone. It involves finding your 
way as distinct from just following the way set by programmed rules.

As I'm defining AGI, one of the central goals will be to provide a set of rules 
and principles that allow for themselves to be radically changed and broken, so 
that the AGI machine can find its way . Such a set of rules would allow birds 
as they did recently in the UK, to switch from flying magnetically north to 
their ultimate destination (or whatever they did) to flying along the central 
road highways instead (obviously an easier way to fly). Such rules would among 
other things allow our agent, whatever it is, to freely experiment.

Now birds clearly must have such rule-breaking rules - but it strikes me that 
they still present a challenge to modern programmers, no?  (And perhaps travel 
by flight might be a good test activity for AGI because it's not that 
complicated).

I absolutely agree that the general definition must be accompanied by specific 
examples of the activities the AGI machine will tackle. A sports-playing robot 
or a multiple-maze-running robot were my first attempts.

I disagree with yours, though. Passing human exams of most if not all kinds 
would certainly qualify as a proof of AGI. I just think that's like trying to 
fly at intergalactic speed before you can even move a finger or a foot. 
Language is an embodied skill - the brain can't understand words it can't 
literally make sense of. It's based on whole sets of physical, manipulative and 
navigational skills as well as a highly evolved visual intelligence with awesome 
CGI powers. (Remember - the unconscious mind doesn't think over things in words 
alone, which might seem most efficient, but in cinematic dreams. And so, almost 
certainly, do animal minds.)

I reckon an AGI whose skills were in various ways navigational, like those of 
the earliest animals, would be a far more realistic target.



  - Original Message - 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Tuesday, April 24, 2007 11:58 PM
  Subject: Re: [singularity] Why do you think your AGI design will work?



  Well, in my 1993 book "The Structure of Intelligence" I defined intelligence 
as 

  "The ability to achieve complex goals in complex environments."

  I followed this up with a mathematical definition of complexity grounded in 
  algorithmic information theory (roughly: the complexity of X is the amount of
  pattern immanent in X or emergent between X and other Y's in its environment).

  This was closely related to what Hutter and Legg did last year, in a more 
rigorous 
  paper that gave an algorithmic information theory based definition of 
intelligence.
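[Illustrative aside, not part of Ben's message: the flavour of that
Legg-Hutter definition, sketched from memory (check their paper for the exact
statement), is that an agent's "universal intelligence" is its expected
performance summed over all computable environments, each weighted by its
simplicity. In LaTeX notation, with \pi the agent, E the set of computable
reward-bearing environments, K(\mu) the Kolmogorov complexity of environment
\mu, and V^{\pi}_{\mu} the expected total reward \pi earns in \mu:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

The 2^{-K(\mu)} weight is what makes simple environments count for more,
which is the algorithmic-information-theory ingredient being referred to.]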

  Having put some time into this sort of definitional work, I then moved on to 
more
  interesting things like figuring out how to actually make an intelligent 
software system 
  given feasible computational resources.

  The catch with the above definition is that a truly general intelligence is 
possible
  only w/ infinitely many computational resources.  So, different AGIs may be 
able
  to achieve different sorts of complex goals in different sorts of complex 
environments.
  And if an AGI is sufficiently different from us humans, we may not even be 
able
  to comprehend the complexity of the goals or environments that are most 
relevant 
  to it.

  So, there is a general theory of what AGI is, it's just not very useful.

  To make it pragmatic one has to specify some particular classes of goals and
  environments.  For example

  goal = getting good grades 
  environment = online universities

  Then, to connect this kind of pragmatic definition with the mathematical
  definition, one would have the prove the complexity of the goal (getting good
  grades) and the environment (online universities) based on some relevant 
  computational model.  But the latter seems very tedious and boring work...

  And IMO, all this does not move us very far toward AGI, though it may help
  avoid some conceptual pitfalls that could have been fallen into otherwise... 

  -- Ben G

  On 4/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
Hi,

I strongly disagree - there is a need to provide a definition of AGI - not 
necessarily the right or optimal definition, but one that poses concrete 
challenges and focusses the mind - even if it's only a starting-point. The 
reason the Turing Test has been such a successful/ popular idea is that it 
focusses the mind.

(BTW I immediately noticed your lack of a good definition on going through 
your site and papers, and it immediately raised doubts 

Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Mike Tintner
Hi,

I strongly disagree - there is a need to provide a definition of AGI - not 
necessarily the right or optimal definition, but one that poses concrete 
challenges and focusses the mind - even if it's only a starting-point. The 
reason the Turing Test has been such a successful/ popular idea is that it 
focusses the mind.

(BTW I immediately noticed your lack of a good definition on going through your 
site and papers, and it immediately raised doubts in my mind. In general, the 
more or less focussed your definition/ mission statement, I would argue, the 
more or less seriously people will tend to take you). 

Ironically, I was just trying to take Marvin Minsky to task for this on another 
forum. I suddenly realised that although he has been talking about the problem 
of AGI for decades, he has only waved at it, and not really engaged with it. He 
talks about how having different ways of thinking about a problem, like the 
human mind does, is important for AGI - and that's certainly one central 
problem/ goal - but he doesn't really focus it. 

Here's my first crack at a definition - very crude - offered strictly in 
brainstorming mode - but I think it does focus a couple of AGI challenges at 
least - and fits with some of the stuff you say.

AN AGI MACHINE - a truly adaptive, truly learning machine - is one that will be 
able to:

1) conduct a set of goal-seeking activities

- where it starts with only a rough, incomplete idea of how to reach its goals,

- i.e. knows only some of the steps it must take, & some of the rules that 
govern those steps

- and can find its way to its goals "making it up as it goes along" 

- by finding new ways round more or less unfamiliar obstacles.

To do this it must be able to:

2) Change its steps and rules -

- not just revising them according to predetermined formulae, but

- adding new steps and rules, & even

- creating new rules that break existing ones.

3) learn new related activities


[The key thing in this definition for me is that it focusses on the need for 
AGI to be able to radically change the steps and rules of any activity it 
undertakes.]

EXAMPLE [again a very crude one - first that came to mind]:

An AGI machine would be a SPORTING ROBOT that first could learn to play soccer, 
as we do, by being taught a few basic principles [like "try to score a goal by 
running towards the goal with the ball, or passing it to other team members"] 
and shown a few soccer games.

It would then be able to learn the game as it goes along, by playing. And it 
should be able to find and learn new routes to goal, new passes, new kicks 
(with perhaps new spins and backswings). It should even be able to adapt its 
rules - adding new ones like "you can move back towards your own goal when you 
have the ball, as well as forwards towards the opponent's".

And having learned soccer, it should be able to learn OTHER FIELD/ COURT SPORTS 
in similar fashion - like Gaelic football, hockey, basketball, etc. etc.

[Comment: Perhaps much too extravagant a starting-goal - maybe better to have a 
maze-running robot that can learn to run radically different and surprising 
kinds of mazes - but once objections are considered, more realistic goals can 
be set]


- Original Message - 
  From: Benjamin Goertzel 
  To: singularity@v2.listbox.com 
  Sent: Tuesday, April 24, 2007 9:50 PM
  Subject: Re: [singularity] Why do you think your AGI design will work?



  Hi,

  We don't have any solid **proof** that Novamente will "work" in the sense of 
leading to powerful AGI.

  We do have a set of mathematical conjectures that look highly plausible and 
that, if true, would imply that Novamente will work (if properly implemented 
and a bunch of details are gotten right, etc.).   But we have not proved these 
conjectures and are not currently focusing on proving them, as that is a big 
hard job in itself  We have decided to seek proof via practical 
construction and experimentation rather than proof via formal mathematics. 

  Wright Bros. did not prove their airplane would work before building it.  But 
they were confident based on their intuitive theoretical model of aerodynamics, 
which turned out to be correct.  The case with Novamente is a bit more rigorous 
than this because we have gotten to the point of stating but not proving 
mathematical conjectures that would imply the workability of the system... 

  As for Matt Mahoney's point about "defining AGI" being the bottleneck, I 
really think that is a red herring.  Rigorously defining any natural language 
term is a pain.  You can play for hours with the definition of "cup" versus 
"bowl", or the definition of "flight" versus "leaping" versus "floating in 
space", etc.  Big deal!  

  -- Ben G








  On 4/24/07, Joshua Fox <[EMAIL PROTECTED]> wrote: 
Ben has confidently stated that he believes Novamente will work ( 
http://www.kurzweilai.net/meme/frame.html?m=3 and others). 

AGI builders,