Re: [agi] Instead of an AGI Textbook

2008-03-29 Thread Robert Wensman
Hmm, well, at least using words related to robotics gives a flavour of
embodiment :-).

Anyhow, I still prefer sharing terminology with robotics rather than with
narrow AI. Narrow AI and AGI are perhaps closer to each other, so the risk of
confusion is greater.

/R


2008/3/29, Ben Goertzel [EMAIL PROTECTED]:

  4. In fact, I would suggest that AGI researchers start to distinguish
  themselves from narrow AI by replacing the over-ambiguous concepts from
 AI,
  one by one. For example:
 
  knowledge representation = world model.
  learning = world model creation
  reasoning = world model simulation
  goal = life goal (to indicate that we have the ambition of building
  something really alive)
  If we say something like world model creation, it seems pretty obvious
  that we do not mean anything like just tweaking a few bits in some
 function.

 Yet, those terms are used for quite shallow things in many Good Old
 Fashioned
 robotics architectures ;-)

 ben





Re: [agi] Instead of an AGI Textbook

2008-03-28 Thread Robert Wensman
A few things come to my mind:

1. To what extent are learning and reasoning a subtopic of cognitive
architectures? Are learning and reasoning a plugin to a cognitive
architecture, or is the whole cognitive architecture in fact about learning
and reasoning?

2. I would like a special topic on AGI goal representation. More
specifically, a topic that discusses how a goal specified by a human
designer can be related to the world model and actions that an AGI system
creates. For example, how can a human-specified goal be related to a
knowledge representation that the AGI system is constantly developing?

3. Why do AI/AGI researchers always talk about *knowledge
representation*? It gives such a strong bias towards static or useless
knowledge bases. Why not talk more about *world modelling*? Because
modelling has a more active meaning than representation, it implies that
things such as inference need to be considered. Since the word modelling is
also used to denote the process of creating a model, it further implies that
we need mechanisms for learning. I really think we should consider whether
knowledge representation is a concept borrowed straight from dumb narrow AI,
or whether it really is a key concept for AGI. Sure enough, there will always
be knowledge representation, but the question is whether it is an
important/relevant/sufficient/misleading concept for AGI.

4. In fact, I would suggest that AGI researchers start to distinguish
themselves from narrow AI by replacing the over-ambiguous concepts from AI,
one by one. For example:

knowledge representation = world model
learning = world model creation
reasoning = world model simulation
goal = life goal (to indicate that we have the ambition of building
something really alive)

If we say something like world model creation, it seems pretty obvious
that we do not mean anything like just tweaking a few bits in some function.
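
To make the proposed vocabulary concrete, here is a very rough Python sketch
of how the replacements could line up as an interface. All names are invented
purely for illustration; this is not a proposal for an actual design:

    class WorldModel:
        """The agent's current model of its world ('knowledge representation')."""

        def __init__(self):
            self.objects = {}    # hypothesised objects and their properties
            self.dynamics = []   # hypothesised rules for how the world changes

        def update(self, observation):
            """'Learning' = world model creation: revise the model from observations."""
            raise NotImplementedError   # placeholder; a real system would revise the model here

        def simulate(self, actions, steps):
            """'Reasoning' = world model simulation: run the model forward."""
            state = dict(self.objects)
            for _ in range(steps):
                for rule in self.dynamics:
                    state = rule(state, actions)
            return state

    class LifeGoal:
        """'Goal' = a persistent preference over predicted world states."""

        def score(self, predicted_state):
            raise NotImplementedError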

5. I am also thinking of whether it would be a good idea to have a topic like
methods for quelling combinatorial explosions in AGI world model
processes. That topic could outline basic principles like meta-adaptation
and parallelisation of adaptation (meaning that the AGI system needs to
separate objects in reality that can be studied separately). Like someone
mentioned, such principles might be overly simple to many already in the
field, and therefore not worth mentioning, but if we aim at writing documents
for beginners, we really need to get the basics right. Simple/basic
principles are still interesting, as long as they are not narrow. Maybe Ben
Goertzel could add some more difficult material under such a topic also.
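
To illustrate the parallelisation-of-adaptation point with some made-up
numbers: if objects can be modelled separately, the hypothesis space the
learner has to search shrinks from a product to a sum, which is exactly why
separating objects quells the explosion. A tiny Python illustration (all
numbers invented):

    n_objects = 10
    states_per_object = 20

    # modelling the scene as one undivided whole: every combination is a distinct hypothesis
    joint_hypotheses = states_per_object ** n_objects      # 20^10, about 1.0e13

    # modelling each object separately (only valid if they are roughly independent)
    factored_hypotheses = states_per_object * n_objects    # 200

    print(joint_hypotheses, factored_hypotheses)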

I hope some of these ideas are helpful. Thanks.

/R



2008/3/26, Ben Goertzel [EMAIL PROTECTED]:

 BTW I improved the hierarchical organization of the TOC a bit, to
 remove the impression that it's just a random grab-bag of topics...


 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook

 ben





Re: [agi] reasoning knowledge

2008-02-29 Thread Robert Wensman


 d) you keep repeating the illusion that evolution did NOT achieve the
 airplane and other machines - oh yes, it did - your central illusion here
 is
 that machines are independent species. They're not. They are
 EXTENSIONS  of
 human beings, and don't work without human beings attached. Manifestly
 evolution has taken several stages to perfect tool/machine-using species -
 of whom we are only the latest version - I refer you to my good colleague,
 the tool-using-and-creating Caledonian crow.

 Yes, somehow, we are going to create the first independent machine species
 -
 but there's a big unanswered set of questions as to how .


It can be said that the emergence of human intelligence and human cultures
set off another kind of evolution, technological evolution, on top of the
biological one. That these two forms of evolution can be seen as separate can
be explained as follows:

Biological evolution works through DNA sequences, genes. The survivability
of genes depends on whether they are part of successful biological
lifeforms.

Technological evolution works through sets of ideas, or memes, that grow in
our culture and in the minds of human beings. The survivability of memes
depends on whether they are appealing to human minds. Whether a meme is
appealing or not could depend on a number of factors, such as whether the
meme could help humans to achieve some of their goals, whether it is
self-contradictory, or whether we can understand it, etc. Memes can even
survive outside the brains of humans, stored in books etc.

The reason why technological innovation proceeds with such great strides is,
first, that memes are produced at an incredible rate compared to genes; they
are software-based instead of hardware-based. But more importantly, memes can
be selected based on logical deduction and the consideration of a predicted
future. Thus, the survivability of memes depends on how well we believe they
will help us in the future.

I think it would be more accurate to say that technological meme evolution
*was caused by* biological evolution, rather than being *an extension of it*,
since they are in fact two quite different evolutionary systems, with
different kinds of populations and survival conditions.

I would say that in some sense there is already a machine species, even if
not an independent one. This machine species just has not yet found a way of
staying alive and breeding outside human minds.

Is this a helpful perspective? :-)...

One key issue here is whether we want to consider hardware and software
evolutionary systems, or just hardware-based evolutionary systems. Also, I
admit that maybe I am not using the concept of species in any strict
way.



[agi] AGI Metaphysics and spatial biases.

2008-02-22 Thread Robert Wensman
 out from a
particular object.

Since these metaphysics are based on 3D space, they can easily be modified
to suit 2D space. Maybe it would be possible to build AGI prototypes using
2D space biases to lessen the demand for hardware, and then, when we have
gained more experience from 2D AGI, shift to full 3D space metaphysics.

So, my questions now are:

Has anyone else had similar ideas about what biases/metaphysics should be
encoded into an AGI system? What could be good or bad about them?
Does anyone agree that object isolation could be an important
principle for achieving AGI learning?

Also, some specific questions for Ben Goertzel:
I understand Novamente is based on patternist metaphysics. In what ways is
patternist metaphysics different from, or similar to, the metaphysics
sketched here? As I understand it, the patternist metaphysics is based on
events. Would it be possible/easy to model data-flow dependencies between
objects using the Novamente metaphysics?

Also, I remember once seeing a Novamente demonstration where the AGI system
was learning the concept of object persistence: the fact that hidden objects
remain hidden until the next time they are shown again (I hope this gives a
correct description of what was shown). But in that case I guess that
there must have been some initial concepts encoded already into the AGI
system, for example the concept of how dense objects can move through space.
Using informal words, how would you describe the metaphysics or
biases currently encoded into the Novamente system?

/Robert Wensman



[agi] Primates using tools.

2008-01-30 Thread Robert Wensman
This could perhaps be relevant to understanding human-level intelligence.
One interpretation here is that the brains of primates treat tools as
part of the body, which makes them good at using them:

http://sciencenow.sciencemag.org/cgi/content/full/2008/128/2

This, of course, still leaves the question of how a generally intelligent
system uses its body in the first place, and what special hardware there is
to deal with this problem :-).

Personally I believe that a general intelligence, such as the human mind,
still has some specialized processors to deal with very common situations.

Another thing that I guess could use some special hardware is the ability
to feel empathy and understand other human beings or animals. Understanding
other intelligent beings is so important for humans, yet if done in a fully
general way it seems incredibly expensive and difficult. Also, a human is
in many ways very similar to the intelligent beings it tries to simulate, so
it is my firm belief that a human uses parts of its own cognitive process to
simulate other intelligent beings. I think that a social AGI system needs to
be able to instantiate its own cognitive process in a kind of role-play:
assume that I know this, that I want this, and that I am in this kind of
situation; what would I do? The system can then use this role-playing to
assess others' actions.
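
To sketch what I mean by role-play in code (a toy Python sketch; all names
and values are invented and nothing here refers to any existing system): the
point is simply that the same decision procedure is reused with someone
else's assumed knowledge and goals plugged in.

    def decide(knowledge, goal, situation, candidate_actions):
        """The agent's ordinary action selection: score each candidate action
        by how well its (modelled) outcome satisfies the goal."""
        def outcome(action):
            return knowledge.get((situation, action), "unknown outcome")
        return max(candidate_actions, key=lambda a: goal(outcome(a)))

    def predict_other(other_knowledge, other_goal, situation, candidate_actions):
        """Empathy as role-play: run MY decision procedure, but with the other
        agent's assumed knowledge and goal substituted for my own."""
        return decide(other_knowledge, other_goal, situation, candidate_actions)

    # Tiny usage example (all values made up):
    their_knowledge = {("rain", "take umbrella"): "stays dry",
                       ("rain", "walk"): "gets wet"}
    their_goal = lambda outcome: 1 if outcome == "stays dry" else 0
    print(predict_other(their_knowledge, their_goal, "rain",
                        ["take umbrella", "walk"]))   # -> take umbrella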

The fact that empathy seems to be more strongly connected to biological
heritage than to social influence could indicate that the ability to
feel empathy needs special hardware in our brain. I think I heard of a study
that showed a very strong correlation between the empathic abilities of
identical twins, which should indicate that their social upbringing has less
influence on this particular ability. However, I don't remember the source
of that information.

/Robert Wensman


Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Robert Wensman
1. Brembs and his colleagues reasoned that if fruit flies (Drosophila
melanogaster) *were simply reactive robots entirely determined by their
environment*, in completely featureless rooms they should move completely
randomly.

Yes, but no one has ever argued that a fly is a stateless machine. It
seems like their argument ignores the concept of internal state. If they
went through all this trouble just to prove that the brains of flies have
internal state, it seems they wasted a lot of time on something trivial.
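
Just to spell out the internal-state point with a toy sketch (Python, nothing
to do with real fly neurobiology): given a featureless, constant input, a
memoryless controller can only produce one fixed output or noise, whereas a
controller with even trivial internal state can produce structured,
non-random behaviour from exactly the same input.

    import random

    def stateless_fly(observation):
        # no memory: given a constant, featureless observation, only noise is left
        return random.choice(["left", "right"])

    class StatefulFly:
        def __init__(self):
            self.phase = 0                      # internal state, invisible from outside

        def step(self, observation):
            self.phase += 1
            # a structured turning pattern generated purely internally
            return "left" if (self.phase // 3) % 2 == 0 else "right"

    fly = StatefulFly()
    print([stateless_fly(None) for _ in range(6)])   # unstructured
    print([fly.step(None) for _ in range(6)])        # structured, despite identical input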

I cannot see how the concept of free will has got anything to do with
this.

/R


Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Robert Wensman



 I don't think anyone with knowledge of insect nervous systems would
 argue that they're stateless machines.  Even simple invertebrates such
 as slugs can exhibit classical condition effects which means that at
 least some minimal state is retained.

 To me the idea of free will suggests that a number of possible
 behaviors can be triggered at any moment in time and that the system
 in some way chooses between those possibilities.  The system can
 only move easily from one state to another if its dynamics are perched
 on an edge between pure randomness and determinism.  If any one
 behavior is too strong an attractor then the system overall may become
 dysfunctional.



I don't understand what you mean by randomness. To people who believe in
determinism, there is no true randomness. What you might mean by randomness
is a smooth distribution, or a lack of complexity.

I am curious whether the same scientific method would also conclude that the
following fractal tree has consciousness:
http://commons.wikimedia.org/wiki/Image:Fractal_tree_%28Plate_b_-_3%29.jpg
http://upload.wikimedia.org/wikipedia/commons/4/41/Fractal_tree_(Plate_b_-_2).jpg

Do they mean that a sufficiently complex system obtains free will?

/R


Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
Mike Tintner,

If you really do not think that digital computers can be creative, by
definition, I do not understand why you would want to join a mailing list
of AGI researchers. Computers operate by using software; thus, they need
to be programmed. It just seems to me that you do not understand what the
word program means. Even if you use a computer that does not need to be
loaded with a program, guess what: such a computer could be considered to
have an initial program.

The very determinism of the universe implies that everything runs
according to a program, including your ramblings here about creativity. I
have to ask you a question: do you think the universe and everything in it
runs according to deterministic laws of nature? Do you accept that you are a
part of this deterministic reality? Well, in that case I've got news for you:
you are a program also! As evidence I would present your DNA, a program
encoded and stored in molecular structures.

Have you ever heard of computational equivalence? Do you know what it means?

Also, I feel annoyed that you compare the Novamente architecture with
something that just takes instructions, like do this, do that, then do
this, etc. It seems you need to spend greater effort studying this
architecture, for example by reading The Hidden Pattern.

I feel you are in great need of widening your mind to understand chaotic or
fractal processes. Take a forest, for example: even in all its complexity and
diversity, it is still governed by very simple and basic laws, namely the
laws of nature. By mimicking some of these laws at an appropriate level,
such as the shape level, programmers can create forests that to a very large
extent look like real forests: http://www.speedtree.com/. A generator such
as SpeedTree can generate entire forests of miles and miles of trees, with
no two trees looking the same. Even though the lines of code
producing the trees are pretty simple, the outcome in creativity and
originality is vast.
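
As a toy illustration of the same point (my own few lines of Python, not how
SpeedTree actually works): a trivially simple, fully deterministic rule,
given different seeds, produces an endless variety of distinct trees.

    import random

    def grow(depth, rng):
        """A whole 'tree' from one tiny branching rule."""
        if depth == 0:
            return "leaf"
        return [grow(depth - 1, rng) for _ in range(rng.choice([2, 3]))]

    # a forest of a thousand trees, essentially no two alike, from a handful of lines
    forest = [grow(4, random.Random(seed)) for seed in range(1000)]
    print(forest[0])
    print(forest[1])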

The same thing applies to a human mind. Even though the output of a human
mind is amazingly diverse and creative, its program is still governed by
the basic laws of nature and the DNA program. What AGI designers try to
do is to mimic this process.

The concepts of program and determinism are pretty well established within
the scientific community; please do not try to redefine them as you do. It
just creates a lot of confusion. I think what you really want is the
concept of adaptability, or maybe you could say you want an AGI system that
is *programmed in an indirect way* (meaning that the program instructions
are very far away from what the system actually does). But please do not say
things like we should write AGI systems that are not programmed. It hurts
my ears/eyes.

/Robert Wensman



2008/1/7, Mike Tintner [EMAIL PROTECTED]:

 Well we (Penrose  co) are all headed in roughly the same direction, but
 we're taking different routes.

 If you really want the discussion to continue, I think you have to put out
 something of your own approach here to spontaneous creativity (your
 terms)
 as requested.

 Yes, I still see the mind as following instructions a la briefing, but
 only odd ones, not a whole rigid set of them., a la programs. And the
 instructions are open-ended and non-deterministically open to
 interpretation, just as my briefing/instruction to you - Ben go and get
 me
 something nice for supper - is. Oh, and the instructions that drive us,
 i.e. emotions, are always conflicting, e.g [Ben:] I might like to.. but
 do
 I really want to get that bastard anything for supper? Or have the time
 to,
 when I am on the very verge of creating my stupendous AGI?

 Listen, I can go on and on - the big initial deal is the claim that the
 mind
 isn't -  no successful AGI can be - driven by a program, or thoroughgoing
 SERIES/SET of instructions - if it is to solve even minimal general
 adaptive, let alone hard creative problems. No structured approach will
 work
 for an ill-structured problem.

 You must give some indication of how you think a program CAN be generally
 adaptive/ creative - or, I would argue, squares (programs are so square,
 man) can be circled :).

  Mike,
 
  The short answer is that I don't believe that computer *programs* can
 be
  creative in the hard sense, because they presuppose a line of enquiry,
 a
  predetermined approach to a problem -
  ...
  But I see no reason why computers couldn't be briefed rather than
  programmed, and freely associate across domains rather than working
 along
  predetermined lines.
 
  But the computer that is being briefed is still running some software
  program,
  hence is still programmed -- and its responses are still determined by
  that program (in conjunction w/ the environment, which however it
  perceives
  only thru a digital bit stream)
 
  I don't however believe that purely *digital* computers are capable of
  all
  the literally imaginative powers (as already

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
Mike,

Let me clarify further. What I and other computer scientists mean by
program is probably something like *a formal and non-ambiguous description
of a deterministic system that operates over time*. Thus, if you can
describe something in nature in enough detail, your description is a
program. As another example, if you write a book that describes the human
mind formally in enough detail, that book in itself would become a program.

So when you say that we cannot write a program that is creative on the same
level as humans, you basically state that it would be impossible to describe
the human mind in a detailed enough way. This is certainly bogus, as it
could be done in theory by simply scanning and recording the state and
connections of every neuron in a human mind. Another way to put it is that
your suggestion implies that we could never *understand* the human mind on a
fine enough level, which is pretty upsetting and certainly not
revolutionary.
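
To show what I mean by a detailed description being a program (a deliberately
crude Python sketch with made-up numbers, nowhere near the complexity of a
real brain): once connections and states are recorded, stepping the recorded
description forward is an ordinary deterministic program.

    # recorded connection weights and neuron states (entirely fictional values)
    weights = {("n1", "n2"): 0.7, ("n2", "n3"): -0.4, ("n3", "n1"): 0.9}
    state = {"n1": 1.0, "n2": 0.0, "n3": 0.5}

    def step(state):
        """Advance the recorded network one tick using simple threshold units."""
        new_state = {}
        for neuron in state:
            total = sum(w * state[src]
                        for (src, dst), w in weights.items() if dst == neuron)
            new_state[neuron] = 1.0 if total > 0.5 else 0.0
        return new_state

    for _ in range(3):
        state = step(state)
        print(state)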

What computers have or have not done up until this point is completely
beside the point if we are discussing the definition of program.

Yes, a powerful enough AGI would be revolutionary, but it would still be a
program. What you are suggesting is equivalent to asking a painter to paint
a revolutionary painting without using paint. What should he do, stare
intensely at the canvas until what happens? He could try to cheat, using
dirt or mud to paint, but most people would then just say he invented
another kind of paint, namely dirt paint or mud paint. It is simply
impossible to paint a painting without paint (unless the painting is
intended to look the same as the empty canvas). Painting paintings without
paint is not a radical idea; it is just plain futile or incorrect, depending
on perspective.

Why this topic is frustrating is because you are roughly right in one
aspect. Yes, computers and AI systems up until this point have been
programmed in a much too direct way, where the connection between the
programmer's lines of code and the system's actions has been too close. E.g.
there is a line of code saying *if(handIsHot()) moveHand()*, and the
robot system moves its hand when it becomes hot. But this is what we here
call narrow AI and what we all here try to distance ourselves from. From
what I can tell, Novamente, for example, is miles and miles and miles away
from this kind of programming. In contrast, a system like Novamente studies
input, builds and relates concepts to abstract goals, and later forms
actions using different kinds of subtle methods. But a system like that is
*complex*, and you cannot expect Ben Goertzel to blurt out all this complexity
in an email on this mailing list. You have to study the design in detail if
you are interested in it. But the bottom line is that it is still programming,
any way you choose to look at it (unless you want to use the word
programming in a way that no other person on earth is using it, but in
that case, be prepared to feel alone).
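
To make the contrast concrete, here is a toy Python sketch (entirely made up
by me; it is not Novamente's design or anyone else's) of the difference
between the direct narrow-AI style and an indirect, goal-driven style where
the programmer never writes the behaviour itself:

    # Direct style: the programmer's line of code IS the behaviour.
    def narrow_controller(hand_is_hot):
        if hand_is_hot:
            return "move_hand"
        return "do_nothing"

    # Indirect style: the programmer writes a goal and a (learned) world model;
    # the behaviour is whatever the model predicts will best satisfy the goal.
    class LearnedModel:
        def __init__(self, experience):                # e.g. gathered from interaction
            self.experience = experience

        def predict(self, situation, action):
            return self.experience.get((situation, action), "unknown")

    def goal_driven_controller(model, goal, actions, situation):
        return max(actions, key=lambda a: goal(model.predict(situation, a)))

    model = LearnedModel({("hand hot", "move_hand"): "no damage",
                          ("hand hot", "do_nothing"): "damage"})
    avoid_damage = lambda outcome: 1 if outcome == "no damage" else 0
    print(goal_driven_controller(model, avoid_damage,
                                 ["move_hand", "do_nothing"], "hand hot"))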

You should focus on HOW we could make programs creative, rather than losing
yourself in a strange quest to redefine well-established terminology. It is
completely beside the point.

/Robert Wensman


2008/1/7, Mike Tintner [EMAIL PROTECTED]:

  Robert,

 Look, the basic reality is that computers have NOT yet been creative in
 any significant way, and have NOT yet achieved AGI - general intelligence, -
 or indeed any significant rulebreaking adaptivity; (If you disagree, please
 provide examples. Ben keeps claiming/implying he's solved them or made
 significant advances, but when pressed never provides any indication of
 how).

 These are completely unsolved problems. Major creative problems.

 And I would suggest you have to be prepared for the solutions to be
 revolutionary and groundshaking.

 If you are truly serious about solving these problems, I suggest, you
 should prepared to be hurt - you should be ready to consider truly radical
 ideas - for the ground on which you stand to be questioned - and be
 seriously shaken up. You should WELCOME any and all of your assumptions
 being questioned. Even if, let's say, what I or someone else suggests is in
 the end nutty, drastic ideas are good for you to contemplate at least for a
 while.

 Having said all this, I accept that what I have been saying offends this
 community -  I wasn't trying originally to push it, I got dragged into some
 of that last discussion.by Ben. And I also accept that most of you are not
 interested in going for the revolutionary,  from whatever source. And  I
 shall try to restrict my comments unless someone wishes to engage with me -
 although BTW I am ever more confident of my broad philosophical/
 psychological position - the mind really doesn't work that way.

 I may possibly make one last related post in the not too distant future
 about the nature of problems, and which are/aren't suitable for programs -
 but just ignore it.



  Mike Tinter,

 If you really do not think that digital

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
Mike,

To put my question another way: would you like to understand
intelligence? Understand it to such a degree that you can give a detailed
and non-ambiguous description of how an intelligent system operates over
time? Well, if you do want that, then you want, using standard
terminology, to create an intelligent program.

Why we get upset is because we feel you basically say I don't want to
understand intelligence, or alternatively intelligence can never be clearly
understood. You have to understand how computer scientists use the word
program in order to understand how we perceive your statements. From our
perspective, your position is not revolutionary, just depressing.

/Robert Wensman




2008/1/7, Mike Tintner [EMAIL PROTECTED]:

  Robert,

 Look, the basic reality is that computers have NOT yet been creative in
 any significant way, and have NOT yet achieved AGI - general intelligence, -
 or indeed any significant rulebreaking adaptivity; (If you disagree, please
 provide examples. Ben keeps claiming/implying he's solved them or made
 significant advances, but when pressed never provides any indication of
 how).

 These are completely unsolved problems. Major creative problems.

 And I would suggest you have to be prepared for the solutions to be
 revolutionary and groundshaking.

 If you are truly serious about solving these problems, I suggest, you
 should prepared to be hurt - you should be ready to consider truly radical
 ideas - for the ground on which you stand to be questioned - and be
 seriously shaken up. You should WELCOME any and all of your assumptions
 being questioned. Even if, let's say, what I or someone else suggests is in
 the end nutty, drastic ideas are good for you to contemplate at least for a
 while.

 Having said all this, I accept that what I have been saying offends this
 community -  I wasn't trying originally to push it, I got dragged into some
 of that last discussion.by Ben. And I also accept that most of you are not
 interested in going for the revolutionary,  from whatever source. And  I
 shall try to restrict my comments unless someone wishes to engage with me -
 although BTW I am ever more confident of my broad philosophical/
 psychological position - the mind really doesn't work that way.

 I may possibly make one last related post in the not too distant future
 about the nature of problems, and which are/aren't suitable for programs -
 but just ignore it.



  Mike Tinter,

 If you really do not think that digital computers can be creative by
 definition, I do not understand why you would like to join a mailing list
 with AGI researchers? Computers operate by using software, thus, they need
 to be programmed. It just seems to me that you do not understand what the
 word program means. Even if you use use a computer that do not need to be
 loaded with a program, guess what, such a computer could be considered to
 have an initial program.

 The very determinism of the universe implicates that everything runs
 according to a program, including your ramblings here about creativity. I
 have to ask you a question, do you think the universe and everything in it
 runs according to deterministic laws of nature? Do you accept that you are a
 part of this deterministic reality? Well, in that case Ive got news for you,
 you are a program also! As evidence I would present your DNA, a program
 encoded and stored in molecular structures.

 Have you ever heard of computational equivalence? Do you know what it
 means?

 Also, I feel annoyed that you compare the Novamente architecture with
 something that just takes instructions, like do this, do that, then do
 this etc. It seems you need to spend greater effort in studying this
 architecture, for example by reading The Hidden Pattern.

 I feel you are in great need of widening your mind to understand chaotic
 or fractal processes. Take a forest for example, even in all its complexity
 and diversity, it is still governed by very simple and basic laws namely the
 laws of nature. By mimicking some of these laws at an appropriate level,
 such as shape level, programmers can create forests that to a very large
 extent looks like real forests:   http://www.speedtree.com/. A generator
 such as speedtree could generate entire forests of miles and miles of trees,
 with no single two trees looking the same. Even though the lines of code
 producing the trees are pretty simple, the outcome in creativity and
 originality is vast.

 The same thing applies to a human mind. Even though the output of a human
 mind is amazingly diverse and creative, its program is still goverened by
 the basic laws of nature, and the DNA program. What AGI designers tries to
 do is to is to mimic this process.

 The concepts of program and determinism are pretty well established within
 the scientific community, please do not try to redefine them like you do. It
 just creates a lot of confusion. I think what you really want to use is the
 concept of adaptability, or maybe

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
2008/1/7, David Butler [EMAIL PROTECTED]:

 How would an AGI choose which things to learn first if given enough
 data so that it would have to make a choice?


This is a simple question that demands a complex answer. It is like asking
how can a commercial airliner fly across the Atlantic? Well, in that case
you would have to study aerodynamics, mechanics, physics, thermodynamics,
computer science, electronics, metallurgy and chemistry for several years,
and in the end you would discover that one single person cannot understand
such a complex machine in its entire detail. True enough, one person could
understand all the basic principles of such a system, but explaining them
would hardly suffice as evidence that it would actually work in practice.

If you lived in medieval times, and someone asked you how is it
possible to cross the Atlantic in a flying machine carrying several hundred
passengers?, what would you answer? Even if you had the expert knowledge,
it would be very hard to explain thoroughly, just because the machine is so
complex and you would have to explain every technology from the
beginning. Where would you start? Maybe some person with less insight would
interrupt you after a few sentences and say well, clearly you cannot
present evidence that it will ever work, and make fun of the idea. But how
does insufficient time/space to explain a complex system prove that
something is not possible?

The same goes for AGI, for example when someone asks how can we create a
program that is creative and can choose what to learn? In response to this
it is possible to present a lot of different principles, such as
adaptability, genetic programming, quelling of combinatorial explosions etc.
But will the principles work in practice when put together? Well, at this
stage we simply cannot tell. *So every person just has to make a choice
whether to believe it is possible, or whether to believe it is not possible.*
But just because no AGI researcher can answer the question how can we
create a program that is creative and can choose what to learn? in a few
words, it doesn't mean it is not possible when all these principles come
together. We just have to wait and see.
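
Just so the principles do not stay completely hand-wavy, here is one
hypothetical way a system could choose what to learn first: score candidate
topics by how relevant they seem to its goals and how uncertain it still is
about them, under a resource budget. This Python sketch describes a principle
I find plausible, not any existing system:

    def choose_what_to_learn(candidates, relevance, uncertainty, cost, budget):
        """Pick the affordable topic with the highest expected learning value."""
        affordable = [c for c in candidates if cost[c] <= budget]
        # a topic is worth studying if it matters for the goals AND is still uncertain
        return max(affordable, key=lambda c: relevance[c] * uncertainty[c])

    # made-up example values
    topics = ["object permanence", "door/key relations", "enemy movement"]
    relevance = {"object permanence": 0.9, "door/key relations": 0.7, "enemy movement": 0.8}
    uncertainty = {"object permanence": 0.2, "door/key relations": 0.9, "enemy movement": 0.6}
    cost = {"object permanence": 5, "door/key relations": 3, "enemy movement": 8}

    print(choose_what_to_learn(topics, relevance, uncertainty, cost, budget=6))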

To those who do not believe: please just go away from this mailing list and
do not interfere with the work here. Don't demand proof that it would work,
because when we have such proof, i.e. a finished AGI system, we won't need to
defend our hypotheses anyway.


If two AGI's (again-same
 hardware, learning programs and controlled environment) were given
 the same data would they make different choices?


Is a deterministic system deterministic? I do not understand what you are
getting at. Why this question? I think Benjamin answered this question
pretty thoroughly already.

/Robert Wensman


Re: [agi] An AGI Test/Prize

2007-10-20 Thread Robert Wensman
Regarding testing grounds for AGI: personally, I feel that ordinary computer
games could provide an excellent proving ground for the early stages of AGI,
or even better ones if they are specially constructed. Computer games are
usually designed specifically to encourage the player towards creativity and
exploration. Take a simple platform game, for example: at every new stage,
new graphics and monsters are introduced, and by and large the player
undergoes a continuous self-training that lasts throughout the whole game.
Game developers carefully distribute rewards and challenges to make this
learning process as smooth as possible.

But I would also like to say that any proving ground for the first
stages of AGI could be misused if AGI designers bring specialized code into
their system. So if there is to be a competition for first-generation AGI,
there would have to be some referee that evaluates how much domain-specific
knowledge has been encoded into any given system.

For the late development stages of AGI, where we basically have virtual
human minds, we could use problems so hard that specialized code could
not help the AGI system anymore. But I guess that at that point we will have
basically already solved the problem of AGI, and competitions where AGI
systems compete in writing essays on some subject could only be used to
polish some already outlined solution to AGI.

I am a fan of Novamente, but for example when I watched the movie where they
trained an AGI dog, I was left with the question of what parts of its
cognition were specialization. For example, the human teacher used natural
language to talk to the dog. Did the dog understand any of it, and in that
case, was there any special language module involved? Also, training a dog
is quite open-ended, and it is difficult to assess what counts as progress.
This shows just how difficult it is to demonstrate AGI. Any demonstration of
AGI would have to be accompanied by a list of which cognitive aspects are
coded and which are learnt. Only then can you understand whether it is
impressive or not.

Also, because we need to have firm rules about what can be pre-programmed
and what needs to be learnt, it is easier if we use some world with pretty
simple mechanics. What I basically would like to see is an AGI learning to
play a certain computer game, starting by learning the fundamentals, and
then playing it to the end. Take an old videogame classic like The Legend of
Zelda: http://www.zelda.com/universe/game/zelda/. I know a lot of you would
say that this is a far too simplistic world for training an AGI, but not if
you prohibit ANY pre-programmed knowledge. You only allow the AGI system to
start with a proto-knowledge representation, and basically hard-wire the
in-game rewards and punishments to the goal of the AGI. The AGI system would
then have to learn basic concepts such as:

objects moving around on the screen
which graphics correspond to yourself
walls and where you can go
keys that open doors
the concept of coming to a new screen when walking off the edge of one
how screens relate to each other
teleportation (the flute, for anyone who remembers)

If the AGI system then can learn to play the game to the end and slay
Ganon based only on proto-knowledge, then maybe we have something interesting
going on. Such an AGI could maybe be compared to a rodent running in a maze,
even if the motor and vision systems are more complicated. Then we are
ready to increase the complexity of the computer game, adding communication
with other characters, more complex concepts and puzzles, more dimensions,
more motor skills etc.
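
To sketch what such a referee-friendly setup could look like (hypothetical
Python, with a fake game standing in for a real emulator): the agent gets raw
pixels and a scalar reward hard-wired to its goal, and no domain concepts
such as keys, doors or Ganon; everything else must be learnt.

    import random

    class FakeGame:
        """Stands in for a real game/emulator; returns dummy data."""
        def press(self, command): pass
        def screen(self): return [[0] * 8 for _ in range(8)]   # raw "pixels" only
        def score_delta(self): return 0
        def damage_taken(self): return 0

    class GameTestHarness:
        def __init__(self, game):
            self.game = game

        def step(self, motor_command):
            self.game.press(motor_command)       # the agent's only actuators
            pixels = self.game.screen()          # the agent's only sensors
            # the in-game rewards/punishments are hard-wired to the AGI's goal signal
            reward = self.game.score_delta() - self.game.damage_taken()
            return pixels, reward                # no symbols, no domain concepts

    harness = GameTestHarness(FakeGame())
    for _ in range(10):
        pixels, reward = harness.step(random.choice(["up", "down", "left", "right", "A"]))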

Basically, I would like to see Novamente and similar AGI systems play some
goal-oriented computer game, since AGI in itself needs to be goal-oriented.

/R



2007/10/20, Benjamin Goertzel [EMAIL PROTECTED]:



 
  I largely agree. It's worth pointing out that Carnot published
  Reflections on
  the Motive Power of Fire and established the science of thermodynamics
  more
  than a century after the first working steam engines were built.
 
  That said, I opine that an intuitive grasp of some of the important
  elements
  in what will ultimately become the science of intelligence is likely to
  be
  very useful to those inventing AGI.
 


 Yeah, most certainly  However, an intuitive grasp -- and even a
 well-fleshed-out
 qualitative theory supplemented by heuristic back-of-the-envelope
 calculations
 and prototype results -- is very different from a defensible, rigorous
 theory that
 can stand up to the assaults of intelligent detractors

 I didn't start seriously trying to design  implement AGI until I felt I
 had a solid
 intuitive grasp of all related issues.  But I did make a conscious choice
 to devote
 more effort to utilizing my intuitive grasp to try to design and create
 AGI,
 rather than to creating better general AI theories  Both are worthy
 pursuits,
 and both are difficult.  I actually enjoy theory better.  But my sense is
 that the
 heyday of AGI 

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread Robert Wensman
 scientific facts that we can verify are correct. These
scientific facts can then be used for our production, but we never connect
the AGI system to the production facilities directly. If a production
facility needs intelligence, we choose a separate, dumber AGI system that is
just suited to its task of running the factory. There are a number of safety
measures like this that could greatly improve the safety of AGI usage. I
believe we could make it quite difficult for an AGI system to obtain power
by using the age-old idea of divide and conquer.

Also, history shows that intelligence is no guarantee of power. The
Russian revolution and the genocide in Cambodia illustrate effectively how
intelligent people were slaughtered by apparently less intelligent people,
and later how they were controlled to the extreme for decades. Most
communist dictatorships ended because of instability caused by poverty, not
because the control structure itself failed. This just reveals something raw
and basic about existence on earth that I think many AGI enthusiasts and
futurists want to deny: what good are wits when you are looking down the
barrel of a gun?

/Robert Wensman


Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread Robert Wensman
(off topic, but there are something relevant for AGI)

My fears about economic libertarianism could be illustrated with a fish
pond analogy. If there is a small pond with a large number of small fish of
some predatory species, after a while they will cannibalize each other
until in the end there remains just one very, very fat fish. The instability
occurs because a fish that has already managed to eat a peer becomes slightly
larger than the rest of the fish, and therefore is in a better position to
continue eating more fish, so its progress can accelerate. Maybe if the pond
is big enough, a handful of very big fish would remain.
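
A toy simulation of the analogy (deliberately crude Python; it proves nothing
about real economies, it only shows the positive-feedback step where size
begets size):

    import random

    fish = [1.0] * 100                      # a pond of 100 equally small fish
    while len(fish) > 1:
        a, b = random.sample(range(len(fish)), 2)
        eater, eaten = (a, b) if fish[a] >= fish[b] else (b, a)
        fish[eater] += fish[eaten]          # the larger fish absorbs the smaller one
        del fish[eaten]                     # and from now on wins even more encounters
    print(len(fish), fish[0])               # one very, very fat fish remains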

This is of course just an illustration and by no means a proof that the same
thing would occur in a laissez-faire/libertarian economy. Libertarians
commonly put the blame for monopolies on government involvement, and I guess
some would object that I unfairly compare fish that eat each other with a
non-violent economy. But let's just say I do not share their relaxed attitude
towards the potential threat of monopoly, and a bigger fish eating a smaller
fish does have some similarity to a bigger company acquiring a smaller one.

First of all, the consequence of monopoly is so serious that even if the
chance is very slight, there is a strong incentive to try to prevent it from
ever happening. But there are also many details to suggest that a
laissez-faire economy would collapse into monopoly/oligopoly. Effects of
synergy and mass-production benefits would be one strong reason why a
completely free market would benefit those companies that are already large,
which could make them grow even larger.

*Especially when considering AGI and intelligence enhancement, I believe a
libertarian market could be even more unstable. In such a setting, the rich
could literally invest in more intelligence, which would make them even more
rich, creating a positive economic feedback loop: a dangerous accelerating
scenario where the intelligence explosion could co-occur with the rise of a
world monopoly. We could call it an AGI-induced monopoly explosion. Unless
democracy could challenge such a libertarian market, only a few oligarchs
might be in the position to decide the fate of mankind, if they could control
their AGI that is. Although it is just one possible scenario.*

A documentary I saw claimed that Russia was converted to something very
close to a laissez-faire market in the years after the collapse of the Soviet
Union. However, I don't have any specific details about it, such as
exactly how free the market of that period was. But apparently it caused
chaos and gave rise to a brutal economy with oligarchs controlling
society. [
http://en.wikipedia.org/wiki/The_Trap_(television_documentary_series)].
Studying what happened in Russia after the fall of communism could give some
insight on the topic.

/R


2007/10/8, Bob Mottram [EMAIL PROTECTED]:

 Economic libertarianism would be nice if it were to occur.  However,
 in practice companies and governments put in place all sorts of
 anti-competitive structures to lock people into certain modes of
 economic activity.  I think economic activity in general is heavily
 influenced by cognitive biases of various kinds.


 On 06/10/2007, BillK  [EMAIL PROTECTED] wrote:
  On 10/6/07, a wrote:
  A free market is just a nice intellectual theory that is of no use in
  the real world.




[agi] A problem with computer science?

2007-09-28 Thread Robert Wensman
, and dares to let in a little bit of the psychological
vagueness in their paper writing jargon. By that I do not mean to encourage
any kind of Freud-like incoherent crackpot theories, but just the kind of
vagueness that is associated with any kind of complex engineering, like
this system seems to be better than that system, or it seems this design
could benefit a certain capability etc. Maybe an increased focus on AGI
would encourage such a development.

/Robert Wensman



These are not clearly separable things.  One of the reasons many
 people do the system synthesis and balanced approximations so badly
 is because they tend to use minor variations of the same function
 representations they would use when playing with those functions in
 isolation.  The assumption that a particular set of functions are
 only expressible as a particular narrow form can frequently make it
 impossible to synthesize a useful system because the selected form
 imposes limits and tradeoffs specific to its form in practice that
 are not required to achieve equivalent function.

 A lot of computer science tends to be like this in practice (e.g. the
 ever ubiquitous balanced tree).

 Cheers,

 J. Andrew Rogers




Re: [agi] a2i2 news update

2007-07-26 Thread Robert Wensman
 ownership on all the land that we need to
live on, on all the food we need to eat, and on all the air we need to
breathe. Then it could just kill us in self-defence because we trespass on
its property. I know even Ayn Rand sees no moral problem in using defensive
violence to defend material property that is being stolen.



Well, let me just say that I would be concerned if someone creates a selfish
super-intelligent AGI system that does not value the well-being of me and
the rest of us humans, except when it can see benefits for its own
survival. Out of fear for my own life, and the lives of my descendants, I
would not support your AGI initiative! Even a sentimental and altruistic
person like me has that much sense of self-defence! :-)



That said, I think Adaptive AI's definition of general intelligence seems
pretty reasonable, and their plans for development seem well thought out. I
also found some of their thoughts on evolution and AGI noteworthy. But my
feelings are mixed about their strength in numbers and the hopes for progress
it gives. To me, altruistic AGI just seems a lot safer than selfish AGI!



/Robert Wensman


Re: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread Robert Wensman

Yes, what language to use when expressing memes is definitely one of the key
points in the construction of such a system. I think such a language needs to
fulfil the following criteria:

  - Enough expressive power
  - Algorithms for consistency checks, entailment etc.
  - Robustness to random modifications

Just considering the need for expressive power, it would be tempting to
consider first- or second-order predicate logic. Sure, contemporary research
in logical reasoning systems tells us that even for quite limited logics,
there are pretty severe algorithmic limitations. I hope, however, that this
just reflects that current science aims too much at mathematical
completeness and soundness, which might not really be needed in a true AGI
system. Some kind of statistical or incremental deduction system could
perhaps overcome this problem in the future, and we could settle for
concepts like probably consistent, probably sound and probably complete.

(Note: I continue to use the term meme for the time being, but depending
on the other discussion I might change to information unit.)

Actually, I think a meme consistency check algorithm could be based on meme
evolution itself. If we are to check that a certain meme is consistent, the
system sets up an evolution that tries to create counterexamples that
would easily prove the meme to be inconsistent. While no counterexample is
found, the system's belief in the meme is strengthened. Has any work been
done previously on statistical, example-driven deduction?
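
In code, the idea would look roughly like the sketch below (Python, with a
plain predicate standing in for a logical formula; all details invented).
Incidentally, this is close in spirit to property-based testing tools such as
QuickCheck, which also try to refute a stated property by generating random
candidate counterexamples.

    import random

    def believed_after_search(meme, example_generator, attempts):
        """Belief grows while random counterexample search keeps failing."""
        belief = 0.0
        for _ in range(attempts):
            example = example_generator()
            if not meme(example):
                return 0.0                     # counterexample found: reject the meme
            belief += (1.0 - belief) * 0.01    # no refutation: belief creeps upward
        return belief

    # e.g. the meme "the square of an integer is never negative"
    meme = lambda x: x * x >= 0
    generate = lambda: random.randint(-10**6, 10**6)
    print(believed_after_search(meme, generate, attempts=10000))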

(Note that many of my ideas about how an AGI should generate theories are
analogous to how the scientific community generates theories according to
contemporary philosophy of science. This is because I believe science to be a
macro-projection of the intellectual process in one human mind.)

Since memes are created at random, the language they are expressed in also
needs an inherent robustness. This could be the reason why it would, for
example, not be suitable to express memes as C++ classes or programs. Perhaps
99% of all randomly created C++ programs would crash the system, and it is
therefore not very practical. Logical formulas are appealing because of
this, since the worst thing that can happen is that a set of logical
formulas becomes inconsistent (which might be troublesome enough).

As for the predicates used in such a database, most of them would be created
randomly, and their identifiers would just be numbers. Some predicates
related to facts, such as sensor information and actuator settings, would
however be hard-coded into the system. For memeplexes to be of any use, they
need to be grounded in these fact predicates. However, I believe some memes
could exist without any direct connection to the hard-coded predicates. They
could be favoured parts that the meta-evolution uses when creating
fact-grounded memeplexes.

I also think that a logic suitable for this purpose could have special
features that are just meant to support random mutations of memes. Some
years ago I was interested in a class of second-order predicate logic with
formulas that could be factored. For example (infix notation used):

factor(a, (b) -> (c)) = (a b) -> (a c)

I am not claiming that particular feature is of any interest, nor that it is
not. My point is that if we want to have a language for memes, we should
also consider what refactoring functions could be defined on such a
language. Probably robustness is one of the most important aspects in this
as well.

PS: I sent my first mail to the mailing list, but did not receive it myself
(not even at my second mail address that I use). Was the mail sent to the
entire list?

Regards.

/Robert Wensman




2007/6/14, Derek Zahn [EMAIL PROTECTED]:



Robert Wensman writes:

 Databases:
 1. Facts: Contains sensory data records, and actuator records.
 2. Theory: Contains memeplexes that tries to model the world.

I don't usually think of 'memes' as having a primary purpose of modeling
the world... it seems to me like the key to your whole approach is how you
represent them (the schema of database 2).  Could you elaborate a bit on
that?




Re: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread Robert Wensman


 For an intelligence to know what is possible actions it must model those
and think about those internally, and model that beahavior, and I would
argue that all intelligence is about the behavior that the internal
cognition brings.
You cant really have an intelligence I dont believe without behaviour, can
you have behaviour without some form of intelligence?

The simple act of talking and responding to a question is action and
behavior, and without it, you cant realy determine if something is
intelligent or not.

Memes - It looks like the meme people want to somehow shoehorn knowledge
into being a meme or memeplex

A *meme* (IPA: /miːm/, see http://en.wikipedia.org/wiki/IPA_chart_for_English)
is a unit of cultural (http://en.wikipedia.org/wiki/Culture) information
(http://en.wikipedia.org/wiki/Information) that propagates from one mind to
another as a theoretical unit of cultural evolution
(http://en.wikipedia.org/wiki/Sociocultural_evolution) and diffusion
(http://en.wikipedia.org/wiki/Cultural_diffusion), analogous
(http://en.wikipedia.org/wiki/Analogy) to the way a gene
(http://en.wikipedia.org/wiki/Gene) propagates from one organism to another
as a unit of genetic (http://en.wikipedia.org/wiki/Genetics) information and
evolution.

First it says memes are a cultural unit but then go on to encompass just
about all types of information / knowledge.
A generic information unit sounds better, and doesnt have to have the
restrictions and/or extra effects that a meme seems to have.

Otherwise welcome to the AGI group, and maybe we can expound on some of
the other thoughts you have.

James

Thanks. Well, I agree that information unit would work well also.

However, to some degree I feel that the extra effects of meme are
beneficial, even though you cannot interpret those extra effects
literally. Using meme could emphasize that memes are created at random in
an evolutionary process, or at least that there is some distributed process
that creates them, which is exactly what I want. Information unit, on the
other hand, is more general, and could include hand-crafted code made by some
programmer, or data inserted into an expert system.

Also, I like meme because it is one word, as opposed to two in information
unit. I believe it is better to use short words for the most common things
we want to express. Whether it sounds good or not I don't know. Being a Swede
it sounds pretty OK to me, but maybe it sounds different to people with
other backgrounds.

I believe something could act intelligently without actually being
intelligent. Say for example that we program a robot to perform a lot of
hard-coded random actions, even without using any sensory data. Even if it is
incredibly (actually there aren't words strong enough) unlikely, there is a
chance that such a robot might go to work and act as a seemingly intelligent
employee of some company during an entire day. However, we would know it is
not intelligent.

Also, I would say that a lot of other system classes are not defined based
on their actual actions. For example, we could determine whether an object
is an airplane even without actually seeing it fly. What we do is study
its structure and judge whether, according to our understanding, it has the
capability to fly in the way airplanes do.

I agree, however, that this is troublesome for general intelligence, because
Turing completeness causes a lot of different definitions to be
computationally equivalent; thus there is no standardized language in which
to describe what general intelligence is. For airplanes it is simple,
because 3D pictures and 2D drawings are quite straightforward. Maybe this
will be easier once we have an example of a working AGI.

There is a point, though, in that passive AGI systems that just think, think,
think, but don't do anything useful, would be of little use :-).

Regards

/Robert Wensman




