[agi] Dilbert on Singularity

2008-11-12 Thread Dennis Gorelik
http://dilbert.com/strips/comic/2008-11-12/
What's the worst thing that could happen?

http://dilbert.com/strips/comic/2008-11-11/





Re[2]: [agi] CyberLover passing Turing Test

2007-12-12 Thread Dennis Gorelik
Bryan,

 To my taste, testing with clueless judges is a more appropriate
 approach. It makes the test less biased.

 How can they judge when they don't know what they are judging? Surely,
 when they hang out for some cyberlovin', they are not scanning for 
 intelligence. Our mostly in-bred stupidity is evidence.

For example, we can ask the judges: "Do you notice anything unusual about
this chatter?" or even "What do you think about this chatter?"

The answers will make it more or less clear whether the judges think of
this chatter as a human or as an AI.


In any case, passing the Turing Test is neither a necessary nor a sufficient
proof of AGI.

The Turing Test may be used to answer more practical questions, such as:
"Can AI be a part of our society yet?"



[agi] CyberLover passing Turing Test

2007-12-11 Thread Dennis Gorelik
http://blog.pmarca.com/2007/12/checking-in-on.html
===
If CyberLover works as described, it will qualify as one of the first
computer programs ever written that is actually passing the Turing Test. 
===




Re[4]: [agi] Do we need massive computational capabilities?

2007-12-11 Thread Dennis Gorelik
Matt,

 You can feed it with text. Then AGI would simply parse text [and
 optionally - Google it].
 
 No need for massive computational capabilities.

 Not when you can just use Google's 10^6 CPU cluster and its database with 10^9
 human contributors.

That's one of my points: our current civilization gives an AGI researcher
the ability to build an AGI prototype on a single PC, using the
civilization's existing achievements.

A human being cannot be intelligent without a surrounding society anyway.
We would all lose our minds in less than 10 years if we were totally
separated from other intelligent systems.

Intelligence simply cannot function fully independently.


Bottom line: when building AGI, we should focus on building a member of
our current civilization, not a fully independent intelligent system.



Re[2]: [agi] CyberLover passing Turing Test

2007-12-11 Thread Dennis Gorelik
Bryan,

 If CyberLover works as described, it will qualify as one of the first
 computer programs ever written that is actually passing the Turing
 Test.

 I thought the Turing Test involved fooling/convincing judges, not 
 clueless men hoping to get some action?

To my taste, testing with clueless judges is a more appropriate
approach. It makes the test less biased.




Re[4]: [agi] Do we need massive computational capabilities?

2007-12-08 Thread Dennis Gorelik
Mike,

What you describe is a set of AGI nodes.
An AGI prototype is just one such node.
An AGI researcher doesn't have to develop the whole set at once. It's quite
sufficient to develop only one AGI node. Such a node will be able to
work on a single PC.


 I believe Matt's proposal is not as much about the exposure to
 memory or sheer computational horsepower - it's about access to
 learning experience. 

 Matt's proposed network enables IO to the us [existing examples of
 intelligence/teachers].  Maybe these nodes can ask questions, What
 does my owner know of A? - the answer becomes part of its local
 KB.  Hundreds of distributed agents are now able to query Matt's
 node about A (clearly Matt does not have time to answer 500 queries
 on topic A)





[agi] Interpreting Brain damage experiments

2007-12-07 Thread Dennis Gorelik
Richard,

 Did you know, for example, that certain kinds of brain damage can leave
 a person with the ability to name a visually presented object, but then
 be unable to pick the object up and move it through space in a way that
 is consistent with the object's normal use ... and that another type
 of brain damage can result in a person having exactly the opposite
 problem:  they can look at an object and say I have no idea what that
 is, and yet when you ask them to pick the thing up and do what they
 would typically do with the object, they pick it up and show every sign
 that they know exactly what it is for (e.g. object is a key:  they say
 they don't know what it is, but then they pick it up and put it straight
 into a nearby lock).

 Now, interpreting that result is not easy, but it does seem to tell us
 that there are two almost independent systems in the brain that handle
 vision-for-identification and vision-for-action.

That's not an exact explanation.
In both cases the vision module works well:
vision-for-identification works fine in both cases.

In the first case the identified object cannot produce proper actions,
because the connection with the action module was damaged.

In the second case the identified object cannot be resolved into a language
concept, because the connection with the language module was damaged.

Agree?



[agi] High-level brain design patterns

2007-12-07 Thread Dennis Gorelik
Derek,

 Low level design is not critical for AGI. Instead we observe high level brain
 patterns and try to implement them on top of our own, more understandable,
 low level design.
   
  I am curious what you mean by high level brain patterns
 though.  Could you give an example?

1) All dependencies we may observe between inputs or outputs.
For example, conditional and unconditional reflexes.

2) Activation of neuron A that happens _consistently_ together with the
activation of neuron B (see the sketch below).

3) Richard Loosemore already gave his own example:
http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html
"For example, much to our surprise we might see waves in the U values.
And every time two waves hit each other, a vortex is created for
exactly 20 minutes, then it stops."
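
To make point 2 concrete, here is a minimal sketch (my own illustration, not
code from the thread; the function and the toy data are hypothetical) of how
consistent co-activation could be measured from recorded activity:

def coactivation_score(activity_a, activity_b):
    """Fraction of time steps where B is active, given that A is active.
    activity_a, activity_b: equal-length sequences of 0/1 activations."""
    a_active = [t for t, a in enumerate(activity_a) if a]
    if not a_active:
        return 0.0
    both = sum(1 for t in a_active if activity_b[t])
    return both / len(a_active)

# B fires almost every time A does -- a candidate high-level pattern.
a = [1, 0, 1, 1, 0, 1, 0, 1]
b = [1, 0, 1, 1, 0, 0, 0, 1]
print(coactivation_score(a, b))  # 0.8

A score close to 1.0 over enough data is the kind of "consistent" activation
meant here.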
  


Re[2]: [agi] Interpreting Brain damage experiments

2007-12-07 Thread Dennis Gorelik
Richard,

Let's save both of us some time and wait until somebody else reads this
cognitive science book and comes here to discuss it.
:-)

Though interesting, interpreting brain-damage experiments is not the
most important thing for AGI development.

 In both cases vision module works good.
 Vision-to-identification works fine in both cases.
 
 In this case identified object cannot produce proper actions, because
 connection with action module was damaged.
 
 In another case identified object cannot be resolved into language
 concept, because connection with language module was damaged.
 
 Agree?

 I don't think this works, unfortunately, because that was the first 
 simple explanation that people came up with, and it did not match up
 with the data at all.  I confess I do not have time to look this up 
 right now.  You wouldn't be able to read one of the latest cognitive
 neuropsychology books (not cognitive neuroscience, note) and let me know
 would you? ;-)





Re[2]: [agi] Solution to Grounding problem

2007-12-07 Thread Dennis Gorelik
Richard,


 This could be called a communication problem, but it is internal, and in
 the AGI case it is not so simple as just miscalculated numbers.

Communication between subsystems is still communication,
so I suggest calling it the "communication problem".


 So here is a revised version of the problem:  suppose that a system
 keeps some numbers stored internally, but those numbers are *used* by
 the system in such a way that their meaning is implicit in the entire
 design of the system.  When the system uses those numbers to do things,
 the numbers are fed into the using mechanisms in such a way that you
 can only really tell what the numbers mean by looking at the overall
 way in which they are used.

That's the right approach: concepts gain meaning by connecting to other
concepts.
The only exception is concepts that are directly connected to
hardcoded sub-systems (dictionary, chat client, web browser, etc.).
Such directly connected concepts would have some predefined meaning.
This predefined meaning would be injected by the AGI programmers.


 Now, with that idea in mind, now imagine that programmers came along and
 set up the *values* for a whole bunch of those numbers, inside the 
 machine, ON THE ASSUMPTION that those numbers meant something that the
 programmers had decided they meant.  So the programmers were really 
 definite and explicit about the meaning of the numbers.

 Question:  what if those two sets of meanings are in conflict?

How could they be in conflict, if one set is predefined and the other
set gained its meaning from the predefined set?

If you are talking about inconsistencies within the predefined set --
that's a problem for the design & development team.
Do you want to address that problem?
So far I can suggest one tip: keep the set of predefined concepts as
small as possible.
Most of a mature AGI's intelligence should come from concepts (and their
relations) acquired during the system's lifetime.


 If the AI system starts out with a design in which symbols are
 designed and stocked by 
 programmers, this part of the machine has ONE implicit meaning for its
 symbols . but then if a bunch of peripheral machinery is stapled on
 the back end of the system, enabling it see the world and use robot 
 arms, the processing and symbol building that goes on in that
 part of the system will have ANOTHER implicit meaning for the symbols.
 There is no reason why these two sets of symbols should have the same
 meaning!

Here's my understanding of your problem:
we have an AGI, and now we want to extend it by adding a new module.
We are afraid that the new module will have problems communicating with the
other modules, because the meaning of some symbols is different.

If I understood you correctly, here are two solutions:

Solution #1: Connect modules through a Neural Net.

By "Neural Net" I mean a set of concepts (nodes) connected to other
concepts by relations.
Concepts can be created and deleted dynamically.
Relations can be created and deleted dynamically.
When we connect a new module to the system, it will introduce its own
concepts into the Neural Net.
Initially these concepts are not connected with existing concepts,
but then some process will connect these new concepts with existing
concepts.
One example of such a process: if concepts are active at the
same time -- connect them.
There could be other possible connecting processes.
In any case, eventually the system will connect all the new concepts, and
those connections will define how input from the new module is interpreted
by the rest of the system.
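
Here is a minimal sketch of Solution #1 (my own illustration; the class and
concept names are made up, not taken from any actual system): concepts
introduced by a new module start out unconnected, and a simple "active at the
same time -- connect them" process gradually links them to existing concepts.

class ConceptNet:
    def __init__(self):
        self.concepts = set()
        self.links = {}   # (concept_a, concept_b) -> weight

    def add_module_concepts(self, names):
        """A new module introduces its own concepts; initially they are unlinked."""
        self.concepts.update(names)

    def observe(self, active_concepts, delta=0.1):
        """One possible connecting process: concepts active together get linked."""
        self.concepts.update(active_concepts)
        active = sorted(active_concepts)
        for i, a in enumerate(active):
            for b in active[i + 1:]:
                key = (a, b)
                self.links[key] = self.links.get(key, 0.0) + delta

net = ConceptNet()
net.add_module_concepts(["camera:edge", "camera:red-blob"])  # new vision module
net.observe({"camera:red-blob", "word:apple"})                # co-active while chatting about apples
net.observe({"camera:red-blob", "word:apple"})
print(net.links)   # {('camera:red-blob', 'word:apple'): 0.2}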

Solution #2: Connect the new module to other hardcoded modules
directly.
In this case it's the responsibility of the AGI development team to make sure
that both hardcoded modules talk the same language.
That's a typical module-integration task for developers.



 In fact, it turns out (when you think about it a little 
 longer) that all of the problem has to do with the programmers going in
 and building any symbols using THEIR idea of what the symbols should
 mean:  the system has to be allowed to build its own symbols from the
 ground up, without us necessarily being able to interpret those symbols
 completely at all.  We might never be able to go in and look at a
 system-built symbol and say That means [x], because the real meaning
 of that symbol will be implicit in the way the system uses it.

 In summary:  the symbol grounding problem is that systems need to have
 only one interpretation of their symbols,

Not sure what you mean by "one interpretation".
A symbol can have multiple interpretations in different contexts.
Our goal is to make sure that different systems and different modules
have roughly the same understanding of the symbols at the time of communication.
(By "symbols" here I mean the data that is passed through interfaces.)

 and it needs to be the one built by the system itself as a result of
 a connection to the external world.

So it seems you already have a solution (I propose the same solution)
to the Real Grounding Problem.

Can 

Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Dennis Gorelik
Matt,

 No, my proposal requires lots of regular PCs with regular network connections.

A properly connected set of regular PCs would usually have far more
power than a single regular PC.
That makes your hardware request special.
My point is that AGI can successfully run on a single regular PC.
Special hardware would be required later, when you try to scale
out a working AGI prototype.

  It is a purely software approach.  But more hardware is always better.

Not always.
More hardware costs money and requires more maintenance.

 http://www.mattmahoney.net/agi.html





Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Dennis Gorelik
Matt,

  Matt: AGI research needs
  special hardware with massive computational capabilities.
 
 
 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - in
 fact, I would argue, essential - to ground these discussions. 

 For example, I ask the computer who is this? and attach a video clip from my
 security camera.


Why do you need image recognition in your AGI prototype?
You can feed it with text. Then AGI would simply parse text [and
optionally - Google it].

No need for massive computational capabilities.



Re[4]: [agi] Solution to Grounding problem

2007-12-07 Thread Dennis Gorelik
Mike,

 1. Bush walks like a cowboy, doesn't he?
 The only way a human - or a machine - can make sense of sentence 1 is by
 referring to a mental image/movie of Bush walking.

That's not the only way to make sense of the sentence.
There are many other ways: chat with other people, or look it up on Google:
http://www.google.com/search?q=Bush+walks+cowboy


http://images.google.com/images?q=grundchen

 Merely referring to more words won't cut it.

It would. Meaning is connection between concepts.
If the proper words are referred to, then the meaning is there.


 Oh,  just to make your day, if you don't have a body, you can't understand
 the images either

How is that "don't have a body" remark relevant?
Computers have a body and senses (such as a keyboard and an Internet connection).

 Is all that clear?

No.
You didn't describe what grounding problem is about.





Re[2]: [agi] How to represent things problem

2007-12-07 Thread Dennis Gorelik
Richard,

 the instance nodes are such an
 important mechanism that everything depends on the details of how they
 are handled.

Correct.


 So, to consider one or two of the details that you mention.  You would
 like there to be only a one-way connection between the generic node (do
 you call this the pattern node?)

1) All nodes are equal.

2) Nodes can point to each other.
Yes, a connection should be one-way.
(E.g.: you know George Bush, but he doesn't know you :-))

A two-way connection can easily be implemented as two separate
connections.
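
A minimal sketch of that arrangement (my own illustration, not the author's
code): every node can hold one-way weighted links, and an instance node is
just another node pointing at its generic node.

class Node:
    def __init__(self, name):
        self.name = name
        self.out_links = {}   # target Node -> weight (one-way)

    def link_to(self, other, weight=1.0):
        self.out_links[other] = weight

grass_blade = Node("grass blade")            # generic node
blade_1 = Node("grass blade #1")             # instance node
blade_1.link_to(grass_blade)                 # instance points at its generic concept
grass_blade.link_to(blade_1, weight=0.1)     # a "two-way" link is just a second one-way link
print([n.name for n in blade_1.out_links])   # ['grass blade']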


 For instance, we are able to see a field of patterns, of
 different colors, and then when someone says the phrase the green 
 patterns we find that the set of green patterns jumps out at us from
 the scene.  It is as if we did indeed have links from the generic 
 concept [green pattern] to all the instances.

Yes, that's good way to store links:
All relevant nodes are connected.


 Another question: what do we do in a situation where we see a field of
 grass, and think about the concept [grass blade]?

The "field of grass" concept and the "grass blade" concept are obviously
directly connected.
This link was formed because we saw a field of grass and a grass
blade together many times.


 Are there individual instances for each grass blade?

If you remember individual instances -- then yes.

 Are all of these linked to the generic concept of [grass blade]?

Some grass blades may be directly connected to "field of grass".
Others may be connected only through other grass-blade instances.
It depends on whether it's useful for the brain to keep these direct
associations.


 What is different is that I see many, many possible ways to get these
 new-node creation mechanisms to work (and ditto for other mechanisms
 like the instance nodes, etc.) and I feel it is extremely problematic to
 focus on just one mechanism and say that THIS is the one I will 
 implement because  I think it feels like a good idea.


 The reason I think this is a problem is that these mechanisms have 
 system-wide consequences (i.e. they give rise to global behaviors) that
 are not necessarily obvious from the definition of the mechanism, so we
 need to build a simulation to find out what those mechanisms *really* do
 when they are put together and allowed to interact.

I agree -- testing is important.
In fact, it's extremely important.

Not only do we need to test several models [of creating & updating nodes
and links], but within a single model we should try several settings
values (such as: if node1 and node2 were activated together, how
much should we increase the strength of the link between them?).


That's why it's important to carefully design tests.
Such tests should run reasonably fast and be able to indicate how
well the system worked.

What is "good" and what is not good has to be carefully defined.
Not a trivial task, but quite a doable one.
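
As a toy illustration of such a test (entirely my own sketch; the scoring
rule, threshold, and data are arbitrary assumptions), one could sweep a
settings value -- here the link-strength increment -- and score each run
against the associations the system was expected to learn:

THRESHOLD = 0.15   # a link counts as "learned" above this weight (arbitrary)

def run_model(delta, events, expected_pairs):
    links = {}
    for active in events:
        active = sorted(active)
        for i, a in enumerate(active):
            for b in active[i + 1:]:
                links[(a, b)] = links.get((a, b), 0.0) + delta
    learned = {pair for pair, w in links.items() if w >= THRESHOLD}
    # reward learning the expected pairs, penalize learning spurious ones
    return len(learned & expected_pairs) - len(learned - expected_pairs)

events = [{"dog", "bark"}, {"dog", "bark"}, {"dog", "cat"}]
expected = {("bark", "dog")}
for delta in (0.05, 0.1, 0.5):
    print(delta, run_model(delta, events, expected))   # too small: 0, good: 1, too large: 0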


 I can show you a paper of mine in which I describe my framework in a
 little more detail.

Isn't this paper public yet?





Re[2]: [agi] How to represent things problem

2007-12-06 Thread Dennis Gorelik
Richard,

 It's Neural Network -- set of nodes (concepts), when every node can be
 connected with the set of other nodes. Every connection has it's own
 weight.
 
 Some nodes are connected with external devices.
 For example, one node can be connected with one word in text
 dictionary (that is an external device).

 you need special extra mechanisms to handle the difference between
 generic nodes and instance nodes (in a basic neural net there is no
 distinction between these two, so the system cannot represent even the most 
 basic of situations),

1) Are you talking about problems of a basic neural net, or problems of
the Neural Net that I described?

2) The human brain is more complex than a basic neural net and probably
works similarly to what I described.

3) Extra mechanisms would add additional features to instance nodes.
(I prefer to call such nodes "peripheral" or "surface" nodes.)
Surface nodes have the same abilities as regular nodes, but they are
also heavily affected by a special device.

4) Are you saying that developing such a special device is a problem?


 and you need extra mechanisms to handle the dynamic creation/assignment of
 new nodes, because new things are being experienced all the time.

That's correct. A mechanism that creates new nodes is required.
Is that a problem?


 These extra mechanisms are so important that is arguable that the
 behavior of the system is dominated by *them*, not by the mere fact that
 the design started out as a neural net.

It doesn't matter which part of the system dominates. If we are able to solve
the "how to represent things" problem with such an architecture -- it's good
enough, right?


 Having said that, I believe in neural nets as a good conceptual starting
 point.


Are you saying that what I described is not exactly a Neural Net?
What would you call it then?
A blended Neural Net?


 It is just that you need to figure out all that machinery - and no one
 has, so there is a representation problem in my previous list of problems.

We can talk about the machinery in full detail.
I agree that the system would be complex, but its complexity would be
manageable.





[agi] None of you seem to be able ...

2007-12-04 Thread Dennis Gorelik
Mike,

 Matt::  The whole point of using massive parallel computation is to do the
 hard part of the problem.

 The whole idea of massive parallel computation here, surely has to be wrong.
 And yet none of you seem able to face this to my mind obvious truth.

Whom do you mean by "you" in this context?
Do you think that everyone here agrees with Matt on everything?
:-)

Quite the opposite is true -- almost every AI researcher has his own
unique set of beliefs. Some beliefs are shared with one set of
researchers, others with another set. Some beliefs may even be
unique.

For example, I disagree with Matt's claim that AGI research needs
special hardware with massive computational capabilities.

However, I agree with Matt on quite a large set of other issues.




[agi] Solution to Grounding problem

2007-12-04 Thread Dennis Gorelik
Richard,

 1) Grounding Problem (the *real* one, not the cheap substitute that
 everyone usually thinks of as the symbol grounding problem).

Could you describe what the *real* grounding problem is?

It would be nice to consider an example.

Say we are trying to build an AGI for the purpose of running an intelligent
chat-bot.

What would the grounding problem be in this case?




[agi] How to represent things problem

2007-12-04 Thread Dennis Gorelik
Richard,

 3) A way to represent things - and in particular, uncertainty - without
 getting buried up to the eyeballs in (e.g.) temporal logics that nobody
 believes in.

Conceptually, the way of representing things is described very well.
It's a Neural Network -- a set of nodes (concepts), where every node can be
connected to a set of other nodes. Every connection has its own
weight.

Some nodes are connected to external devices.
For example, one node can be connected to one word in a text
dictionary (the dictionary being an external device).
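
A minimal sketch of that description (my own illustration; all names are made
up): weighted connections between concept nodes, with some nodes bound to an
external device such as a dictionary entry.

class Concept:
    def __init__(self, name, external_device=None):
        self.name = name
        self.external_device = external_device   # e.g. a dictionary entry
        self.connections = {}                     # Concept -> weight

    def connect(self, other, weight):
        self.connections[other] = weight

dictionary = {"apple": "a round fruit"}           # the "external device"
apple_word = Concept("word:apple", external_device=("dictionary", "apple"))
fruit = Concept("fruit")
apple_word.connect(fruit, 0.8)                    # weighted link between concepts
print(apple_word.external_device, apple_word.connections[fruit])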


Do you see any problems with such architecture?




Re[4]: [agi] Self-building AGI

2007-12-01 Thread Dennis Gorelik
John,

 If you look at nanotechnology one of the goals is to build machines that
 build machines. Couldn't software based AGI be similar?

Eventually AGIs will be able to build other AGIs, but first AGI models
won't be able to build any software.




Re[2]: [agi] Lets count neurons

2007-11-30 Thread Dennis Gorelik
Matt,

 Using pointers saves memory but sacrifices speed.  Random memory access is
 slow due to cache misses.  By using a matrix, you can perform vector
 operations very fast in parallel using SSE2 instructions on modern processors,
 or a GPU.

I doubt it.
http://en.wikipedia.org/wiki/SSE2 doesn't even mention "parallel" or
"matrix".

Whatever performance advantages SSE2 provides, they will benefit both
architectures.

 By your own calculations, an array only takes twice as much space
 as a graph.

My own calculations allocated 4 bits for weights.
You are comparing that size with a matrix that stores only the presence or
absence of a connection (1 bit).

The actual difference in size would be about 10 times, since your matrix is only
10% filled.
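
A rough back-of-the-envelope sketch of the sizes being argued about (my own
arithmetic, assuming one 10^5-neuron column, ~10% connectivity, 17-bit neuron
indices, and 4-bit weights as in my earlier message):

NEURONS = 100_000
DENSITY = 0.10                                   # "about 10% connected"
CONNECTIONS = int(NEURONS * NEURONS * DENSITY)

dense_1bit = NEURONS * NEURONS * 1 / 8           # matrix, 1 bit per cell (connected or not)
dense_4bit = NEURONS * NEURONS * 4 / 8           # matrix, 4-bit weight per cell
sparse_list = CONNECTIONS * (17 + 4) / 8         # list: 17-bit index + 4-bit weight per link

for label, size in [("dense 1-bit", dense_1bit),
                    ("dense 4-bit", dense_4bit),
                    ("sparse 17+4 bit", sparse_list)]:
    print(f"{label:16s}{size / 1e9:.2f} GB")

Under these particular assumptions the sparse list comes out at about half the
size of a 4-bit dense matrix, but about twice the size of a 1-bit one; the
comparison depends mainly on how many bits each side is charged per entry.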

 Do you imply that intelligent algorithm must be universal across
 language, speech, vision, robotics, etc?
 In humans it's just not the case.
 Different algorithms are responsible for vision, speech, language,
 body control etc.

 Neural networks are useful for all of these problems.

HTML is useful for all sorts of web sites.
Does that mean that all web sites are the same?

Yes, some sort of neural network should probably be used for the language,
voice, vision, and robotics modules.
But that doesn't mean that the implementation would be the same for
all of them. The differences would probably be quite big.





Re[14]: [agi] Funding AGI research

2007-11-30 Thread Dennis Gorelik
Benjamin,

 Obviously, most researchers who have developed useful narrow-AI
 components have not gotten rich from it.

My example is the Google founders, who developed a narrow-AI
component (Google).

What is your example of developers of a useful narrow-AI component who
have not gotten rich from it?


 The nature of our economy and society is such that most scientific
 and technical innovators are not dramatically
 financially rewarded.

Useful innovations are usually rewarded.





Re[14]: [agi] Funding AGI research

2007-11-30 Thread Dennis Gorelik
Benjamin,

 E.g.: Google, computer languages, network protocols, databases.

 These are tools that are useful for AGI RD but so are computer
 monitors, silicon chips, and desk chairs.

1) Yes, creating the monitor contributed a lot to AGI too.

2) The technologies that I mentioned above are useful on another level as well:
they can be used as prototypes (or even modules) for AGI parts.

 If it were as simple as you're saying,

I'm not saying that developing narrow AIs as a way to come closer to
AGI is simple. It's still hard.
It's just not as hard/complex as developing AGI in one giant leap.





[agi] Self-building AGI

2007-11-30 Thread Dennis Gorelik
John,

 Note, that compiler doesn't build application.
 Programmer does (using compiler as a tool).

 Very true. So then, is the programmer + compiler more complex that the AGI
 ever will be?

No.
I don't even see how it relates to what I wrote above ...

 Or at some point does the AGI build and improve itself.

Yes.
What's your point?



Re[2]: [agi] Self-building AGI

2007-11-30 Thread Dennis Gorelik
John,

 Example - When we create software applications we use compilers. When the
 applications get more complex we have to improve the compilers (otherwise
 AutoCad 2007 could be built with QBasic). For AGI do we need to improve the
 compliers to the point where they actually write the source code?

There are programs that already write source code.
The trick is to write working and useful apps.
The most important part of writing useful apps is not writing the
code; it's gathering/defining requirements and designing the
system.

More intelligent development environments (as well as many other
tools) can help to build AGI, but development environments cannot
build AGI by themselves.




[agi] Critical modules for AGI

2007-11-30 Thread Dennis Gorelik
Bob,

Yes, losing useful modules degrades intelligence, but a system can still
be intelligent without most of such modules.
A good example: blind and deaf people.

Besides, such modules can be replaced by external tools.

I'd say that critical modules for AGI are:
- Super Goals (permanent).
- Sub Goals (flexible, closely integrated with memory).
- IO module (input/output).
- General learning module (that modifies Sub Goals based on Super
Goals and information from the IO module).
- Decision-making module that gets input from the IO module and makes
decisions according to the Sub Goals.


There may be one or more IO modules.
I'd say that text IO is the most useful one.

Visual/Sound/Touch stuff is not critical.
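
A minimal sketch of how these modules could be wired together (my own reading
of the list above, not a real design; all class names are made up):

class SuperGoals:
    """Permanent goals; here just a fixed list."""
    goals = ["keep the owner satisfied"]

class SubGoals:
    """Flexible goals, closely integrated with memory."""
    def __init__(self):
        self.goals = []

class TextIO:
    """Text input/output -- arguably the most useful IO module."""
    def read(self):
        return input("> ")
    def write(self, text):
        print(text)

class Learner:
    """Modifies Sub Goals based on Super Goals and information from IO."""
    def update(self, super_goals, sub_goals, observation):
        if observation and observation not in sub_goals.goals:
            sub_goals.goals.append(f"respond to: {observation}")

class DecisionMaker:
    """Makes decisions according to the current Sub Goals."""
    def decide(self, sub_goals):
        return sub_goals.goals[-1] if sub_goals.goals else "wait"

# One step of the loop, just to show the wiring (reads from stdin if uncommented):
# io = TextIO(); sub = SubGoals()
# Learner().update(SuperGoals(), sub, io.read())
# io.write(DecisionMaker().decide(sub))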


Friday, November 30, 2007, 2:13:14 AM, you wrote:

 On 30/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:
 For example, mouse has strong image and sound recognition ability.
 AGI doesn't require that.
 Mouse has to manage its muscles in a very high pace.
 AGI doesn't need that.


 I'm not convinced that it is yet possible to make categorical
 assertions of this kind.  It could well turn out that spatial
 representations derived from visual processing are essential to some
 kinds of thought and analogy, without which an AGI would be
 cognitively impaired.  I know it's claimed that many mathematicians
 are supposed to possess enhanced spatial reasoning ability.

 Brains fundamentally evolved to move creatures around - an internal
 guidance mechanism - and there's a close relationship between movement
 and perception.  Whether this will need to also be the case for an AGI
 I don't know.





Re[2]: [agi] Self-building AGI

2007-11-30 Thread Dennis Gorelik
Ed,

1) A human-level AGI with access to the current knowledge base cannot build
AGI. (Humans can't.)

2) Once AGI is developed, humans will be able to build AGIs (by copying
successful AGI models). The same goes for a human-level AGI -- it will be
able to copy a successful AGI model.

But that's not exactly the self-building AGI you are looking for :-)

3) Humans have different levels of intelligence and skills. Not all are
able to develop programs. The same is true of AGIs.


Friday, November 30, 2007, 10:20:08 AM, you wrote:

 Computers are currently designed by human-level intelligences, so presumably
 they could be designed by human-level AGI's. (Which if they were human-level
 in the tasks that are currently hard for computers means they could be
 millions of times faster than humans for tasks at which computers already
 way out perform us.) I mention that appropriate reading and training would
 be required, and I assumed this included access to computer science and
 computer technology sources, which the peasants of the middle age would not
 have access.  

 So I don't understand your problem.

 -Original Message-
 From: Dennis Gorelik [mailto:[EMAIL PROTECTED] 
 Sent: Friday, November 30, 2007 1:01 AM
 To: agi@v2.listbox.com
 Subject: [agi] Self-building AGI

 Ed,

 At the current stages this may be true, but it should be remembered that
 building a human-level AGI would be creating a machine that would itself,
 with the appropriate reading and training, be able to design and program
 AGIs.

 No.
 AGI is not necessarily that capable. In fact first versions of AGI
 would not be that capable for sure.

 Consider middle age peasant, for example. Such peasant has general
 intelligence (GI part in AGI), right?
 What kind of training would you provide to such peasant in order to
 make him design AGI?







Re[12]: [agi] Funding AGI research

2007-11-29 Thread Dennis Gorelik
Benjamin,

 That proves my point [that AGI project can be successfully split
 into smaller narrow AI subprojects], right?

 Yes, but it's a largely irrelevant point.  Because building a narrow-AI
 system in an AGI-compatible way is HARDER than building that same
 narrow-AI component in a non-AGI-compatible way.

Even if that were the case (it is not), it would simply mean several
development steps:
1) Develop a narrow AI with a non-reusable AI component and get rewarded
for that (because it would be a useful system by itself).
2) Refactor the non-reusable AI component into a reusable AI component and
get rewarded for that (because it would be a reusable component for sale).
3) Apply the reusable AI component in AGI and get rewarded for that.

If you were to analyze the effectiveness of reward systems, you would
notice that systems (humans, animals, or machines) that are rewarded
immediately for a positive contribution perform considerably better than
systems whose reward is distributed long after the successful accomplishment.


 So, given the pressures of commerce and academia, people who are
 motivated to make narrow-AI for its own sake, will almost never create
 narrow-AI components that are useful for AGI.

Sorry, but that does not match how things really work.
So far, only researchers/developers who picked the narrow-AI approach
have accomplished something useful for AGI.
E.g.: Google, computer languages, network protocols, databases.

Pure AGI researchers have contributed nothing but disappointment in AI
ideas.



 Would you agree that splitting very complex and big project into
 meaningful parts considerably improves chances of success?

 Yes, sure ... but demanding that these meaningful parts

 -- be economically viable

 and/or

 -- beat competing, somewhat-similar components in competitions

 dramatically DECREASES chances of success ...

It INCREASES the chances of success. Dramatically.
There are lots of examples supporting this, both in the AI research field and
in virtually every other area of human research.





[agi] Self-building AGI

2007-11-29 Thread Dennis Gorelik
Ed,

 At the current stages this may be true, but it should be remembered that
 building a human-level AGI would be creating a machine that would itself,
 with the appropriate reading and training, be able to design and program
 AGIs.

No.
An AGI is not necessarily that capable. In fact, the first versions of AGI
will surely not be that capable.

Consider a Middle Ages peasant, for example. Such a peasant has general
intelligence (the "GI" part of AGI), right?
What kind of training would you provide to such a peasant in order to
make him design an AGI?




[agi] Lets count neurons

2007-11-29 Thread Dennis Gorelik
Matt,


 And some of the Blue Brain research suggests it is even worse.  A mouse
 cortical column of 10^5 neurons is about 10% connected,

What does "10% connected" mean?
How many connections does the average mouse neuron have?
10,000?

 but the neurons are arranged such that connections can be formed
 between any pair of neurons.  Extending this idea to the human brain, with 
 10^6 columns of 10^5 neurons
 each, each column should be modeled as a 10^5 by 10^5 sparse matrix,

Only a poor design would require a 10^5 by 10^5 matrix if every neuron
has to connect to only about 10,000 other neurons.

One pointer into a 2^17 (131072) address space requires 17 bits,
so 10,000 connections require 170,000 bits.
Even with a 4-bit weighting scale on every connection, that is no more
than 85,000 bytes per neuron.
85,000 bytes * 100,000 neurons = 8.5 * 10^9 bytes = 8.5 GB (hard disks of that
size were available on PCs ~10 years ago).
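
The same arithmetic as a sketch (my own; it assumes ~10,000 connections per
neuron, i.e. 10% of 10^5, with the 17-bit pointers and 4-bit weights used above):

NEURONS_PER_COLUMN = 100_000        # 10^5 neurons, addressable with 17 bits (2^17 = 131072)
CONNECTIONS_PER_NEURON = 10_000     # ~10% connectivity
POINTER_BITS = 17
WEIGHT_BITS = 4

bits_per_neuron = CONNECTIONS_PER_NEURON * (POINTER_BITS + WEIGHT_BITS)
bytes_per_neuron = bits_per_neuron / 8
total_gb = bytes_per_neuron * NEURONS_PER_COLUMN / 1e9

print(f"{bytes_per_neuron:.0f} bytes per neuron, {total_gb:.1f} GB per column")
# ~26250 bytes per neuron, ~2.6 GB per column -- comfortably below the 8.5 GB
# upper bound above, and far below a dense 10^5 x 10^5 matrix.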


But in fact a mouse's brain does far more than an AGI has to do.
For example, a mouse has strong image and sound recognition abilities;
AGI doesn't require that.
A mouse has to manage its muscles at a very fast pace;
AGI doesn't need that.
All these unnecessary features consume the lion's share of the mouse brain.
A mouse must also function in a far more stressful environment than an AGI must,
which again makes the mouse brain bigger than an AGI has to be.


 Perhaps there are ways to optimize neural networks by taking advantage of the
 reliability of digital hardware, but over the last few decades researchers
 have not found any.

Researchers have not found the appropriate intelligent algorithms yet. That
doesn't mean that the hardware is insufficient.

 For narrow AI applications, we can usually find better algorithms than neural
 networks, for example, arithmetic, deductive logic, or playing chess.  But
 none of these other algorithms are so broadly applicable to so many different
 domains such as language, speech, vision, robotics, etc.

Do you imply that an intelligent algorithm must be universal across
language, speech, vision, robotics, etc.?
In humans that's just not the case.
Different algorithms are responsible for vision, speech, language,
body control, etc.






Re[10]: [agi] Funding AGI research

2007-11-27 Thread Dennis Gorelik
Benjamin,

 Nearly any AGI component can be used within a narrow AI,

That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?

 but, the problem is, it's usually a bunch easier to make narrow AI's
 using components that don't have any AGI value...

I agree that many narrow AI projects are not very useful for a future
AGI project.

Still, an AGI-oriented researcher can pick appropriate narrow AI projects
in such a way that:
1) The narrow AI project is considerably less complex than the full AGI
project.
2) The narrow AI project is useful by itself.
3) The narrow AI project is an important building block for the full
AGI project.

Would you agree that splitting a very complex, big project into
meaningful parts considerably improves the chances of success?


 Another way to go -- use existing narrow AIs as prototypes when
 building AGI.

That's right.
The problem is that there are not enough narrow AIs at this point to
assemble an AGI in any reasonable amount of time.
I consider anything longer than 3 years unreasonable [and an almost
guaranteed failure].


 I don't really accept any narrow-AI as a prototype for an AGI.

OK.
How about a set of narrow AIs that cover different parts of AGI functionality?
Would that be a good prototype?

In any case, narrow AI prototype[s] are better than no prototype,
right?


 I think there is loads of evidence that narrow-AI prowess does not imply
 AGI prowess,

All other things being equal, would you invest in a researcher who
successfully developed a narrow AI, or in a researcher who did not?
:-)






Re[10]: [agi] Funding AGI research

2007-11-27 Thread Dennis Gorelik
Matt,

 --- Dennis Gorelik [EMAIL PROTECTED] wrote:
 Could you describe a piece of technology that simultaneously:
 - Is required for AGI.
 - Cannot be required part of any useful narrow AI.

 A one million CPU cluster.

Are you claiming that the computational power of the human brain is equivalent
to a one-million-CPU cluster?

My feeling is that the human brain's computational power is about the same
as a modern PC's.

AGI software is the missing part of AGI, not the hardware.



Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-27 Thread Dennis Gorelik
Matt,

  As for the analogies, my point is that AGI will quickly evolve to
 invisibility from a human-level intelligence.
 
 I think you underestimate how quickly performance deteriorates with the
 growth of complexity.
 AGI systems would have lots of performance problems in spite of fast
 hardware.

 No, I was not aware of that.  What is the relationship?

Performance problems would prevent AGIs from being considerably
better thinkers than humans.


 To visualize potential differences try to compare income of humans
 with IQ 100 and humans with IQ 150.
 The difference is not really that big.

 Try to visualize an Earth turned into computronium with an IQ of 10^38.

That would be a failed project. Such a creature wouldn't even start working.





Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-27 Thread Dennis Gorelik
Mike,

 I think you underestimate how quickly performance deteriorates with
 the growth of complexity.
 Dennis, you are stating what could be potentially an extremely important 
 principle.

It has been a very important principle for [hundreds of] years already.

Take a look at business. You can notice that even though
mass production gives a huge economic advantage to big companies,
small companies still survive.

The bigger the organization, the more management overhead there is.


The same goes for software: if a program gets 2 times bigger,
it gets ~4 times more complex (see the sketch below).
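
One small illustration of why this kind of scaling shows up (my own sketch,
not from the thread): the number of potential pairwise interactions between
components grows roughly quadratically, so doubling the parts roughly
quadruples the pairs.

def pairwise_interactions(n_components):
    return n_components * (n_components - 1) // 2

for n in (10, 20, 40):
    print(n, pairwise_interactions(n))   # 45, 190, 780: each doubling roughly quadruples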





Re[8]: [agi] Funding AGI research

2007-11-27 Thread Dennis Gorelik
Edward,

It seems that Cassimatis architects his AGI system as an assembly of
several modules.
That's the primary approach to designing any complex system.
I agree with such a modular architecture, but my "path to AGI"
statement was not exactly about that architecture.

My claim is that it's possible [and necessary] to split the massive amount
of work that has to be done for AGI into smaller narrow AI chunks in
such a way that every narrow AI chunk has its own business meaning
and can pay for itself.

That would guarantee that:
- Successful AI researchers will be rewarded in a timely manner.
- Unsuccessful AI researchers (who cannot successfully deliver even
one chunk of work) will lose their funding and be forced to
correct their approaches.

Sunday, November 25, 2007, 5:22:46 PM, you wrote:

 A few days ago there was some discussion on this list about the
 potential usefulness of narrow AI to AGI.  
  
 Nick Cassimatis, who is speaking at AGI 2008, has something he
 calls Polyscheme which is described partially at the following AGIRI
 link: http://www.agiri.org/workshop/Cassimatis.ppt
  
 It appears to use what are arguably narrow AI modules in a
 coordinated manner to achieve AGI.  



Re[2]: [agi] Funding AGI research

2007-11-27 Thread Dennis Gorelik
Richard,

 I had something very specific in mind when I said that,
 because I was meaning that in a complex systems AGI project, there is
 a need to do a massive, parallel search of a space of algorithms. This
 is what you might call a data collection phase.  It is because of the
 need for this data collection (*before* a prototype can be built).

What is that necessary data collection phase?
Why wouldn't Google's index knowledge base, plus thousands of enthusiasts who are
ready to chat with a new AI toy, be a sufficient data source?






Re[4]: [agi] Funding AGI research

2007-11-20 Thread Dennis Gorelik
Jiri,

 I'm professionally working on a top secret military project that supports
 the war on terror and I can see there is lots of data that, if
 processed in smarter ways could make a huge difference in the world.
 This is not really a single domain narrow AI task (though the related
 projects - as currently developed - are rather narrow AIs). We badly
 need smarter machines.

You need smarter programs.
So why not build them?
Start with weak AI programs. That would push the technology envelope
further and further, and in the end AGI will become possible.


 Any successful AGI prototype would attract investors anyway.

 At this time, we still need investors spending money to develop prototypes.

If developing a prototype requires significant investment, that means
the technology is not ready for full-scale AGI yet.

Weak AI is the way to go at this moment.


  Many bright folks would IMO like to work on AGI full time but limited
  resources force them to focus on other stuff.

 Other stuff is also important.

 True, but more full time AGI folks wouldn't hurt.

It would. More full-time AGI folks means fewer full-time folks on
other important projects.

 There are many hard problems to solve, including some which, if not
 solved correctly in relatively near future, could cause the money
 (made on those better paying tasks) having only a fraction of its
 current value. AGI could do so much for us that it would be IMO worth
 to immediately stop working on all non-critical projects and
 temporarily spend as many resources as possible on AGI RD.

What disastrous problems that cannot be solved without AGI are you
referring to?




Re[4]: [agi] Funding AGI research

2007-11-20 Thread Dennis Gorelik
Benjamin,

 Do you have any success stories of such research funding in the last
 20 years?
 Something that resulted in useful accomplishments.


 Are you asking for success stories regarding research funding in any domain,
 or regarding research funding in AGI?

Any domain, please.


 There were no success stories regarding manned spaceflight before Apollo

There were LOTS of prototypes before Apollo.
- Manned jets and submarines.
- Missiles.
- Satellites.


 were no success stories for genome-sequencing before it was first done, etc.

---
http://bioinfo.mbb.yale.edu/course/projects/final-4/
The short history of genome sequencing began with Frederic Sanger's invention 
of sequencing almost twenty-five years ago.
---


Note that the only time this article mentions funding is here:
===

Venter's H. Influenzae project had failed to win funding from the
National Institute of Health indicating the serious doubts surrounding
his ambitious proposal. It simply was not believed that such an
approach could sequence the large 1.8 Mb sequence of the bacterium
accurately. Venter proved everyone wrong and succeeded in sequencing
the genome in 13 months at a cost of 50 cents per base which was half
the cost and drastically faster than conventional sequencing.  
===

Venter succeeded when he was NOT funded.



Do you have an example of a research success with practical results in
a field that had:
- Extensive funding.
- No prototype?

Preferably within the last 20 years.




Re[2]: [agi] Funding AGI research

2007-11-20 Thread Dennis Gorelik
Richard,

 specific technical analysis of the AGI problem that I have made
 indicates that nothing like a 'prototype' is even possible until
 after a massive amount of up-front effort. 

I probably misunderstood you the first time.
I thought you meant that this massive amount of up-front effort must
be made within a single project.
But you probably don't mean that, right?

I agree that there is a massive amount of up-front effort required for
delivering AGI.
But this effort can be split into separate pieces.
All these pieces can be done in separate projects (weak AI projects).
Every such project can make its own business sense and would be able
to pay for itself. A good example of such a weak AI project is
Google.

That's why I claim that a huge up-front investment can be avoided, even
though a massive amount of up-front effort is required.

Do you agree?


 Billions of dollars would be exactly what I need:  I have a need for a
 large bank of parallelized exploration machines, and I have a need for
 large numbers of research assistants to undertake specific tasks.

That's what you need, but would that guarantee AGI delivery?





[agi] Finding analogies

2007-11-20 Thread Dennis Gorelik
Russell,

 The reason I didn't comment is because I
 don't have a solution - that is, I know how to write software that can
 draw certain kinds of analogies in certain contexts, but I don't know
 how to write software that can do it anywhere near as generally as
 humans can.

I just want to note that Google does exactly that: it finds analogies
to your search queries in any context.
Sometimes Google is better than humans at finding analogies (because
of its huge knowledge base); sometimes it's worse, because Google is
missing some mental processes that humans have.
In any case, Google already works quite well -- so why not use Google's
approach as a starting point for further improvements?




Re[4]: [agi] Danger of getting what we want [from AGI]

2007-11-20 Thread Dennis Gorelik
Matt,
  http://www.mattmahoney.net/singularity.html
 Could you allow comments under your article? That might be useful.

 I expect my remarks to be controversial and most people will disagree with
 parts of it,

Exactly. That's the major reason to have comments in the first place.


 As for the analogies, my point is that AGI will quickly evolve to invisibility
 from a human-level intelligence.

I think you underestimate how quickly performance deteriorates with the
growth of complexity.
AGI systems would have lots of performance problems in spite of fast
hardware.

Unmodified humans, on the other hand, would be considerably more
advanced than they are now, simply because all the technologies of an AGI
civilization would be available to humans as well.
So the gap won't really be that big.

To visualize the potential difference, try comparing the income of humans
with IQ 100 and humans with IQ 150.
The difference is not really that big.






Re[2]: [agi] Funding AGI research

2007-11-20 Thread Dennis Gorelik
Andrew,

 If you cannot solve interesting computer science problems that are
 likely to be simpler, then it is improbable that you'll ever be able
 to solve really hard interesting problems like AGI (or worse,  
 Friendly AGI).  I don't mean to disparage anyone doing AGI research,
 but if they are incapable of solving the easy problems, why should  
 anyone expect them to solve the hard problems?

That's right to the point.

That's why I think an AGI researcher should approximately follow
this path:
1) Successfully deliver a software project (it doesn't need to have any AI
technologies at all).
2) Enhance the software project with weak AI features, or create a weak AI
project from scratch.
3) Enhance the weak AI project into a strong AI (AGI) project, or create an AGI
project from scratch.




Re[6]: [agi] Funding AGI research

2007-11-20 Thread Dennis Gorelik
Benjamin,

 Are you asking for success stories regarding research funding in any domain,
 or regarding research funding in AGI?

 Any domain, please.

 OK, so your suggestion is that research funding, in itself, is worthless in 
 any domain?

No.
My point is that massive funding without a prototype prior to the
funding is worthless most of the time.
If a prototype cannot be created at reasonably low cost, then a fully working
product most likely cannot be created even with massive funding.





Re[6]: [agi] Funding AGI research

2007-11-20 Thread Dennis Gorelik
Jiri,

 AGI is IMO possible now but requires very different approach than narrow AI.

AGI requires properly tuning some existing narrow AI technologies,
combining them together, and maybe adding a couple more.

That's a massive amount of work, but most AGI research and development
can be shared with narrow AI research and development.


 It would. More full time AGI folks means less full time folks on
 other important projects.

 IMO worth it.

That's up to investors to decide.
Currently I don't think it's wise to invest my time and/or money into
AGI directly. Narrow AI projects [whose results can be used in AGI
later] are better target for investments.


  There are many hard problems to solve, including some which, if not
  solved correctly in relatively near future, could cause the money
  (made on those better paying tasks) having only a fraction of its
  current value.

 Problems that require hard-to-do evaluation of larger amounts of data
 from different domains before important decision deadlines. We could
 use narrow AI to do everything the AGI could do - except - we cannot
 always do it fast enough - which costs lives and/or unnecessary
 suffering you can see all around the world.

Your description of the problems does not qualify as
"could cause the money to have only a fraction of its current value".

Could you give an example of such a problem?



Re[8]: [agi] Funding AGI research

2007-11-20 Thread Dennis Gorelik
Benjamin,

 That's massive amount of work, but most AGI research and development
 can be shared with narrow AI research and development.

 There is plenty overlap btw AGI and narrow AI but not as much as you 
 suggest...

That's only because some narrow-AI products are not there yet.

Could you describe a piece of technology that simultaneously:
- Is required for AGI.
- Cannot be a required part of any useful narrow AI.


 Also:
 Narrow AI technologies are not meant to be combined together,

 so to build AGI out of narrow-AI components, you need to create narrow-AI 
 components that are
 **specifically designed** for integration into an AGI system...

Yes, that's one way to go (convert existing narrow AI into narrow AI with
similar functionality and pluggability).

Another way to go -- use existing narrow AIs as prototypes when
building AGI.
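
One way to picture narrow-AI components that are "specifically designed for
integration" is to give them a single shared interface, so an AGI core can route
tasks through them interchangeably. Below is a minimal Python sketch of that idea;
all class and method names are hypothetical illustrations, not Novamente's (or
anyone else's) actual API.

  # Minimal sketch of a pluggable narrow-AI component interface.
  # All names here are hypothetical illustrations.
  from abc import ABC, abstractmethod

  class NarrowAIComponent(ABC):
      """A narrow-AI module an AGI core can call through one shared contract."""

      @abstractmethod
      def can_handle(self, task: str) -> bool:
          """Return True if this component knows how to process the task."""

      @abstractmethod
      def process(self, data: str) -> dict:
          """Run the narrow skill and return results in a shared dictionary format."""

  class KeywordSearch(NarrowAIComponent):
      def __init__(self, corpus):
          self.corpus = corpus

      def can_handle(self, task: str) -> bool:
          return task == "search"

      def process(self, data: str) -> dict:
          hits = [doc for doc in self.corpus if data.lower() in doc.lower()]
          return {"component": "search", "hits": hits}

  class AGICore:
      """Routes each task to whichever plugged-in component claims it."""

      def __init__(self, components):
          self.components = components

      def run(self, task: str, data: str) -> dict:
          for component in self.components:
              if component.can_handle(task):
                  return component.process(data)
          return {"error": "no component for task " + task}

  core = AGICore([KeywordSearch(["black cat jumps", "my cat sleeps"])])
  print(core.run("search", "cat"))

The point of the shared contract is only that new narrow-AI modules can be plugged
in without changing the core's routing code.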



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=67475235-358e5d


Re[2]: [agi] Funding AGI research

2007-11-18 Thread Dennis Gorelik
Jiri,

 To DARPA, but some spending rules should go with it. In collaboration
 with universities and the AGI community, they IMO should:
 1) Develop framework(s) for AGI testing.

DARPA cares about technology that helps improve the military within a few
years. At this time that may mean weak AI partially replacing soldiers.

DARPA doesn't care about AGI now, because AGI cannot yield practical
military results within several years. I think that is the correct approach.


 3) Have a huge $ prize for AGIs that score highest in an annually (?)
 held AGI competition - which should make it attractive enough for
 investors/companies to get [more] involved.

Any successful AGI prototype would attract investors anyway.

But while there is no such prototype, there is no point in investing.


 5) Have some level of democracy in spending related decisions.

There is a very high level of investment democracy already. Investors
can invest their money into whatever they like.


 Many bright folks would IMO like to work on AGI full time but limited
 resources force them to focus on other stuff.

Other stuff is also important.
It could be a program that improves health-care efficiency or a genome
research project.
Why do you think AGI is more important than other, better-paying tasks?




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66316047-a80f3c


[agi] Funding AGI research

2007-11-17 Thread Dennis Gorelik
Jiri,

Give $1 for the research to whom?
A research team can easily eat millions of dollars without producing any useful
results.
If you just randomly pick researchers for investment, your chance of
getting any useful outcome from the project is close to zero.

The best investing practice is to invest only in teams that have
already produced a working prototype.
Serious funding is usually helpful only to scale a prototype up.
(See how it worked out for Google, for example.)

So far there is no working prototype of AGI, therefore there is no
point in investing.

On the other hand, some narrow-AI teams have already produced useful
results. Such teams deserve investment.
When the narrow-AI field is mature enough, making the next step to AGI will
be possible for a self-funded AGI research team.


Wednesday, October 31, 2007, 11:50:12 PM, you wrote:

 I believe AGI does need promoting. And it's IMO similar with the
 immortality research some of the Novamente folks are involved in. It's
 just unbelievable how much money (and other resources) are being used
 for all kinds of nonsense/insignificant projects worldwide. I wish
 every American gave just $1 for AGI and $1 for immortality research.
 Imagine what this money could do for all of us (if used wisely).
 Unfortunately, people will rather spend the money for their popcorn in
 a cinema.


 Godlike intelligence? :) Ok, here is how I see it: If we survive, I
 believe we will eventually get plugged into some sort of pleasure
 machine and we will not care about intelligence at all. Intelligence
 is a useless tool when there are no problems and no goals to think
 about. We don't really want any goals/problems in our minds.
 Basically, the goal is to not have goal(s) and safely experience as
 intense pleasure as the available design allows for as long as
 possible. AGI could be eventually tasked to take care of all what that
 takes + search for the system improvements and things that an altered
 human mind could consider being even better than feelings as we know
 them now. Many might think that they love someone so much that they
 would not tell him/her bye and get plugged into a pleasure machine,
 but I'm pretty sure they would change their mind after the first trial
 of a well designed device of that kind. That's how I currently see the
 best possible future. Some people, when talking about advanced aliens,
 are asking Where are they?.. Possibly, they are in such a pleasure
 machine and don't really care about anything, feeling like true gods
 in a world where concepts like intelligence are totally meaningless.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66235890-ee6bd5


[agi] Danger of getting what we want [from AGI]

2007-11-17 Thread Dennis Gorelik
Matt,

You are right that AGI may seriously weaken human civilization just by
giving humans what they want. Lots of individuals could succumb to some
form of pleasure machine.

On the other hand -- why would you worry about human civilization or
any civilization at all if you personally get what you want?



However, I don't think that civilization would be totally destroyed.
Natural selection would keep working.
Whoever succumbs to pleasure machines (or any other danger) will
simply lose the competition, but others will survive and adapt to new
technologies.


Thursday, November 1, 2007, 8:33:16 AM, you wrote:

 Your goals are selected by evolution.  There is a good reason why you fear
 death and then die.  You want what is good for the species.  We could
 circumvent our goals through technology, for example, uploading our brains
 into computers and reprogramming them.  When a rat can stimulate its nucleus
 accumbens by pressing a lever, it will forgo food, water, and sleep until it
 dies.  We worry about AGI destroying the world by launching 10,000 nuclear
 bombs.  We should be more worried that it will give us what we want.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66242228-cb472a


Re[2]: [agi] Nirvana? Manyana? Never!

2007-11-17 Thread Dennis Gorelik
Eliezer,

You asked that very personal question yourself, and now you blame
Jiri for asking the same?
:-)

Ok, let's take a look at your answer.
You said that you would prefer to be transported into a randomly selected
anime.

To my taste, Jiri's endless AGI-supervised pleasure is a much wiser
choice than yours
:-)


Friday, November 2, 2007, 10:48:51 AM, you wrote:

 Jiri Jelinek wrote:
 
 Ok, seriously, what's the best possible future for mankind you can imagine?
 In other words, where do we want our cool AGIs to get us? I mean
 ultimately. What is it at the end as far as you can see?

 That's a very personal question, don't you think?

 Even the parts I'm willing to answer have long answers.  It doesn't 
 involve my turning into a black box with no outputs, though.  Nor 
 ceasing to act, nor ceasing to plan, nor ceasing to steer my own 
 future through my own understanding of it.  Nor being kept as a pet.
 I'd sooner be transported into a randomly selected anime.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66243567-558723


[agi] Supergoals of the fittest species in AGI environment

2007-11-17 Thread Dennis Gorelik
Jiri,

You assume that when we are 100% done -- we will get what we
ultimately want.
But that's not exactly true.

The fittest species (whether computers, humans, or androids) will dominate
the world.

Let's talk about the set of supergoals that such fittest species will
have.

I think this set would include:
- Supergoal "Prevent being [self]destroyed".
- Supergoal "Prevent changing supergoals". That supergoal would also
try to prevent tampering with supergoals. I guess it will
have to become quite strong in an environment where it is
technologically possible to tweak supergoals.
- Supergoal "Reproduce". Supergoals of descendants would probably
vary slightly from the supergoals of the parent.
- Other supergoals, such as "Desire to learn", "Desire to speak", and
"Contribute to society".

Note that the fittest species will not really have the "Permanent
pleasure paradise" option.



Friday, November 2, 2007, 9:00:50 PM, you wrote:

 Choice to take particular action generates sub-goal (which might be
 deep in the sub-goal chain). If you go up, asking why? on each
 level, you eventually reach the feeling level where goals (not just
 sub-goals) are coming from. In short, I'm writing these words because
 I have reasons to believe that the discussion can in some way support
 my /or someone else's AGI R /or D. I want to support it because I
 believe AGI can significantly help us to avoid pain and get more
 pleasure - which is basically what drives us [by design]. So when we
 are 100% done, there will be no pain and an extreme pleasure. Of
 course I'm simplifying a bit, but what are the key objections?



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66250614-119592


Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Dennis Gorelik
Matt,

Your algorithm is too complex.
What's the point of doing step 1?
Step 2 is sufficient.

Saturday, November 3, 2007, 8:01:45 PM, you wrote:

 So we can dispense with the complex steps of making a detailed copy of your
 brain and then have it transition into a degenerate state, and just skip to
 the final result.

 http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
 Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
 4-bit logic function and positive reinforcement for both right and wrong
 answers, e.g.

   g++ autobliss.cpp -o autobliss.exe
   autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)

 Step 2. Kill yourself.  Upload complete.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66253555-746bb4


Re[2]: [agi] Funding AGI research

2007-11-17 Thread Dennis Gorelik
Richard,

 Although this seems like a reasonable stance, I don't think it is a
 strategy that will lead the world to the fast development (or perhaps
 any development) of a real AGI.

Nothing would lead to the fast development of a real AGI.
The development will be slow. It will be about constantly improving
AI-related technologies, trying to apply them, finding problems,
fixing them, making the next step in research & development, ...

 I agree you would not just pick researchers at random, but on the other
 hand if you insist on a team with a working prototype this might well
 be a disaster.

How would that be a disaster?
The worst thing that could happen is that AGI (strong AI) would be
developed a few years later than it could have been.

Note that AGI is not useful to us by itself. We want it to serve us.
Funding AGI without focusing on practical gain would probably produce
useless results (if any).


 I am in a position to use massive investment straight away (and I have a
 project plan that says how), but the specific technical analysis of the
 AGI problem that I have made indicates that nothing like a 'prototype'
 is even possible until after a massive amount of up-front effort.

That's not true.
Evolution worked its way from a single cell to intelligent species in
a very gradual fashion.
Claiming that incremental steps are impossible when we're trying to
copy Nature's accomplishments is just wrong.


 Catch 22.  No prototype, no investment;  no investment, no prototype.

 Investors are leery of sorry, no prototype! claims (with good reason,
 generally) but they are also not tech-savvy enough to comprehend the
 technical analysis that tells them that they should make an exception in
 this case.

If you want to claim an exception in your case -- the burden of proof
is on you.
Do you have any reasonable proof that your team would be a successful
exception to the "no prototype -- no investment" rule?

In general, I know that successful research usually doesn't need
massive investment. It needs a small team of bright people who can
work part time.
Money doesn't really help in breakthrough research. Money can help
bring a proven idea to production, though.

If you cannot develop an AGI prototype that is able to make
decisions properly in a safe and convenient environment (convenient
input, lots of time, limited help from the development team) -- then
pouring billions of dollars into the project won't help at all.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66310072-f837bd


Re[2]: [agi] Supergoals of the fittest species in AGI environment

2007-11-17 Thread Dennis Gorelik
Stefan,

Could you please explain how I could apply your research paper
http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-11-17_isotemp.pdf
to something useful?
It's a little too abstract for me, so an introduction that ties
this research to practical research/development goals would be quite
helpful.

Saturday, November 17, 2007, 3:19:37 PM, you wrote:

 On Nov 18, 2007 3:05 AM, Dennis Gorelik [EMAIL PROTECTED] wrote:
  You assume that when we are 100% done -- we will get what we
 ultimately want.
 But that's not exactly true.

 The most fittest species (whether computers, humans, or androids) will 
 dominate the world.

 Let's talk about set of supergoals that such fittest species will
 have.

 I think this set would include:
 - Supergoal Prevent being [self]destroyed.
 - Supergoal Prevent changing supergoals. That supergoal would also
 try to prevent tampering with supergoals. I guess that supergoal will
 have to become quite strong in the environment when it's
 technologically possible to tweak supergoals.
 - Supergoal reproduce. Supergoals of descendants would probably 
 slightly vary from supergoals of the parent.
 - Other supergoals, such as Desire to learn, Desire to speak, and 
 Contribute to
 society.

 Note, that the most fittest species will not really have Permanent 
 pleasure paradise option.



 Dennis, I believe the same and have recently finished organizing
 my thoughts on the matter in a paper: Practical Benevolence – a
 Rational Philosophy of Morality that is available at
 http://rationalmorality.info/

 Abstract: These arguments demonstrate the a priori moral nature of reality and
 develop the basic understanding necessary for realizing the logical maxim in
 Kant's categorical imperative[1] based on the implied goal of evolution[2]. 
 The
 maxim is used to proof moral behavior as obligatory emergent phenomenon
 among evolving interacting goal driven agents.

 Kind regards, 

 Stefan



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66310437-b4c064

Re[2]: [agi] Danger of getting what we want [from AGI]

2007-11-17 Thread Dennis Gorelik
Matt,

 On the other hand -- why would you worry about human civilization or
 any civilization at all if you personally get what you want?

 That is exactly the problem.  I wouldn't worry about reducing my own fitness.

Why do you worry about reducing your own fitness now?


 However I don't think that civilization would be totally destroyed.
 Natural selection would keep working.
 Whoever succumb to pleasure machines (or any other dangers) will
 simply lose the competition, but other would survive and adapt to new
 technologies.

 True, but it would not be anything like civilization as we know it.

That's the nature (and the goal) of progress.
Our current civilization is quite different from the Ancient Egyptian
civilization, and even more different from how dinosaurs lived.

A million years from now, civilization will be extremely different
from ours even if AGI is never invented.

Why do you worry about such change?

   I wrote
 my thoughts on this. http://www.mattmahoney.net/singularity.html

That's an interesting article.
I don't agree with everything, though.
For example, you compare human vs. super-AGI to bacteria vs. human.
But that's not a fair comparison.
Try comparing dog vs. human and bacteria vs. dog.
Would you still claim that a dog cannot observe a human just because
a bacterium doesn't see a dog?
:-)

Could you allow comments under your article? That might be useful.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66311711-3a19e4


Re[4]: [agi] Supergoals of the fittest species in AGI environment

2007-11-17 Thread Dennis Gorelik
Stefan,

Thanks, but it seems that the "ensuring that AGI is human-friendly"
problem is not really a problem that we need to solve at the moment.

Currently it is sufficient to test whether whatever system we develop:
1) Is useful for us.
2) Is not too harmful to us.

At later stages of AGI development it may become useful to find
algorithms that would constrain advanced AGI behavior, but
currently such a constraint would simply kill an AGI prototype.

Still, if you had a short list of tips about how to design & apply such
an AGI safety constraint -- that would be useful, but your article is
considerably longer and far more abstract than that.



Saturday, November 17, 2007, 11:51:13 PM, you wrote:

 On Nov 18, 2007 2:30 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:
  Stefan,

 Could you please explain, how could I apply your research paper:
 http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-11-17_isotemp.pdf
 to something useful?
 It's a little too abstract for me, so some introduction that binds
 this research to practical research/development goals would be quite
 helpful.



 For example one could apply the gained insights to develop a
 strategy for ensuring a  positive transcension. Due to the intrinsic
 moral nature of reality the term positive singularity becomes
 tautological, as anything that desires to exist has to act in a moral
 way to prevent its self annihilation. Bringing about the singularity
 thus becomes rather simple and can be achieved in the following way:
 - create an environment allowing for the existence of units of self-replicating information
 - ensure that the units of information can be acted upon by the forces of evolution
 - plant an arbitrary self replicator
 - wait
 This could be realized by using the BOINC architecture for
 distributed computing for creating a fuzzy copying environment to
 realize above plan. The copying 'fuzziness' i.e. error rate per
 copied bit, would have to be roughly proportional to the maximally
 complex self replicator in the system to allow for a gradual
 expansion of the system's complexity boundary and thus for the
 emergence of ever more rational agents.
 Once the rationality of the emerging agents would approach human
 levels they would realize M! and thus never become a threat to
 humanity.
  I am currently thinking further about how to ground the
 simulated reality in our reality and belief that could be achieved
 by providing an interface to the internet that the evolving rational
 agents can interact with using a browser like interface. 

 More soon under www.guidoborner.org
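
The copying loop Stefan describes can be illustrated with a toy simulation:
replicate bit strings with a per-bit error rate and let a crude fitness measure
decide which copies survive. This is only a sketch of the general idea with an
arbitrary fitness function, not the BOINC setup he has in mind.

  # Toy sketch of the "fuzzy copying" evolution loop described above.
  # The fitness function, error rate and population sizes are arbitrary.
  import random

  def fuzzy_copy(bits, error_rate):
      """Copy a bit string, flipping each bit with probability error_rate."""
      return "".join(b if random.random() > error_rate else ("1" if b == "0" else "0")
                     for b in bits)

  def fitness(bits):
      """Arbitrary stand-in fitness: number of 1-bits."""
      return bits.count("1")

  population = ["0" * 16]                      # plant an arbitrary replicator
  for generation in range(50):
      offspring = [fuzzy_copy(b, 0.02) for b in population for _ in range(4)]
      population = sorted(offspring, key=fitness, reverse=True)[:8]   # survivors

  print(max(population, key=fitness))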




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=66312571-304c15


[agi] Motivational system

2006-06-09 Thread Dennis Gorelik
William,

 It is very simple and I wouldn't apply it to everything that
 behaviourists would (we don't get direct rewards for solving crossword
 puzzles).

How do you know that we don't get direct rewards for solving crossword
puzzles (or any other mental task)?
Chances are that under a certain mental condition (an achievement state),
the brain produces some form of pleasure signal.
If there is no such reward, then what's your explanation for why people
like to solve crossword puzzles?



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Motivational system

2006-06-09 Thread Dennis Gorelik
William,

1) I agree that direct reward has to be in-built
(into the brain / AI system).
2) I don't see why direct reward cannot be used for rewarding mental
achievements. I think that this direct rewarding mechanism is
preprogrammed in the genes and cannot be used directly by the mind.
This mechanism can probably be cheated to a certain extent by the
mind. For example, the mind can claim that there is a mental achievement when
actually there is none.
That possibility of cheating with rewards is definitely a problem.
I think this problem is solved (in the human brain) by using only small
doses of mental reward (a sketch of this idea follows below).
For example, you can get small positive mental rewards by cheating your
mind into liking the solution to the 1+1=2 problem.
However, if you do it too often you'll eventually get hungry and
get a huge negative reward. This negative reward would not just stop you
from doing the 1+1=2 operation over and over; it would also reconfigure your
judgement mechanism, so you will no longer consider the 1+1=2 problem an
achievement.

Also, we are all familiar with what boredom is.
When you solve a problem once, it's boring to solve it again.
I guess that is another genetically programmed mechanism that
prevents cheating with mental rewards.

3) Indirect rewarding mechanisms definitely work too, but they are not
sufficient for bootstrapping a strong-AI-capable system.
Consider a baby. She doesn't know why it's good to play (alone or with
others). The indirect reward from childhood playing will come years later,
from professional success.
A baby cannot understand human language yet, so she cannot envision this
success.
An AI system would face the same problem.

My conclusion: indirect reward mechanisms (as you described them) would not be
able to bootstrap a strong-AI-capable system.

Back to a real baby: typically nobody explains to a baby that it's good to play.
But somehow babies/children like to play.
My conclusion: there are direct reward mechanisms in humans even for
things which are not directly beneficial to the system (like mental
achievements, speech, physical activity).
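
The two anti-cheating mechanisms from point 2 -- small doses of mental reward, and
boredom for repeated achievements -- can be sketched in a few lines. The reward
sizes and decay are arbitrary illustrative numbers, not a claim about how the brain
actually weights them.

  # Sketch of "small mental rewards + boredom" limiting reward cheating.
  # All numbers are arbitrary illustrative values.
  from collections import Counter

  class RewardSystem:
      MENTAL_REWARD = 0.1      # small dose per mental "achievement"
      HUNGER_PENALTY = -5.0    # large negative reward from an unmet bodily need

      def __init__(self):
          self.times_seen = Counter()

      def mental_reward(self, problem):
          """Repeating the same achievement is boring: the reward decays each time."""
          self.times_seen[problem] += 1
          return self.MENTAL_REWARD / self.times_seen[problem]

      def bodily_reward(self, hungry):
          return self.HUNGER_PENALTY if hungry else 0.0

  rs = RewardSystem()
  total = sum(rs.mental_reward("1+1=2") for _ in range(100))  # trying to cheat
  total += rs.bodily_reward(hungry=True)                      # hunger eventually dominates
  print(round(total, 2))   # negative: cheating with tiny mental rewards does not pay

Even a hundred repetitions of the cheap "achievement" cannot outweigh one large
bodily penalty, which is the stabilizing effect described above.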

Friday, June 9, 2006, 4:48:07 PM, you wrote:

 How do you know that we don't get direct rewards on solving crossword
 puzzles (or any other mental task)?

 I don't know, I only make hypotheses. As far as my model is concerned
 the structures that give direct reward have to be pretty much in-built
 otherwise for a selectionist system allowing a selected for behaviour
 to give direct reward would quickly lead to behaviour that gives
 itself direct reward and doesn't actually do anything.

 Chances are that under certain mental condition (achievement state),
 brain produces some form of pleasure signal.
 If there is no such reward, then what's your explanation why people
 like to solve crossword puzzles?

 Why? By indirect rewards! If you will allow me to slip into my
 economics metaphor, I shall try to explain my view of things. The
 consumer is the direct reward giver, something that attempts to mold
 the system to produce certain products, it doesn't say what is wants
 just what is good, by giving money ( direct reward).

 In humans this role played by the genome constructing structures that
 says nice food and sex is good, along with respect from your peers
 (probably the Hypothalamus and amygdala).

 The role of raw materials is played by the information coming from the
 environment. It can be converted to products or tools.

 You have retail outlets that interact directly with the consumer,
 being closest to the outputs they get directly the money that allows
 their survival. However they have to pass some of the money onto the
 companies that produced the products they passed onto the consumer.
 This network of money passing will have to carefully controlled so
 that more money isn't produced in one company than was given
 (currently I think of the network of dopaminergic neurons being this
 part).

 Now with this sort of system you can make a million just so stories
 about why one program would be selected that passes reward to another,
 that is give indirect reward. This is where the complexity kicks in.
 In terms of crossword solving one possibility is that a program closer
 to the output and with lots of reward has selected for rewarding
 logical problem solving because in general it is useful for getting
 reward and so passes reward on to a program that has proven its
 ability to logical problem solve, possibly entering into a deal of
 some sort.

 This is all very subconcious, as it is needed to be to be able to
 encompass and explain low level learning such as neural plasticity,
 which is very subconcious itself.

  Will Pearson
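
Will's money-passing picture resembles bucket-brigade-style credit assignment: a
program that receives reward keeps part of it and passes the rest back to the
programs whose output it used, so no reward is created inside the system. A rough
sketch under that reading (the program names are made up):

  # Rough sketch of the "money passing" idea: reward flows backward along the
  # chain of suppliers and the total amount is conserved.
  def distribute_reward(chain, reward, keep_fraction=0.5):
      """chain is ordered from the retail outlet (closest to output) back to suppliers."""
      balances = {}
      remaining = reward
      for program in chain:
          kept = remaining * keep_fraction
          balances[program] = kept
          remaining -= kept              # the rest is passed further upstream
      balances[chain[-1]] += remaining   # the last supplier keeps whatever is left
      return balances

  payout = distribute_reward(["answer_writer", "logic_solver", "word_memory"], 1.0)
  print(payout)                 # {'answer_writer': 0.5, 'logic_solver': 0.25, 'word_memory': 0.25}
  print(sum(payout.values()))   # 1.0 -- no money is created inside the system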


---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[2]: [agi] Can Google read?

2005-03-19 Thread Dennis Gorelik
Ben,

What exactly can Novamente do right now?
(What are the input and output of this meaning-extraction feature?
Can I test it?)

Wednesday, March 16, 2005, 8:40:57 AM, you wrote:


 Google's crawler does exactly that.
 It examines written pages and grasp meaning of these pages.
 Unfortunately Google doesn't extract as much meaning as humans, but it
 definetely can extract some meaning.

 What program extract more meaning from general text in natural
 language?

 In fact, Novamente does right now.

 As do commercial products by several companies such as (to name a couple out
 of a larger field) Attensity and Insightful.

 Google is simply not the technology leader in NL understanding...



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[4]: [agi] Verbs Subcategoriztion vs Direct word/phrase recognition in NL understanding

2005-03-19 Thread Dennis Gorelik
Ben,

By "direct knowledge" here I mean knowledge of the meaning of every
particular word and phrase in NL text.
That is, instead of remembering linguistic rules (like "in a statement the
verb goes after the noun"), the AI should remember that the word "cat" is used in
phrases like "cat catches", "black cat", "cat jumps", "my cat", etc.
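
Direct knowledge of this kind could be accumulated simply by recording, for each
word, the phrases it appears in. A tiny sketch of that idea on a toy corpus (this
is only an illustration, not a claim about how any existing system does it):

  # Tiny sketch of "direct knowledge" of word usage: for every word, remember the
  # two-word phrases it occurs in, instead of storing grammar rules.
  from collections import defaultdict

  def learn_phrases(sentences):
      knowledge = defaultdict(set)
      for sentence in sentences:
          words = sentence.lower().split()
          for left, right in zip(words, words[1:]):
              phrase = left + " " + right
              knowledge[left].add(phrase)
              knowledge[right].add(phrase)
      return knowledge

  corpus = ["black cat jumps", "my cat catches a mouse", "the cat sleeps"]
  knowledge = learn_phrases(corpus)
  print(sorted(knowledge["cat"]))
  # ['black cat', 'cat catches', 'cat jumps', 'cat sleeps', 'my cat', 'the cat']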


Wednesday, March 16, 2005, 8:20:06 AM, you wrote:

 Ben,

 Direct knowledge of every word/phrase meaning inevitably leads to
 implicit understanding of verb frames.

 I'm not really sure what you  mean by direct knowledge.  Do you mean
 knowledge that's grounded in a physical body's experience of a physical
 world?

 Why do you need to explicitly code verb frames understanding in your
 AI?

 As I keep emphasizing, we don't NEED to, it's just one interesting approach
 (building in some  NL  knowledge as a scaffolding to help support
 experiential learning)




---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[2]: [agi] LISP as a Strong AI development tool

2005-03-14 Thread Dennis Gorelik
Lukasz,

I don't see any practical use of the START system.
Do you?

The second reference doesn't work.

Monday, March 14, 2005, 6:04:53 AM, you wrote:

 Hi.

 I don't need detailed counter-arguments. Just give me one good example
 of strong AI implementation in LISP.
 What is the functionality of this LISP application?
 Why was LISP useful in this development?

 Not to involve into discussions about what is strong and what is not,
 you can find a lot of LISP - AI examlpes in AI textbooks. By the way
 LISP is now commonly considered an industrial-strenght language.
 To give you just two examples that do NLP:
 1) START http://start.csail.mit.edu/ 
 was implemented fully in LISP and on a specialized LISP machine
 It is the first system to answer questions in natural language and most
 probably it was the first just because they used LISP.
 2) Currently one of the most comprehensive english grammar parser and
 system to understand texts is LinGO written in Allegro Common LISP:
 http://lingo.stanford.edu:8000/erg 

 - lk



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[2]: [agi] Hard-coding Lojban syntax is a solved problem

2005-03-14 Thread Dennis Gorelik
Ben,

Let me clarify my question:
what are the input and output of this converter?

This reference:
http://www.goertzel.org/new_research/Lojban_AI.pdf
doesn't work...


Monday, March 14, 2005, 6:35:35 AM, you wrote:


 Robin Lee Powell's complete PEG grammar for Lojban is here:

 http://www.digitalkingdom.org/~rlpowell/hobbies/lojban/grammar/lojban.peg.tx
 t

 This is an exact algorithm for determining which Lojban sentences are
 syntactic and which are not.

 The tools that I think would be useful for doing serious AI with Lojban are
 described in my essay

 http://www.goertzel.org/new_research/Lojban_AI.pdf

 They would use Robin's grammar rules but require additional (easy) work as
 well.




---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Will be humans intelligent after implementation of strong AI?

2005-03-14 Thread Dennis Gorelik
Ben,

Imagine that strong AI is already implemented,
and software developers have an easy-to-use tool for implementing human-level
functionality.
Would you then claim that humans don't have general intelligence?
:-)


Monday, March 14, 2005, 6:35:35 AM, you wrote:

 
 Well,  the point of distinguishing artificial general
 intelligence from narrow AI  is the observation that it seems to
 be fairly easy to create specialized  software systems capable of
 emulating particular functionalities commonly  labeled
 intelligence, without building any kind of reflective,
 abstracting,  general-intelligence capability into the software.
 
 Google  exemplifies this fairly well. It's narrow AI.



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[2]: [agi] Google as a strong AI

2005-03-14 Thread Dennis Gorelik
Ben,

1) CYC --- I don't see why you consider CYC an intelligent
application.
From my point of view, CYC is on the same level of intelligence as
MS Word. Well, probably MS Word is even more intelligent.
At least MS Word works and produces nice and intelligent results (not
super-intelligent, though).

Does CYC have any practical use at all?

2) What is SOAR, and why do you consider it intelligent?

3) Novamente doesn't work --- please correct me if my information is
outdated.

4) Chess programs certainly don't have general intelligence. You may
claim that chess programs have very narrow and very specialized
intelligence at best.
Google definitely has general intelligence. Very wide.
Not very deep, though.

5) Google has very strong reflective capabilities.
http://www.google.com/search?sourceid=navclientie=UTF-8rls=GGLD,GGLD:2004-49,GGLD:enq=google


Monday, March 14, 2005, 6:35:35 AM, you wrote:

 Cyc, SOAR and Novamente are all closer to  strong AI than Google,
 since they can carry out a greater variety of  intelligence-like
 functions.
 
 According to my understanding, Google is not  significantlymore
 of a strong AI than Virtual Kasparov chess for the  Gameboy
 advance Neither is a general intelligence, neither has any 
 reflective capability, each does a particular thing and does it 
 well...
 
 ben



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Can Google read?

2005-03-14 Thread Dennis Gorelik
Ben,

What's your definition of reading?

What about this:
-
http://dictionary.reference.com/search?q=read
15. Computer Science. To obtain (data) from a storage medium, such as a 
magnetic disk.
-

Do you have any doubts now that Google can read?


But wait, let's consider definition #1:
-
http://dictionary.reference.com/search?q=read
1. To examine and grasp the meaning of (written or printed characters, words, 
or sentences).
-

Google's crawler does exactly that.
It examines written pages and grasps the meaning of those pages.
Unfortunately, Google doesn't extract as much meaning as humans do, but it
definitely can extract some meaning.

What program extracts more meaning from general text in natural
language?



MS Word is also on the way to strong AI, but MS Word is far behind
Google.
Actually, MS Word has features that lead away from strong AI. In
particular, I mean rule-based natural language processing.


Monday, March 14, 2005, 6:42:35 AM, you wrote:


 Your use of the words read and write and know is a bit too far
 stretched, IMO.

 By your definitions, MS-Word is also a strong AI, due to its spell-checker
 and grammar-checker and document summarizer... no?



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Rule based Natural Language processing

2005-03-14 Thread Dennis Gorelik
Ben,

You don't need many rules to process natural language.
If you have more than 100 rules, then your NL-processing model is
probably wrong.

These fewer-than-100 rules include rules for finding paragraphs, statements,
words and phrases in the input text,
plus a few more rules like that.
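
A few of those rules can be written down directly; the point is just that finding
paragraphs, statements, words and simple phrases takes a handful of rules, not a
large grammar. A minimal sketch (the exact rules are arbitrary illustrations):

  # Sketch of a few segmentation "rules": find paragraphs, statements (sentences),
  # words, and simple two-word phrases in the input text.
  import re

  def parse(text):
      paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
      result = []
      for paragraph in paragraphs:
          statements = [s.strip() for s in re.split(r"[.!?]+", paragraph) if s.strip()]
          for statement in statements:
              words = re.findall(r"[a-zA-Z']+", statement.lower())
              phrases = [a + " " + b for a, b in zip(words, words[1:])]
              result.append({"statement": statement, "words": words, "phrases": phrases})
      return result

  text = "Black cat jumps. My cat catches a mouse!\n\nThe cat sleeps."
  for item in parse(text):
      print(item["statement"], "->", item["phrases"])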

Monday, March 14, 2005, 6:42:35 AM, you wrote:

 What does make English statement more comlex than Lojban tanru?
 The complexity of the rule-lists and algorithms needed to parse and/or
 interpret them correctly...



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[2]: [agi] How to teach AI to understand NL

2005-03-13 Thread Dennis Gorelik
Ben,

 I think language should initially be taught
 in the context of interaction with the learner in a shared environment, not
 via analysis of texts.

 Reading of texts is important and must be learned, but AFTER language is
 learned in an experiential-interaction context.


The optimal way of teaching an AI natural language is iterative
application of 3 methods in turn (the order is not important):
- Massive reading.
- Interaction.
- Experiment.

With Lojban you would have to skip massive reading. That's not good.



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[4]: [agi] How to teach AI to understand NL

2005-03-13 Thread Dennis Gorelik
Ben,

1) You need to apply the Occam's razor principle:
why Lojban if you can do the same with English?

2) From a maintenance standpoint, massive reading is far less expensive
than interactive education.
In addition, massive reading is ~10...1000 times faster than
interaction.
Of course we cannot skip interaction in the AI teaching process, but we
need to replace it with massive reading as much as possible.

3) Later stages of learning through interaction are far less
expensive in English than in Lojban.
You can expose the learning AI through the Internet and let human
enthusiasts speak with your AI.
The effect would be negligible if you tried to do the same in Lojban
:-)


Sunday, March 13, 2005, 10:11:30 PM, you wrote:


 Hi,

 The optimal way of teaching AI to Natural language is iterative
 application of 3 methods by turns (order is not important):
 - Massive reading.
 - Interaction.
 - Experiment.

 With Lojban you will have to skip Massive reading. That's not good.

 Indeed, Lojban would cover the interaction and experiment parts only.

 Then the system would learn English as a second language and the massive
 reading phase would come..

 I think that language must be learned first via interaction w/ other minds
 in a shared environment.  Reading comes later.

 -- Ben





---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[4]: [agi] It won't be easier to design AI to understand Lojban

2005-03-13 Thread Dennis Gorelik
Ben,

 Lojban syntax is completely formally specified by a known set of rules;
 English syntax is not.

I'm pretty sure that live Lojban has a lot of exceptions to the
rules and cannot be fully formalized.
Humans introduce new rules into a living language and simply make errors.
Therefore you cannot rely much on predefined rules.

 2) Hard-coding Lojban syntax is a solved problem,

:-)
What exactly do you mean here?
Hard-coding Lojban syntax from what source is a solved problem?

 A moderately good analogy is COBOL versus LISP for AI programming.

Wow, great analogy!

Let me remind you that LISP is practically useless for strong AI
development.

The best results in practical strong AI implementation were achieved
by using C and C++ (the Google search engine).

What kind of practical AI applications were developed using LISP?


I'm not a huge fan of C and C++, but my point here is that industrial
languages are the best choice for almost any development.
Modern industrial languages are C#, VB, Java, C++ and, of course, SQL.
COBOL is an industrial software development language too (though not a modern
one). And LISP is not an industrial language.
That's why I think that more good AI applications were developed in COBOL
than in LISP. Let me know if I'm wrong about that.


Basically you are making the same mistake with Lojban as many
AI researchers did with LISP:
you are trying to replace a reliable tool with something fancy
(industrial languages with LISP, and English with Lojban).
The result of such innovations is just a DEAD END.



 The fundamental algorithmic difficulty of solving problems in COBOL is the
 same as doing so in LISP, but the practical difficulties are decreased a lot
 in the latter case.

 And it's nicer to think about tanru disambiguation (a single problem) than
 dozens of different cases of preposition and verb-argument-relation
 disambiguation.  (Lojban vs. English)


What prevents you from thinking about a full English statement as a
tanru? Why do you want to search for verb-argument relations and
similar linguistic machinery that is irrelevant to basic NL understanding?

Children have no idea about this linguistic machinery, but they handle natural
language very well.


---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Google as a strong AI

2005-03-13 Thread Dennis Gorelik
 If you think Google is strong AI then we have really different definitions
 of that term, what can I say...

 Google is narrow AI, if it's AI at all.  It's great, of course.. but ...


Ok, let's see:
1) Google reads natural language -- every natural language.
Google writes natural language.
2) Google constantly learns. Learning affects Google's behavior.
3) Google knows words and phrases. Google distinguishes between usual
and unusual (wrong/misspelled/...) words.
4) Google's memory is implemented in the form of a dynamic neural net.
5) Google has a huge memory.

Now it's your turn: what program is a better implementation of strong
AI than Google?



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Hard-coding Lojban syntax is a solved problem

2005-03-13 Thread Dennis Gorelik
Ben,

 Hard-coding Lojban syntax from what source is a solved problem?
 I mean there's a complete formal grammar for Lojban, see e.g.
 http://www.digitalkingdom.org/~rlpowell/hobbies/lojban/grammar/index.html

I see several converters from something to something.
What exactly would you recommend considering as the solution?
What are the source and the destination of this converter?

How can it be useful in handling tanru?



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] LISP as a Strong AI development tool

2005-03-13 Thread Dennis Gorelik
 Cobol is industrial software development language too (not modern
 though). And LISP is not an industrial language.
 That's why I think that more good AI applications were developed in COBOL
 than in LISP. Let me know if I'm wrong about it.

 Of course you're wrong, but this statement is so silly I'm not going to
 spend time making a detailed counterargument.

I don't need detailed counter-arguments. Just give me one good example
of a strong AI implementation in LISP.
What is the functionality of this LISP application?
Why was LISP useful in its development?

 In fact we use C++ and Java for Novamente, not LISP... but my impression is
 that Allegro LISP is actually pretty powerful.

Theoretically powerful.
:-)




---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] English statement vs Lojban tanru

2005-03-13 Thread Dennis Gorelik
Ben,

 The English analogues of tanru are just more complicated, that's all...

What makes an English statement more complex than a Lojban tanru?




---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Verbs Subcategoriztion vs Direct word/phrase recognition in NL understanding

2005-03-13 Thread Dennis Gorelik

Why do you want to search for verb-argument-relation and
 similar linguistic stuff which is irrelevant to basic NL
 understanding stuff???

 How can you say that the subcategorization frames of verbs are irrelevant to
 basic NL understanding  Nothing could be more essential...

Any reason why these subcategorization frames of verbs are
important for basic NL understanding?

Why would they be more important than direct knowledge of what
every word/phrase means in
the context of other words/phrases?

 Children have no idea about this linguistic stuff, but they handle natural
 language very well.
 Human children have evolved brains which were shaped by natural selection to
 understand human language.

 We can try to emulate the human-child's language learning process in a
 computer program if we want to

Very good idea.
At least try to understand, from the example of children, which features are
critical for NL processing and which features are just fancy extras.

 Or we can try a hybrid approach as I suggested in
 http://www.goertzel.org/papers/PostEmbodiedAI_June7.htm

I didn't find the word "hybrid" in your article.
Could you please clarify what you mean by a hybrid approach?



---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[4]: [agi] Unlimited intelligence. --- Super Goals

2004-10-22 Thread Dennis Gorelik
deering,

It seems that I agree with you ~70% of the time :-)

Let's focus on the 30% of differences and compare our understanding of
sub-goals and super-goals.

1) What came first: sub-goals or super-goals?
Super-goals are primary goals, aren't they?


 SUPERGOAL 1:  take actions which will aid the advancement of intelligence in the 
 universe.
 SUPERGOAL 2:  take actions which will aid in the continued survival and advancement 
 of me.
 SUPERGOAL 3:  do not take actions which will harm others.

2) Your examples of super-goals look like sub-goals to me --
highly abstract, intelligent sub-goals.
I call them sub-goals because they are not primary. They are derived
from basic human instincts.
One of these primary goals you named a sub-goal:
 SUBGOAL 1:  satisfy bodily needs, sex, food, sleep.
But to me it's definitely a super-goal.

By the way, I agree that other sub-goals in your list are really
sub-goals:
 SUBGOAL 2:  make money.
 SUBGOAL 3:  wear seatbelt in car.
 SUBGOAL 4:  build websites explaining the coming Singularity.
 SUBGOAL 5:  play with and read to son.
 SUBGOAL 6:  annoy Eliezer.
 SUBGOAL 7:  learn about nanotechnology.
 SUBGOAL 8:  do household chores.
 SUBGOAL 9:  use proper grammar and spelling.

3) Goals are not equal in value,
no matter whether they are super-goals or sub-goals.
They all have different weights.

4) Sometimes sub-goals can be more valuable than super-goals.
There are several possible reasons for that.
Let's compare the weight of sub-goal subA and super-goal superB
(a sketch follows after point 5):
- subA could serve super-goal superC.
If superC is far more important than superB, then subA could be more
important than superB.
- subA could serve several super-goals superD, superE, ...,
superZ. As a result, subA could be more valuable than superB.
- subA could be more suitable in the current situation, and therefore
more active than superB, which is not strongly related to the
current choice situation.

5) Only reinforcement matters when the system makes a decision.
If there is no (mental) force, there is no (mental) action.
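
Points 3 and 4 can be illustrated with a simple weighting scheme: a sub-goal's
effective weight is the sum of the weights of the super-goals it serves, scaled by
how relevant the goal is to the current situation. All numbers below are arbitrary
illustrations.

  # Sketch of points 3-5: goals carry weights, a sub-goal's weight is derived from
  # the super-goals it serves, and situational relevance scales the result.
  supergoal_weight = {"avoid_pain": 1.0, "eat": 0.8, "socialize": 0.5}

  # each sub-goal lists the super-goals it serves
  subgoal_serves = {
      "wear_seatbelt": ["avoid_pain"],
      "make_money": ["eat", "socialize", "avoid_pain"],
  }

  def effective_weight(goal, relevance):
      """relevance in [0, 1]: how strongly the goal applies to the current situation."""
      if goal in supergoal_weight:                      # super-goal: own weight
          base = supergoal_weight[goal]
      else:                                             # sub-goal: sum of what it serves
          base = sum(supergoal_weight[sg] for sg in subgoal_serves[goal])
      return base * relevance

  # in a job-offer situation the sub-goal outweighs a loosely related super-goal
  print(effective_weight("make_money", relevance=0.9))   # about 2.07
  print(effective_weight("socialize", relevance=0.3))    # about 0.15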

Friday, October 22, 2004, 3:02:11 AM, you wrote:

 All supergoals are equal in value.  Not all supergoals are
 applicable to all situations or decisions.  Subgoals are created,
 edited, or deleted to support supergoals.  Supergoals take
 precedence over subgoals.  Subgoals are more specific than
 supergoals and therefore more commonly directly applicable to
 situations or decisions.
  
 The subject chooses option 4 because it satisfies supergoal 1,
 which takes precedence over all subgoals despite producing no
 reinforcement.



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Dennis Gorelik
Deering,

I strongly disagree.
Humans have preprogrammed super-goals.
Humans don't have the ability to update their super-goals.
And humans are intelligent creatures, aren't they?

Moreover: a system which can easily redefine its super-goals is very
unstable.

At the same time an intelligent system has to be able to define its own
sub-goals (not super-goals). These sub-goals are set based on the
environment and the super-goals.

 True intelligence must be aware of the widest possible context
 and derive super-goals based on direct observation of that
 context, and then generate subgoals for subcontexts.  Anything with
 preprogrammed goals is limited intelligence.



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[2]: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Dennis Gorelik
Eugen,

 Yes? Can you show them in the brain coredump? Do you have such a coredump?

There is no coredump.
But we can observe human behavior.

 Humans don't have the ability to update their super-goals.

 What, precisely, is a supergoal, in an animal context?

There are many supergoals.

They include: the desire for physical activity, hunger, pain (actually the desire
to avoid pain), sexual attraction, and social instincts (like the desire to
chat).
There are many more supergoals. Not all of them are located in the
brain.

You cannot reprogram them. You can suppress some super-goals based
on other super-goals. But this is not reprogramming.
You can also destroy some super-goals with medical treatment. But this
kind of reprogramming is very limited.

Supergoals come with our genes.

 And humans are intelligent creatures, aren't they?
 
 Moreover: system which can easily redefine its super-goals is very
 unstable.
 
 At the same time intelligent system has to be able to define its own
 sub-goals (not super-goals). These sub-goals are set based on
 environment and super-goals.



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re[2]: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Dennis Gorelik
1) All supergoals are implemented in the form of reinforcers.

Not all reinforcers constitute supergoals.
Some reinforcers can be created as an implementation of sub-goals.
For instance: unconditional reflexes are supergoal reinforcers;
conditional reflexes are sub-goal reinforcers.


2) You say that supergoals are the top level of rules.
What do you mean by "top level"?
What are the differences between top-level rules and low-level rules?

3) You said: "You don't have to be a slave to the biologically
programmed drives you were born with."

When you follow your desires, it is not slavery, is it?

4) About stability.
Complex systems with a constant set of super-goals can be very different
even within similar environments.
For example, consider twins --- exactly the same set of
supergoals, very similar environments, different personalities.

The same supergoals in different environments can make a huge difference in
behavior.

A slight difference in supergoals increases the difference in behavior of
the complex system even more.

The same goes for stability. Systems are not very stable even with constant supergoals.

Systems with the ability to self-modify supergoals are TOO unstable.
Such systems are not safe at all.

That's why it makes no sense to allow self-modification of supergoals.

 Yes, we have instincts, drives built into our systems at a
 hardware level, beyond the ability to reprogram through merely a
 software upgrade.  These drives, sex, pain/pleasure, food, air,
 security, social status, self-actualization, are not supergoals,
 they are reinforcers.
  
 Reinforcers give you positive or negative feelings when they are encountered. 
  
 Supergoals are the top level of rules you use to determine a choice of behavior.
  
 You can make reinforcers your supergoals, which is what animals
 do because their contextual understanding, and reasoning ability is
 so limited.  People have a choice.  You don't have to be a slave to
 the biologically programmed drives you were born with.  You can
 perceive a broader context where you are not the center of the
 universe.  You can even imagine redesigning your hardware and
 software to become something completely different with no vestige of
 your human reinforcers.  
  
 Can a system choose to change its supergoal, or supergoals? 
 Obviously not, unless some method of supergoal change is
 specifically written into the supergoals.  People's supergoals
 change as they mature but this is not a voluntary process.  Systems
 can be designed to have some sensitivity to the external environment
 for supergoal modification.  Certainly systems with immutable
 supergoals are more stable, but stability isn't always desirable or
 even safe.



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Teaching AI's to self-modify

2004-07-04 Thread Dennis Gorelik
Ben,

1) Could you describe the architecture of your INLINK
interactive framework?
How is it going to handle natural language?

2) I doubt that it's possible to communicate in natural language
completely unambiguously. There will always be some uncertainty.
The intelligent system itself will have to decide how to interpret an
incoming message.

3) Could you give an example of a program that would be generated by
the Sasha programming framework?

Sunday, July 4, 2004, 8:58:01 AM, you wrote:

BG We're developing two powerful methods for communicating with Novamente:
 
BG 1) the INLINK interactive NL framework, which allows natural
BG language to be communicated to Novamente correctly and
BG unambiguously
 
BG 2) the Sasha programming framework, which allows the easy
BG construction of software programs that manipulate Novamente nodes
BG and links [and, rapid executable versions of these programs will
BG be produced via supercompilation, www.supercompilers.com].  Right
BG now, Novamente MindAgents, the processes that guide Novamente
BG cognition, are coded as C++ objects which are opaque to Novamente
BG cognition; but with the advent of Sasha, we'll be able to code
BG MindAgents as Novamente nodes and links which can in principle be
BG modified/improved/replaced by Novamente cognition.


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]