AW: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-07 Thread Dr. Matthias Heger

Mike Tintner wrote,

You don't seem to understand creative/emergent problems (and I find this 
certainly not universal, but v. common here).

If your chess-playing AGI is to tackle a creative/emergent problem (at a
fairly minor level) re chess - it would have to be something like: "find a
new way for chess pieces to move - and therefore develop a new form of
chess" (without any preparation other than some knowledge about different
rules and how different pieces in different games move). Or something like
"get your opponent to take back his move before he removes his hand from
the piece" - where some use of psychology, say, might be appropriate rather
than anything to do directly with chess itself.


In your example you leave the domain of chess rules.
There *are* already emergent problems within the domain of chess itself.
For example, I might observe that my chess program tends to move the queen
too early, or that it tends to attack the other side too late, and so on.
The programmer then has the difficult task of changing the program's
heuristics and parameters to obtain the right emergent behavior.
But this is possible.
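
As a minimal, purely illustrative sketch (my own, not taken from any real
engine): such a tendency can be tuned by giving the evaluation function an
explicit, adjustable penalty term. The class names, attributes and weight
below are all hypothetical.

from dataclasses import dataclass

@dataclass
class Piece:
    kind: str       # "P", "N", "B", "R", "Q" or "K"
    white: bool
    square: str

@dataclass
class Position:
    pieces: list
    move_number: int

EARLY_QUEEN_PENALTY = 0.5   # the knob the programmer tunes

def evaluate(pos: Position) -> float:
    """Toy evaluation: material balance minus a penalty for early queen sorties."""
    values = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
    score = sum(values[p.kind] * (1 if p.white else -1) for p in pos.pieces)
    if pos.move_number < 10 and any(
            p.kind == "Q" and p.white and p.square != "d1" for p in pos.pieces):
        score -= EARLY_QUEEN_PENALTY
    return score

pos = Position(pieces=[Piece("Q", True, "h5"), Piece("P", False, "e5")],
               move_number=4)
print(evaluate(pos))    # 9 - 1 - 0.5 = 7.5

Raising or lowering EARLY_QUEEN_PENALTY (or the move-number threshold) is
exactly the kind of parameter change that shifts the emergent opening
behavior without touching the search itself.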


I think you suppose that creativity is something very strange and mythical
which cannot be done by machines. I don't think so. Creativity is mainly
the ability to use and combine *all* the pieces of knowledge you have.
Human creativity only seems so mythical because the knowledge base is so
huge. Remember how many bits your brain receives every second, for many
years!
A chess program has knowledge only of chess, and that is the main reason it
can only do chess. But within chess, it can be creative.

You see an inherent algorithmic obstacle to creativity, but it is in fact
mainly a problem of knowledge.

So does a chess program have the same creativity as a human, if you are
fair and restrict the comparison to the domain and knowledge of chess?
The answer is yes! Very strong chess experts often describe a particular
move by a chess program as creative, spirited, clever, and so on.

- Matthias





AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-07 Thread Dr. Matthias Heger
Quantum-level biases would be more general and more correct, just as
quantum physics is more general and more correct than classical physics.

The reasons why humans do not have modern-physics biases for space and
time: such biases confer no relevant survival advantage, and the resource
costs of obtaining any such advantage are probably far too high for a
biological system.

But for future AGI (not the first level), these objections won't hold.
We don't need AGI to help us with middle-level physics. We will need AGI
to make progress in worlds where our innate intuitions do not hold, namely
nanotechnology and intracellular biology.
So there would be an advantage to quantum biases, and because of this
advantage the quantum biases would probably be used more often than
non-quantum biases.

And what about the resource costs? We could imagine an AGI brain the size
of a continent. Of course not for the first-level AGI. But I am sure that
future AGIs will have quantum biases.

But as Ben said: first we should build AGI with the biases we have and
understand.

And the three main problems of AGI should be solved first:
how to obtain knowledge, how to represent knowledge, and how to use
knowledge to solve different problems in different domains.
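
As a toy illustration of those three concerns (my own sketch, not any
particular AGI design), the same tiny knowledge store can acquire facts,
represent them as triples, and use them to answer queries; all names here
are invented:

class TripleStore:
    """Toy knowledge store: acquire facts, hold them as triples, answer queries."""

    def __init__(self):
        self.facts = set()                    # representation: (subject, relation, object)

    def learn(self, subject, relation, obj):  # acquisition
        self.facts.add((subject, relation, obj))

    def query(self, subject=None, relation=None, obj=None):   # use
        return [f for f in self.facts
                if (subject is None or f[0] == subject)
                and (relation is None or f[1] == relation)
                and (obj is None or f[2] == obj)]

kb = TripleStore()
kb.learn("queen", "is_a", "chess_piece")
kb.learn("knight", "is_a", "chess_piece")
print(kb.query(relation="is_a"))              # both facts come back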





Charles Hixson wrote:

I feel that an AI with quantum level biases would be less general. It 
would be drastically handicapped when dealing with the middle level, 
which is where most of living is centered. Certainly an AGI should have 
modules which can more or less directly handle quantum events, but I 
would predict that those would not be as heavily used as the ones that 
deal with the mid level. We (usually) use temperature rather than
molecule speeds for very good reasons.






Re: [agi] open or closed source for AGI project?

2008-10-07 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 8:13 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 A good idea and a euro will get you a cup of coffee. Whoever said you
 need to protect ideas is just shilly-shallying you. Ideas have no
 market value; anyone capable of taking them up already has more ideas
 of his own than time to implement them. Don't take my word for it,
 look around you; do you see people on this list going, "I'm ready to
 start work, someone give me an idea please"? No, you see people going,
 "here are my ideas", and other people going, "great, thanks, but I've
 already got my own".

 What people will pay for is to have their problems solved. If you want
 to get paid for AI, I think the best hope is to make it an open-source
 project and offer support, consultancy, etc. It's a model that has
 worked for other types of open-source software.

But how do you explain the fact that many of today's top financially
successful companies rely on closed-source software?  A recent example
is Google's search engine, which remains closed source.  If they had
open-sourced their search engine, my guess is that there would be many
more copy-cats now all over the world.

True, ideas are in abundance, but in the same design space people tend
to converge on the same ideas.  So competition depends on those few
ideas.  Also, there are innovative ideas that solve some bottleneck
problems, which are very valuable.

YKY




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Mike Tintner

Russell: Whoever said you need to protect ideas is just shilly-shallying
you. Ideas have no market value; anyone capable of taking them up already
has more ideas of his own than time to implement them.


In AGI, that certainly seems to be true - ideas are crucial, but they
require such a massive amount of implementation. That's why I find it silly
that Peter Voss and others - including Ben at times - refuse to discuss
their ideas. Even if, say, you have a novel idea for applying AGI or a
sub-AGI to some highly commercial field, it would still all depend on
implementation. The chance of someone stealing your idea is very remote.
And discussing your ideas openly will only improve them.


In many other creative fields, there can be reason to be secretive. If you
had an idea for some new, more efficient chemical, or a new way of treating
a chemical, for an electric battery, say, that could be very valuable and
highly stealable. Hence all those "formula" movies.







Re: [agi] universal logical form for natural language

2008-10-07 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 7:55 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Cyc's DB is not publicly modifiable, but it's **huge** ... big enough that
 its bulk would take others a really long time to replicate

A competent AGI should be able to absorb Cyc's knowledge, and I will
probably do so (unless it turns out to be very difficult). If the Cyc
KB is purely in FOL, it should be relatively easy.
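
How easy depends entirely on the actual representation. As a very rough
sketch (my own simplification, not real CycL, which adds quantifiers,
microtheories and much more), Lisp-style ground assertions could be read
into predicate/argument tuples like this:

import re

def parse_assertion(text):
    """Read a Lisp-style assertion like "(isa Fido Dog)" into (predicate, args)."""
    tokens = re.findall(r"[()]|[^\s()]+", text)

    def read(i):
        if tokens[i] == "(":
            expr, i = [], i + 1
            while tokens[i] != ")":
                node, i = read(i)
                expr.append(node)
            return expr, i + 1
        return tokens[i], i + 1

    expr, _ = read(0)
    return expr[0], tuple(expr[1:])

print([parse_assertion(s) for s in ["(isa Fido Dog)", "(genls Dog Mammal)"]])
# [('isa', ('Fido', 'Dog')), ('genls', ('Dog', 'Mammal'))]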

 Why don't you find out if you can do anything interesting w/ Cyc's existing
 **publicly available** DB, before setting about making your own.  You may
 find out, just like Cyc has, that possessing such a DB doesn't really get
 you anywhere in terms of creating AGI ... or even in terms of creating
 surpassingly useful narrow-AI systems...

I'm building a prototype AGI now, and will give it a very small test
KB so it can parse some simple sentences and do some inferences.
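
As a small concrete picture of "a very small test KB plus some inference"
(just my illustration, with invented facts and one hard-coded rule applied
by naive forward chaining):

facts = {("isa", "Fido", "dog"), ("isa", "dog", "mammal")}

def forward_chain(facts):
    """Apply isa(x, y) & isa(y, z) => isa(x, z) until nothing new is derived."""
    changed = True
    while changed:
        changed = False
        for (_, x, y) in list(facts):
            for (_, y2, z) in list(facts):
                if y == y2 and ("isa", x, z) not in facts:
                    facts.add(("isa", x, z))
                    changed = True
    return facts

print(forward_chain(facts))   # now also contains ('isa', 'Fido', 'mammal')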

The next step would be to absorb Cyc's KB and then let online users expand it.

Also, I will provide some learning algorithms to be used in
conjunction with user inputs -- for example, users can give examples
of reasoning in NL, and the AGI will learn the logical rules from
those examples.
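
As a deliberately crude sketch of what such learning might look like (my
own illustration under strong simplifying assumptions, not an actual
design): given a user's example already parsed into predicate/argument
form, one can induce a rule by replacing the shared constants with
variables:

def generalize(premise, conclusion):
    """Turn one parsed example into a rule by replacing constants with variables."""
    mapping = {}

    def var(constant):
        mapping.setdefault(constant, "?x%d" % len(mapping))
        return mapping[constant]

    (p_pred, p_args), (c_pred, c_args) = premise, conclusion
    return ((p_pred, tuple(var(a) for a in p_args)),
            (c_pred, tuple(var(a) for a in c_args)))

# "Socrates is a man" / "Socrates is mortal"  ->  man(?x0) => mortal(?x0)
print(generalize(("man", ("Socrates",)), ("mortal", ("Socrates",))))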

This seems to me to be the right way towards AGI...

YKY




Re: [agi] universal logical form for natural language

2008-10-07 Thread Ben Goertzel
Cyc's DB is not publicly modifiable, but it's **huge** ... big enough that
its bulk would take others a really long time to replicate

Why don't you find out if you can do anything interesting w/ Cyc's existing
**publicly available** DB, before setting about making your own.  You may
find out, just like Cyc has, that possessing such a DB doesn't really get
you anywhere in terms of creating AGI ... or even in terms of creating
surpassingly useful narrow-AI systems...

ben

On Tue, Oct 7, 2008 at 1:00 AM, YKY (Yan King Yin) 
[EMAIL PROTECTED] wrote:

 On Tue, Oct 7, 2008 at 11:50 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  I still don't understand why you think a simple interface for entering
 facts
  is so important... Cyc has a great UI for entering facts, and used it to
  enter millions of them already ... how far did it get them toward AGI???

 Does Cyc have a publicly modifiable AND centrally maintained KB?
 That's what I'm trying to make...

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome" - Dr Samuel Johnson





Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Russell Wallace
On Tue, Oct 7, 2008 at 1:47 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 But how do you explain the fact that many of today's top financially
 successful companies rely on closed-source software?  A recent example
 is Google's search engine, which remains closed source.

Nobody paid Google for their idea. Nobody paid them for their
prototype code. What they got paid for was *solving people's problems*
-- delivering a better search service.

I'm not saying every project has to be open source. I'm saying revenue
will accrue to an AI if and only if, when and because it solves
people's problems. If you think you can get to that stage by your own
labor alone, go for it. If you think you can persuade a venture
capitalist to fund you to hire a team to do it, go for it. If you
think you're charismatic enough to get people to fund you as charity,
go for it. If you think you can by some other method make it work as a
closed source project, go for it. If not, make it open source.

But whichever route you pick, follow it with conviction. If you flag
your project open source and then start talking about protecting
your ideas and trying to measure the exact value of everybody's
contributions so everybody gets just what's coming to them and no
more, people will avoid it like a week-dead rat. You might have the
best intentions in the world, but those intentions need to come across
clearly and unambiguously in how you present your strategy.




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 9:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:

 But whichever route you pick, follow it with conviction. If you flag
 your project open source and then start talking about protecting
 your ideas and trying to measure the exact value of everybody's
 contributions so everybody gets just what's coming to them and no
 more, people will avoid it like a week-dead rat. You might have the
 best intentions in the world, but those intentions need to come across
 clearly and unambiguously in how you present your strategy.

I was trying to find a way for us to collaborate on one project, but
people don't seem to like the virtual-credit idea.

Even if I go open source, the number of significant contributors may
still be 0 (besides myself).

YKY




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Russell Wallace
A good idea and a euro will get you a cup of coffee. Whoever said you
need to protect ideas is just shilly-shallying you. Ideas have no
market value; anyone capable of taking them up already has more ideas
of his own than time to implement them. Don't take my word for it,
look around you; do you see people on this list going, "I'm ready to
start work, someone give me an idea please"? No, you see people going,
"here are my ideas", and other people going, "great, thanks, but I've
already got my own".

What people will pay for is to have their problems solved. If you want
to get paid for AI, I think the best hope is to make it an open-source
project and offer support, consultancy, etc. It's a model that has
worked for other types of open-source software.




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread Jiri Jelinek
Mike,

 The chance of someone stealing your idea is v. remote.

There are many companies that made a fortune with stolen ideas (e.g.,
Microsoft). But of course they are primarily after proven ideas.

YKY,

If practically doable, I would recommend closed source, utilizing (and
possibly developing) as many open-source components as possible. Also
consider safety issues - a functional AGI in the wrong hands. The level of
my safety concern would be influenced (in part) by how the AGI learns -
e.g. whether the user's ability to teach the system is the tricky part
which will ultimately make the system dumb or smart, OR whether the
prototype learns more or less on its own.

Jiri




OFFLIST [agi] Readings on evaluation of AGI systems

2008-10-07 Thread Colin Hales

Hi Ben,
A good bunch of papers.

(1) Hales, C. 'An empirical framework for objective testing for 
P-consciousness in an artificial agent', The Open Artificial 
Intelligence Journal vol.? , 2008.

Apparently it has been accepted but I'll believe it when I see it.

It's highly relevant to the forum you mentioned. I was particularly
interested in the Wray and Lebiere work... my paper (1) would hold that
the problem in their statement "Taskability is difficult to measure
because there is no absolute notion of taskability -- a particular
quantitative measure for one domain might represent the best one could
achieve, while in another, it might be a baseline" is solved. An
incidental byproduct of executing the test is that all the other
metrics in their paper are delivered to some extent. Computationalist AI
subjects will fail the test in (1). Humans won't. A real AGI will pass.
Testing has been a big issue for me and has taken quite a while to sort
out. Peter Voss's AI will fail it, as will everything based on NUMENTA
products... but they can try! The test can speak for itself. Objective
measurement of outward agent behaviour is decisive. All contenders have
the same demands made of them... the only requirement is that only
verifiably autonomous, embodied agents need apply.


I don't know if this is of interest to anyone, but I thought I'd mention it.

regards,
Colin

Ben Goertzel wrote:


Hi all,

In preparation for an upcoming (invitation-only, not-organized-by-me) 
workshop on Evaluation and Metrics for Human-Level AI systems, I 
concatenated a number of papers on the evaluation of AGI systems into 
a single PDF file (in which the readings are listed alphabetically in 
order of file name).


In case anyone else finds this interesting, you can download the 
single PDF file from


http://goertzel.org/AGI_Evaluation.pdf

It's 196 pages of text ... I don't condone all the ideas in all the 
papers, nor necessarily consider all the papers incredibly fascinating 
... but it's a decent sampling of historical thinking in the area by a 
certain subclass of AGI-ish academics...


ben







