Re: [agi] Philosophy of General Intelligence

2008-09-09 Thread Mike Tintner

Narrow AI : Stereotypical/ Patterned/ Rational

Matt:  Suppose you write a program that inputs jokes or cartoons and outputs 
whether or not they are funny


AGI : Stereotype-/Pattern-breaking/Creative

"What you rebellin' against?"
"Whatcha got?"

Marlon Brando, The Wild One (1953). On screen, he rebelled against "the 
man"; offscreen, he rebelled against the rebel stereotype imposed on him. 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-09 Thread William Pearson
2008/9/8 Benjamin Johnston <[EMAIL PROTECTED]>:
>
> Does this issue actually crop up in GA-based AGI work? If so, how did you
> get around it? If not, would you have any comments about what makes AGI
> special so that this doesn't happen?
>

Does it also happen in humans? I'd say yes, so it may be a problem we
can't avoid, only mitigate, by having communities of intelligences share
ideas so that they can shake each other out of their maxima, assuming
they settle in different ones (different search landscapes and priors
help with this). The community might reach a maximum as well, but the
world isn't constant, so good ideas might not always stay good; that
changes the search landscapes, and a maximum may not remain a maximum.
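Will's mitigation - separate searchers with different priors, periodically sharing ideas - resembles the island model used with genetic algorithms. A minimal sketch, assuming a toy one-dimensional landscape (all functions and parameters below are illustrative, not from any AGI system):

```python
import math
import random

def fitness(x):
    # A multimodal landscape with several local maxima on [0, 1].
    return math.sin(5 * x) + 0.5 * math.sin(13 * x)

def hill_climb(x, steps=200, step_size=0.02):
    # Greedy local search: accepts only improving moves, so it gets
    # stuck on whichever local maximum it starts near.
    for _ in range(steps):
        cand = x + random.uniform(-step_size, step_size)
        if 0.0 <= cand <= 1.0 and fitness(cand) > fitness(x):
            x = cand
    return x

def island_search(n_islands=8, rounds=5):
    # Each "island" starts from a different prior (initial point).
    islands = [random.random() for _ in range(n_islands)]
    for _ in range(rounds):
        islands = [hill_climb(x) for x in islands]
        best = max(islands, key=fitness)
        # Sharing ideas: restart everyone near the current best,
        # perturbed enough to be shaken out of a shared maximum.
        islands = [min(1.0, max(0.0, best + random.gauss(0, 0.1)))
                   for _ in islands]
    return max((hill_climb(x) for x in islands), key=fitness)

random.seed(0)
lone = hill_climb(random.random())
shared = island_search()
print(f"lone climber: fitness {fitness(lone):.3f}")
print(f"community:    fitness {fitness(shared):.3f}")
```

The community is not guaranteed to find the global maximum either, which matches Will's caveat; it only escapes the maxima that the separate searchers happen not to share.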

  Will




[agi] any advice

2008-09-09 Thread Valentina Poletti
I am applying for a research program and I have to choose between these two
schools:

Dalle Molle Institute of Artificial Intelligence
University of Verona (Artificial Intelligence dept)





Re: [agi] any advice

2008-09-09 Thread Jan Klauck
> Dalle Molle Institute of Artificial Intelligence
> University of Verona (Artificial Intelligence dept)

If they were corporations, from which one would you buy shares?

I would go for IDSIA. I mean, hey, you have Schmidhuber around. :)

Jan




Re: [agi] any advice

2008-09-09 Thread Pei Wang
IDSIA is AGI-related, though you need to love math and theoretical
computer science to work with Schmidhuber.

Don't know about Verona.

Pei

On Tue, Sep 9, 2008 at 8:27 AM, Valentina Poletti <[EMAIL PROTECTED]> wrote:
> I am applying for a research program and I have to chose between these two
> schools:
>
> Dalle Molle Institute of Artificial Intelligence
> University of Verona (Artificial Intelligence dept)




Re: [agi] Re: AI isn't cheap

2008-09-09 Thread Matt Mahoney
--- On Mon, 9/8/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
On 9/7/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: 

>>The fact is that thousands of very intelligent people have been trying
>>to solve AI for the last 50 years, and most of them shared your optimism.
 
>Unfortunately, their positions as students and professors at various
>universities have forced almost all of them into politically correct
>paths, substantially all of which lead nowhere, for otherwise they would
>have succeeded long ago. The few mavericks who aren't stuck in a
>university (like those on this forum) all lack funding.

Google is actively pursuing AI and has money to spend. If you have seen some of 
their talks, you know they are pursuing some basic and novel research.

>>Perhaps it would be more fruitful to estimate the cost of automating the
>>global economy. I explained my estimate of 10^25 bits of memory, 10^26
>>OPS, 10^17 bits of software and 10^15 dollars.

You want to replicate the work currently done by 10^10 human brains. A brain 
has 10^15 synapses. A neuron axon has an information rate of 10 bits per 
second. As I said, you can argue about these numbers but it doesn't matter 
much. An order of magnitude error only changes the time to AGI by a few years 
at the current rate of Moore's Law.
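The figures above can be checked with back-of-envelope arithmetic (the inputs are exactly the order-of-magnitude assumptions just stated):

```python
# Back-of-envelope check of the hardware figures quoted above.
brains    = 10**10  # roughly the working human population
synapses  = 10**15  # synapses per brain
axon_rate = 10      # bits per second per connection

memory_bits = brains * synapses   # total synapses ~ bits of memory
ops = memory_bits * axon_rate     # operations per second

print(f"memory: 10^{len(str(memory_bits)) - 1} bits")
print(f"speed:  10^{len(str(ops)) - 1} OPS")
# → memory: 10^25 bits, speed: 10^26 OPS
```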

Software is not subject to Moore's Law, so its cost will eventually dominate. A 
human brain has about 10^9 bits of knowledge, of which probably 10^7 to 10^8 
bits are unique to each individual. That makes 10^17 to 10^18 bits that have to 
be extracted from human brains and communicated to the AGI. This could be done 
in code or formal language, although most of it will probably be done in 
natural language once this capability is developed. Since we don't know which 
parts of our knowledge are shared, the most practical approach is to dump all of 
it and let the AGI remove the redundancies. This will require a substantial 
fraction of each person's lifetime, so it has to be done in unobtrusive 
ways, such as recording all of your email and conversations (which, of course, 
all the major free services already do).

The cost estimate of $10^15 comes from estimating the world GDP ($66 trillion per 
year in 2006, increasing 5% annually) from now until we have the hardware to 
support AGI. We have the option to have AGI sooner by paying more. Simple 
economics suggests we will pay up to what it is worth.
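A rough reconstruction of the $10^15 figure, assuming a 30-year wait for AGI-capable hardware (the horizon is my assumption, not stated in the post):

```python
# Cumulative world GDP from $66 trillion (2006), growing 5% per year,
# summed over an assumed 30-year wait for AGI-capable hardware.
gdp, growth, years = 66e12, 1.05, 30
total = 0.0
for _ in range(years):
    total += gdp
    gdp *= growth
print(f"cumulative GDP over {years} years: ${total:.1e}")
```

On these assumptions the cumulative figure lands within a factor of a few of $10^15, which is all an order-of-magnitude estimate claims.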

-- Matt Mahoney, [EMAIL PROTECTED]







[agi] Artificial humor

2008-09-09 Thread Matt Mahoney
Here is a model of artificial humor: a machine that tells jokes, or at least 
inputs jokes and outputs whether or not they are funny. Identify associations of the 
form (A ~ B) and (B ~ C) in the audience language model where (A ~ C) is 
believed to be false or unlikely through other associations. Test whether the 
joke activates A, B, and C by association to induce the association (A ~ C).

This approach differs from pattern recognition and machine learning techniques 
used in other text classification tasks such as spam detection or information 
retrieval: a joke is only funny the first time you hear it. That's because once 
you form the association (A ~ C), it is added to the language model and you no 
longer have the prerequisites for the joke.

Example 1:
Q. Why did the chicken cross the road?
A. To get to the other side.

(I know, not funny, but pretend you haven't heard it).  We have:
A ~ B: Chickens have legs and can walk.
B ~ C: People walk across the road for a reason.
A ~ C: Chickens have human-like motivations.

Example 2 requires a longer associative chain:
(A comment about Sarah Palin) A vice president who likes hunting. What could go 
wrong?

It invokes the false conclusion: (Sarah Palin ~ hunting accident) by inductive 
reasoning: (Sarah Palin ~ vice president ~ Dick Cheney ~ hunting accident) and 
(Sarah Palin ~ hunting ~ hunting accident).  Note that all of the preconditions 
must be present for the joke to work. For example, the joke would not be funny 
if told about Joe Biden (doesn't hunt), George W. Bush (not vice president), or 
if you were unaware of Dick Cheney's hunting accident or that he was vice 
president. In order for a language model to detect the joke as funny, it would 
have to know that you know all four of these facts and also know that you 
haven't heard the joke before.
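The association test described above can be sketched over a toy belief set (the facts and the set-of-pairs representation are illustrative; a real system would need the full language model the post describes):

```python
# Toy association model: a set of believed pairwise associations.
# A joke is "funny" if it induces a link (A ~ C) that is not yet
# believed, via some B with (A ~ B) and (B ~ C) both believed.
believed = {
    ("chicken", "walking"), ("walking", "crossing roads"),
    ("people", "crossing roads"), ("people", "motivation"),
}

def associated(a, b):
    return (a, b) in believed or (b, a) in believed

def is_funny(a, c):
    if associated(a, c):
        return False  # already believed: you've heard this one
    items = {x for pair in believed for x in pair}
    return any(associated(a, b) and associated(b, c)
               for b in items if b not in (a, c))

print(is_funny("chicken", "crossing roads"))  # → True, via "walking"
believed.add(("chicken", "crossing roads"))   # the punchline sinks in
print(is_funny("chicken", "crossing roads"))  # → False: only funny once
```

The second call returning False is the "only funny the first time" property: once (A ~ C) joins the model, the prerequisites for the joke are gone.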

Humor detection obviously requires a sophisticated language model and knowledge 
of popular culture, current events, and what jokes have been told before. Since 
entertainment is a big sector of the economy, an AGI needs all human knowledge, 
not just knowledge that is work related.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial humor

2008-09-09 Thread Mike Tintner

Matt,

Humor depends not on inductive reasoning by association, reversed or 
otherwise, but on the crossing of whole matrices/spaces/scripts... and that 
good old AGI standby, domains. See Koestler especially for how it's one 
version of all creativity -


http://www.casbs.org/~turner/art/deacon_images/index.html

Solve humor and you solve AGI. 







Re: [agi] Re: AI isn't cheap

2008-09-09 Thread Steve Richfield
Matt,

On 9/9/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Mon, 9/8/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
> On 9/7/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> >>The fact is that thousands of very intelligent people have been trying
> >>to solve AI for the last 50 years, and most of them shared your optimism.
>
> >Unfortunately, their positions as students and professors at various
> >universities have forced almost all of them into politically correct
> >paths, substantially all of which lead nowhere, for otherwise they would
> >have succeeded long ago. The few mavericks who aren't stuck in a
> >university (like those on this forum) all lack funding.
>
> Google is actively pursuing AI and has money to spend.


Maybe I am a couple of years out of date here, but the last time I looked,
they were narrowly interested in search capabilities and not at all
interested in linking up fragments from around the Internet, filling in
missing metadata, problem solving, and the other sorts of things that are in
my own area of interest. I attempted to interest them in my approaches, but
got blown off apparently because they thought that my efforts were in a
different direction than their interests. Have I missed something?



> If you have seen some of their talks,


I haven't. Are any of them available somewhere?



> you know they are pursuing some basic and novel research.


Outside of searching?



> >>Perhaps it would be more fruitful to estimate the cost of automating the
> >>global economy. I explained my estimate of 10^25 bits of memory, 10^26
> >>OPS, 10^17 bits of software and 10^15 dollars.
>
> You want to replicate the work currently done by 10^10 human brains. A
> brain has 10^15 synapses. A neuron axon has an information rate of 10 bits
> per second. As I said, you can argue about these numbers but it doesn't
> matter much. An order of magnitude error only changes the time to AGI by a
> few years at the current rate of Moore's Law.
>
> Software is not subject to Moore's Law so its cost will eventually
> dominate.


Here I could write a book and more. It could and should obey Moore's law,
but history and common practice have gone in other directions. Starting with
the Bell Labs Interpretive System on the IBM 650 and probably peaking at
Remote Time Sharing in 1970, methods of bootstrapping a succession of higher
capabilities to grow exponentially have been known. Imagine a time-sharing
system with FORTRAN/ALGOL/BASIC all rolled into one memory-resident compiler,
significance arithmetic, etc., servicing many of the high schools in Seattle
(including Lakeside, where Bill Gates and Paul Allen learned on it), all on
the equivalent of a Commodore 64. Some of the customers complained about only
having 8 KB of Huffman-coded macro-instructions to hold their programs, until
a chess-playing program that ran in that 8 KB and never lost a game appeared
in the library. Then came the microprocessors, and all this has been
forgotten. Microsoft sought to "do less with less" without ever realizing
that the really BIG machine they learned on (and which they still have yet to
equal) was only the equivalent of a Commodore 64. I wrote that compiler and
chess game.

No, the primary limitation is cultural. I have discussed here how to make
processors that run 10,000 times faster, and how to build a scanning UV
fluorescent microscope that diagrams brains. The SAME thing blocks both -
culture. Intel is up against EXACTLY the same mind block that IBM was up
against when for decades they couldn't move beyond Project Stretch, and
there simply isn't any area of study into which a Scanning UV fluorescence
microscope now cleanly falls, of course because without the microscope, such
an area of study could not develop. Things are now quite stuck until either
the culture changes (don't hold your breath) or the present generations of
"experts" (including us) die off.

At present, I don't expect to see any AGIs in our lifetime, though I do
believe that with support, one could be developed in 10-20 years. This can't
happen until someone gives the relevant sciences a new name, stops respecting
present corporate and university structure (e.g. the assumption that PhDs
have anything but negative value), and injects ~$10^9 to start. Of course,
this requires independent rather than corporate or university money - some
rich guy who sees the light. Until I meet this guy, I'm sticking to
tractable projects like Dr. Eliza.



> A human brain has about 10^9 bits of knowledge, of which probably 10^7 to
> 10^8 bits are unique to each individual. That makes 10^17 to 10^18 bits that
> have to be extracted from human brains and communicated to the AGI. This
> could be done in code or formal language, although most of it will probably
> be done in natural language once this capability is developed.


It would be MUCH easier and cheaper to just scan it out with something like
a scanning UV fluorescent microscope.



> Since we don't know which p

Re: [agi] Re: AI isn't cheap

2008-09-09 Thread Matt Mahoney
(Top posting because Yahoo won't quote HTML email)

Steve,
Some of Google's tech talks on AI are here:
http://www.google.com/search?hl=en&q=google+video+techtalks+ai&btnG=Search

Google has an interest in AI because search is an AI problem, especially if you 
are searching for images or video. Also, their advertising model could use some 
help. I often go to data compression sites where Google is advertising 
compression socks, compression springs, air compressors, etc. I'm sure you've 
seen the problem.

>>Software is not subject to Moore's Law so its cost will eventually dominate.

>Here I could write a book and more. It could and should obey Moore's law, but
>history and common practice have gone in other directions.

Since you have experience writing sophisticated software on very limited 
hardware, perhaps you can enlighten us on how to exponentially reduce the cost 
of software instead of just talking about it. Maybe you can write AGI, or the 
next version of Windows, in one day. You might encounter a few obstacles, e.g.

1. Software testing is not computable (the halting problem reduces to it).

2. The cost of software is O(n log n). This is because you need O(log n) levels 
of abstraction to keep the interconnectivity of the software below the 
threshold of stability to chaos, above which it is not maintainable (where each 
software change introduces more bugs than it fixes). Abstraction levels are 
things like symbolic names, functions, classes, namespaces, libraries, and 
client-server protocols.

3. Increasing the computational power of a computer by n only increases its 
usefulness by log n. Useful algorithms tend to have a power law distribution 
over computational requirements.
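Obstacles 2 and 3 can be illustrated numerically (a sketch taking the claimed O(n log n) and log n forms at face value; the functions are illustrative, not derived):

```python
import math

def software_cost(n):
    # Claim 2: n units of code need O(log n) abstraction levels,
    # each spanning the code base, so total cost grows as n log n.
    return n * math.log2(n)

def usefulness(speedup):
    # Claim 3: an n-times-faster machine is only ~log n more useful.
    return math.log2(speedup)

for n in (10**6, 10**9, 10**12):
    print(f"n = {n:.0e}: cost ~ {software_cost(n):.1e}, "
          f"usefulness of n-fold speedup ~ {usefulness(n):.0f}")
```

The asymmetry is the point: doubling code size more than doubles software cost, while doubling hardware speed adds only a constant increment of usefulness.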

>>A human brain has about 10^9 bits of knowledge, of which probably 10^7 to
>>10^8 bits are unique to each individual. That makes 10^17 to 10^18 bits
>>that have to be extracted from human brains and communicated to the AGI.
>>This could be done in code or formal language, although most of it will
>>probably be done in natural language once this capability is developed.

>It would be MUCH easier and cheaper to just scan it out with something like a
>scanning UV fluorescent microscope.

No it would not. Assuming we had the technology to copy brains (which we don't 
and you don't), you would have created a machine with human motives. You would 
still have to pay it to work. Do you really think you understand the brain well 
enough to reprogram it to want to work?

>Further, I see the interest in AGIs on this forum as a sort of religious
>quest, that is absurd to even consider outside of Western religions

No, it is about the money. The AGIs that actually get built will be the ones 
that can make money for their owners. If an AGI can do anything that a human 
can do, then that would include work. Currently that's worth $66 trillion per 
year.

-- Matt Mahoney, [EMAIL PROTECTED]


Re: [agi] Artificial humor

2008-09-09 Thread Mike Tintner
Here you go - should be dead simple to analyse the formula - and produce a 
program :)


http://www.energyquest.ca.gov/games/jokes/light_bulb.html

How many software engineers does it take to change a light bulb? Two. One 
always leaves in the middle of the project. 







[agi] OpenCogPrime tutorial tomorrow night

2008-09-09 Thread Ben Goertzel
The introductory OpenCogPrime tutorial will be
tomorrow night...

http://opencog.org/wiki/OpenCogPrime:TutorialSessions

I'm flying home from California on the red-eye tonight so
don't expect me to be fully lucid, but hey ;-)

ben

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome." - Dr Samuel Johnson


