Ben said: "First, the threshold for recursive self improvement is not human 
level intelligence, but human civilization level intelligence. That's higher by 
a factor of 7 billion."

My research puts this value at N, not X. The primary reason is that 
knowledge behaves differently across different life stages. In a specific 
state, it seemingly assumes quantum properties. The radical thought is: given 
critical mass X, could knowledge become alive? Interacting with such an 
abstraction may be context independent, and probable with full integrity. It 
depends on the quality of the underlying methodology being employed.

________________________________
From: Ben Goertzel <b...@goertzel.org>
Sent: Saturday, 09 February 2019 12:26 PM
To: AGI
Subject: Re: [agi] The future of AGI

Hi Matt,

Regarding your comments on OpenCog, it is true that we were not able
to attract the funding I'd hoped, and we also did not progress at the
speed I had hoped.

Nevertheless, I believe the basic architecture and underlying ideas
remain sound....   A fair bit of OpenCog dev work is now occurring
within the SingularityNET AI team, including work toward scaling up
the infrastructure....

Note that deep neural nets also did not attract the funding their
inventors hoped for in the 1960s and early 1970s, and did not progress
at the speed these inventors hoped.  But eventually the funding came,
appropriate hardware came, and success came... which in hindsight
looks inevitable, but when I was teaching NNs in university in the
1990s it seemed quite far from it...

Regarding some of your specific comments,

***
First, the threshold for recursive self improvement is not human level
intelligence, but human civilization level intelligence. That's higher
by a factor of 7 billion.
***

Obviously this is an upper bound... an AGI engineered for recursive
self-improvement could potentially do it with far fewer resources than this...

***
Second is Eroom's Law. The price of new drugs doubles every 9 years.
Global life expectancy has been increasing 0.2 years per year since
the early 1900s, but that rate has slowed a bit since 1990. Testing
new medical treatments is expensive because testing requires human
subjects and the value of human life is increasing as the economy
grows.
***
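
To put rough numbers on that doubling rate, here is a quick back-of-envelope
sketch (it just assumes the cost keeps compounding against an arbitrary 1x
baseline year; nothing below is measured data):

    # Eroom's Law taken at face value: drug development cost relative to
    # an arbitrary baseline, doubling every 9 years
    def relative_drug_cost(years_elapsed):
        return 2 ** (years_elapsed / 9.0)

    for t in (9, 18, 27, 45):
        print(f"after {t} years: ~{relative_drug_cost(t):.0f}x the baseline cost")
    # after 45 years: ~32x the baseline cost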

This will be busted when we get sufficiently accurate systems biology
simulation models.  But in any case it's an obstacle to bio research not
to AGI...

***
Third, Moore's Law doesn't cover software or knowledge collection, two
of the three components of AGI (the other being hardware). Human
knowledge collection is limited to how fast you can communicate, about
150 words per minute per person.
***

This obviously makes no sense.  E.g. modern face recognition AI gained knowledge
much faster than this, by sucking up a lot of photos all at once.   Once NLP is
sufficiently solved, AI will be able to suck up a lot of knowledge by
reading the Web.
It won't need knowledge to be explicitly typed in for it.
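
To get a rough sense of the gap, a back-of-envelope comparison (the
bits-per-word, corpus size, and read rate below are illustrative assumptions,
not measurements; only the 150 wpm figure comes from the point above):

    # Human knowledge transfer at 150 words/minute vs. machine ingestion
    WORDS_PER_MIN = 150            # the 150 wpm figure quoted above
    BITS_PER_WORD = 10             # assumed information content per word
    human_bps = WORDS_PER_MIN * BITS_PER_WORD / 60.0    # ~25 bits/sec

    CORPUS_BITS = 8 * 10**12       # assume a ~1 TB text crawl
    MACHINE_BPS = 8 * 10**8        # assume a modest 100 MB/s read rate

    print(MACHINE_BPS / human_bps)              # ~3 x 10^7 times faster
    print(CORPUS_BITS / human_bps / 3.15e7)     # ~10,000 person-years to
                                                # dictate the same corpus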

...

Overall you make some OK high level points, like the value of huge
amounts of hardware
and shitloads of sensors etc. for AGI.   However I think you
exaggerate these points and
underplay the acceleration that could be obtained via fundamental
algorithmic improvements...

-- Ben G

On Fri, Feb 1, 2019 at 5:17 AM Matt Mahoney <mattmahone...@gmail.com> wrote:
>
> When I asked Linas Vepstas, one of the original developers of OpenCog
> led by Ben Goertzel, about its future, he responded with a blog post.
> He compared research in AGI to astronomy. Anyone can do amateur
> astronomy with a pair of binoculars. But to make important
> discoveries, you need expensive equipment like the Hubble telescope.
> https://blog.opencog.org/2019/01/27/the-status-of-agi-and-opencog/
>
> OpenCog began 10 years ago in 2009 with high hopes of solving AGI,
> building on the lessons learned from the prior 12 years of experience
> with WebMind and Novamente. At the time, its major components were
> DeStin, a neural vision system that could recognize handwritten
> digits, MOSES, an evolutionary learner that output simple programs to
> fit its training data, RelEx, a rule based language model, and
> AtomSpace, a hypergraph based knowledge representation for both
> structured knowledge and neural networks, intended to tie together the
> other components. Initial progress was rapid. There were chatbots,
> virtual environments for training AI agents, and dabbling in robotics.
> The timeline in 2011 had OpenCog progressing through a series of
> developmental stages leading up to "full-on human level AGI" in
> 2019-2021, and consulting with the Singularity Institute for AI (now
> MIRI) on the safety and ethics of recursive self improvement.
>
> Of course this did not happen. DeStin and MOSES never ran on hardware
> powerful enough to solve anything beyond toy problems. RelEx had all
> the usual problems of rule based systems like brittleness, parse
> ambiguity, and the lack of an effective learning mechanism from
> unstructured text. AtomSpace scaled poorly across distributed systems
> and was never integrated. There is no knowledge base. Investors and
> developers lost interest.
>
> Meanwhile the last decade transformed our lives with smart phones,
> social networks, and online maps. Big companies like Apple, Google,
> Facebook, and Amazon powered it with AI: voice recognition, face
> recognition, natural language understanding, and language translation
> that actually works. It is easy to forget that none of this existed 10
> years ago. Just those four companies now have a combined market cap of
> $3 trillion, enough to launch hundreds of Hubble telescopes if
> they wanted to.
>
> Of course we have not yet solved AGI. We still do not have vision
> systems as good as the human eye and brain. We do not have systems
> that can tell when a song sounds good or what makes a video funny. We
> still pay people $87 trillion per year worldwide to do work that
> machines are not smart enough to do. And in spite of dire predictions
> that AGI will take our jobs, that figure is increasing at 3-4% per
> year, continuing a trend that has lasted centuries.
>
> Over a lifetime your brain processes 10^19 bits of input, performing
> 10^25 operations on 10^14 synapses at a cost of 10^-15 joule per
> operation. This level of efficiency is a million times better than we
> can do with transistors, and Moore's Law is not going to help. Clock
> speeds stalled at 2-3 GHz a decade ago. We can't make transistors
> smaller than about 10 nm, the spacing between P or N dopant atoms, and
> we are almost there now. If you want to solve AGI, then figure out how
> to compute by moving atoms instead of electrons. Otherwise Moore's Law
> is dead.
>
> Even if we can extend Moore's Law using nanotechnology and biological
> computing (and I believe we will), there are other obstacles to the
> coming Singularity.
>
> First, the threshold for recursive self improvement is not human level
> intelligence, but human civilization level intelligence. That's higher
> by a factor of 7 billion. But that's already happening. It's the
> reason our economy and population are both growing at a
> faster-than-exponential rate.
>
> Second is Eroom's Law. The price of new drugs doubles every 9 years.
> Global life expectancy has been increasing 0.2 years per year since
> the early 1900s, but that rate has slowed a bit since 1990. Testing
> new medical treatments is expensive because testing requires human
> subjects and the value of human life is increasing as the economy
> grows.
>
> Third, Moore's Law doesn't cover software or knowledge collection, two
> of the three components of AGI (the other being hardware). Human
> knowledge collection is limited to how fast you can communicate, about
> 150 words per minute per person. Software productivity has remained
> constant at 10 lines per day since 1950. If you were hoping for an
> automated method to develop software, keep in mind that the 6 x 10^9
> bits of DNA that is you (equivalent to 300 million lines of code)
> required 10^50 copy and transcription operations on 10^37 bits of DNA
> to write over the last 3.5 billion years.
>
> Comments?
>
> --
> -- Matt Mahoney, mattmahone...@gmail.com



--
Ben Goertzel, PhD
http://goertzel.org

"The dewdrop world / Is the dewdrop world / And yet, and yet …" --
Kobayashi Issa

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta6fce6a7b640886a-Mc102bef94336ffc69d1e08cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription
