Any design must start with a set of requirements. Otherwise it is
directionless and doomed to fail.

Examples of requirements for AGI:
- Write descriptions of videos.
- Pass the SAT exams.
- Drive a car.
- Compose and play good music.
- Win at Jeopardy!

Pick something. Then describe how you will solve the problem. Then solve
it. Measure the results. Then write your paper.

It's hard. The obvious application of AGI is to automate $80 trillion per
year of human labor. We already know that the most successful approaches to
vision, language, and robotics are deep neural networks. A human-brain-sized
neural network needs 10 petaflops and one petabyte of memory. Half of
its knowledge is encoded in DNA, equivalent to 300 million lines of code.
The other half is trained on 20 years of video and other sensory input, one
exabyte. You need 7 billion of these, trained individually to allow for job
specialization. And you aren't building humans, but rather machines that
can communicate with humans, do what humans can do, and always serve us.
And you can't use transistors, because the waste heat from 7 billion 10
petaflop computers would make the planet uninhabitable.
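A rough sanity check of that waste-heat claim (my own arithmetic, not from the estimates above; the 1 picojoule per operation efficiency figure is a hypothetical assumption for transistor hardware):

```python
# Back-of-envelope power estimate for 7 billion brain-scale computers.
OPS_PER_BRAIN = 10e15   # 10 petaflops per machine, per the estimate above
JOULES_PER_OP = 1e-12   # assumed: ~1 pJ per operation (hypothetical)
POPULATION = 7e9        # one machine per human

watts_per_brain = OPS_PER_BRAIN * JOULES_PER_OP  # 10 kW per machine
total_watts = watts_per_brain * POPULATION       # aggregate draw

print(f"{watts_per_brain / 1e3:.0f} kW per machine")
print(f"{total_watts / 1e12:.0f} TW total")
```

At these assumptions the total comes to 70 TW, several times all of human civilization's current primary energy use, which is the point: transistor efficiency would have to improve by orders of magnitude first.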

Any little bit of progress in this direction is worthy of a paper. But
ideas are not progress. Results are.

On Tue, Apr 30, 2019, 3:42 PM Mike Archbold <jazzbo...@gmail.com> wrote:

> To be fair, I think YKY put perspiration into mathematical structure
> and it looks like a decent attempt at a fusion of logic and neural
> networks. But it seems like the next step could be an embryonic
> program. Personally I feel like I spent too long on my overall design,
> and some things become clear only through experimentation. AGI is a
> game of nuanced distinctions, as is reality.
>
> Mike Archbold
>
> On 4/30/19, Matt Mahoney <mattmahone...@gmail.com> wrote:
> > The revised paper is a bit better but really doesn't address my
> > main concerns. I mean the 1% inspiration is done (Edison) and just the
> > 99% perspiration is left to do. Yeah, actually doing experiments and
> > writing up the results is hard work, but that's how papers get
> > published. Nobody cares about untested ideas.
> >
> > Maybe write up a paper on past work like Genifer from 2010.
> >
> > http://strong-ai.info/blog/ai/2010/08/08/genifer-general-inference-engine
> > Why did it fail? What lessons were learned?
> >
> > On Tue, Apr 30, 2019, 5:36 AM John Rose <johnr...@polyplexic.com> wrote:
> >
> >> Matt > "The paper looks like a collection of random ideas with no
> >> coherent
> >> structure or goal...."
> >>
> >>
> >>
> >> Argh... I love this style of paper; whenever YKY publishes something,
> >> my eyes are on it. So few (if any) are written this way. It's a terse
> >> jazz fusion improv of mecho-logical-mathematical thought physics,
> >> needed to describe AGI concepts.
> >>
> >>
> >>
> >> Immediately, on the first version, when I saw the navigating of the
> >> labyrinth of "thinking," I thought of the quantum many-paths
> >> simultaneity in photosynthesis, and of YKY mentioning the discovery of
> >> a possible correlation of Schrödinger and RL... but that item was
> >> yanked in the second iteration. That's OK; sometimes, while on the
> >> vanguard of thought, viewers' eyes must be shielded from that which
> >> they explicitly fear the most... coincidentally, sometimes that which
> >> is totally obvious, thus suspending disbelief while maintaining a
> >> referential propriety and contemporary academic interestingness.
> >>
> >>
> >>
> >> Also yanked was the notion of the AGI requirement of approximating
> >> K-complexity, which I agree is where all the good stuff is… generally
> >> and/or specifically… IMO this is where the multi-agent consciousness
> >> mechanics come in, but I'll shield some eyes on that one :)
> >>
> >>
> >>
> >> John
> >>
> >>
> >>
> >> *From:* Stefan Reich via AGI <agi@agi.topicbox.com>
> >> *Sent:* Friday, April 19, 2019 4:21 PM
> >> *To:* AGI <agi@agi.topicbox.com>
> >> *Subject:* Re: [agi] My AGI 2019 paper draft
> >>
> >>
> >>
> >> Good review
> >>
> >>
> >>
> >> On Fri, Apr 19, 2019, 22:02 Matt Mahoney <mattmahone...@gmail.com>
> >> wrote:
> >>
> >> It would help to get your paper published if it had an experimental
> >> results section. How do you propose to test your system? How do you
> >> plan to compare the output with prior work on comparable systems? What
> >> will you measure? What benchmarks will you use (for example, image
> >> recognition, text prediction, robotic performance)?
> >>
> >>
> >>
> >> The paper looks like a collection of random ideas with no coherent
> >> structure or goal. The math seems to confuse or mislead rather than
> >> explain. For example, you show father(x,y) as a function in the real
> >> plane rather than a predicate over discrete variables. This is
> >> interesting for a moment, but doesn't go anywhere, so you move on to
> >> the next topic. The whole paper is like this, plugging variables from
> >> one field of study into equations from another and hoping something
> >> useful comes out.
> >>
> >>
> >>
> >> I know that you are just full of ideas. But actually writing some code
> >> that does something interesting might really help in sorting out the
> >> useful ideas from the ones that go nowhere, and advance the field of
> >> AGI.
> >>
> >>
> >>
> >> On Fri, Apr 19, 2019, 9:15 AM YKY (Yan King Yin, 甄景贤) <
> >> generic.intellige...@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >>
> >>
> >> This is my latest draft paper:
> >>
> >> https://drive.google.com/open?id=12v_gMtq4GzNtu1kUn9MundMc6OEhJdS8
> >>
> >>
> >>
> >> I submitted the same basic idea to AGI 2016, but it was rejected for
> >> some rather superficial reasons.  At that time, reinforcement learning
> >> for AI was not widely heard of, but since then it has become a
> >> ubiquitous hot topic.  I hope this time I can get published, as it
> >> would allow me to share my ideas more easily with other researchers
> >> and mathematicians so that I could solicit their help and improve my
> >> theory, possibly starting the coding project as well.
> >>
> >>
> >>
> >> Comments and suggestions are welcome 😊
> >>
> >> --
> >>
> >> *YKY*
> >>
> >> *"The ultimate goal of mathematics is to eliminate any need for
> >> intelligent thought"* -- Alfred North Whitehead
> >>
> >> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> >> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> >> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> >> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
> >> <https://agi.topicbox.com/groups/agi/T3cad55ae5144b323-M5270f3477e3d62edc3b33160>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3cad55ae5144b323-M647dc62e9003e3b9a0f3500d
Delivery options: https://agi.topicbox.com/groups/agi/subscription