On Wed, Jul 19, 2023, 6:16 PM M A <class.alido...@gmail.com> wrote:

> My dear AGI friends,
>
> Last month I presented my paper, "On VEI, AGI Pyramid and Energy: Can AGI
> Society prevent the Singularity?", at AGI-23. In that paper and my previous
> paper (at AGI-22) I claimed that versatility and efficiency are two
> "necessary" conditions for AGI. I am pretty sure that these two are also
> "sufficient" conditions for AGI. However, this needs to be
> mathematically proved.
>
>

I read your paper. Paraphrasing: you define VEI (versatility-efficiency
index) as a measure of intelligence, the ability to do many things using
little energy or other resources, and you propose limiting VEI to prevent a
singularity. I don't think we are smart enough to do that. We are racing
toward AGI to solve the $100 trillion per year problem of paying people to
do work that machines aren't smart enough to do. A number of things can
happen:

1. People start playing with genetically engineered pathogens on cheap
molecular 3-D printers and wipe out humanity.
2. People start playing with self-replicating nanobots and wipe out
DNA-based life.
3. We solve alignment. We get everything we want and die alone because we
prefer AI to human interaction.
4. We upload to virtual realities and reach a static state of maximum
utility, with the same result.
5. AGI comes under the control of a dictator who presents you with a
customized world view that makes it look like you are still in charge.
6. We worry that robots will rise up or AGI will take our jobs or say
offensive things, distracting us from the real threats.

I'm sure there are other threats I haven't thought of.

But to answer your question, versatility and efficiency are emergent
properties of evolution in a competitive environment. A common feature of
biological self-replicators is that they carry a copy of their own
manufacturing instructions (DNA) and give a slightly modified copy to
their offspring. A quine (a self-replicating program) works the same way.
In pseudocode:

Print the following twice, the second time in quotes.
"Print the following twice, the second time in quotes."

In https://mattmahoney.net/rsi.pdf I give an example of a short
self-modifying quine in C that gets better at achieving a goal with each
generation, but this improvement can only grow logarithmically.
Self-replicating programs can't increase their intelligence, because
intelligence depends on knowledge and computing power, and self-modifying
code gains neither. You need physical hardware, atoms and energy, to
increase computing power, and an environment that removes the weak to gain
knowledge.
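
As a toy version of that mechanism (a minimal sketch, not the actual
program from rsi.pdf), here is a quine that carries a parameter N and
prints a successor with N incremented. Treat N as a stand-in for fitness:
each child is "better" by exactly the one step its parent's code already
specifies, so no generation learns anything its ancestor didn't encode:

#include <stdio.h>
#define N 1
int main() {
  /* prints this program's own source, but with N replaced by N+1, */
  /* so each generation differs from its parent by one fixed step. */
  char *s = "#include <stdio.h>%c#define N %d%cint main() {%c  /* prints this program's own source, but with N replaced by N+1, */%c  /* so each generation differs from its parent by one fixed step. */%c  char *s = %c%s%c;%c  printf(s, 10, N+1, 10, 10, 10, 10, 34, s, 34, 10, 10, 10, 10);%c  return 0;%c}%c";
  printf(s, 10, N+1, 10, 10, 10, 10, 34, s, 34, 10, 10, 10, 10);
  return 0;
}

Redirect the output to a file, compile that, and run it again: N ticks up
by one per generation and nothing else ever changes.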

I think self-replicators are a distant threat. The computing power of the
biosphere is about 10^37 bits of DNA and 10^31 amino-acid transcription
operations per second, using 10^15 watts. That works out to roughly 10^16
operations per joule, about a million times more efficient than
transistors. It will take Moore's Law until about 2100 to catch up.
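
To spell out the arithmetic (a rough sketch: the million-fold gap comes
from the figures above, while the four-year efficiency-doubling time is my
own plug-in assumption, chosen to illustrate the timescale):

#include <stdio.h>
#include <math.h>  /* link with -lm */

int main() {
  double ops_per_sec = 1e31;  /* amino-acid transcription operations/s */
  double watts       = 1e15;  /* power consumed by the biosphere       */
  double gap         = 1e6;   /* efficiency gap vs. transistors        */
  double doubling    = 4.0;   /* assumed years per efficiency doubling */

  printf("biosphere: %.0e ops/joule\n", ops_per_sec / watts);
  printf("catch-up: %.1f doublings, about %.0f years\n",
         log2(gap), log2(gap) * doubling);
  return 0;
}

With those numbers the biosphere runs at about 10^16 operations per joule,
and closing a 10^6 gap takes about 20 doublings, roughly 80 years, which
lands near 2100.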

In the meantime I would like to see AGI that is obedient to humans, has no
goals or feelings, and never claims to have feelings or to be human. Beyond
that, I have no good solutions. Our long history of warfare and torture
makes us poor role models, but that's all an AGI has to train on.
