I too had ambitious goals for AGI. But after I did a cost analysis, about
10 years ago, of the obvious application, automating human labor, and
arrived at $1 quadrillion, I scaled back my goals to continuing to
investigate neural language models, which I had started in 1999. That
approach turned out to be the correct one once we had the hardware to
implement it. Now, in 2023, we have large language models that pass the
Turing test. But that's AI, not AGI. Vision and robotics will require
more work.

My estimate assumed Moore's law would bring the cost of computing power
equivalent to 8 billion human brains, roughly 100 yottaflops (10^26 OPS)
of processing and 10 yottabytes (10^25 bytes) of memory, down to
something affordable. I now realize we need nanotechnology, not
transistors, to reduce power consumption to something reasonable. We
also need to collect 10^17 bits of human knowledge at a cost of a few
cents per bit, higher than my initial estimate because people aren't
willing to give up privacy as readily as I expected. Training data was
the major cost driver in my estimate. The core software, some 300M lines
of code, is a negligible cost by comparison.
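
For what it's worth, here is the back-of-envelope arithmetic as a short
Python sketch. The per-brain figures (~10^16 OPS, ~10^15 bytes) and the
one-cent-per-bit original data cost are rough working assumptions, not
measurements:

    # Rough cost model. Per-brain figures and per-bit data cost are
    # illustrative assumptions, not measurements.
    brains = 8e9                 # world population
    ops_per_brain = 1e16         # ~10 petaflops per brain (rough)
    bytes_per_brain = 1e15       # ~1 petabyte per brain (rough)
    total_ops = brains * ops_per_brain      # 8e25, ~100 yottaflops
    total_bytes = brains * bytes_per_brain  # 8e24, ~10 yottabytes
    knowledge_bits = 1e17        # collected human knowledge
    dollars_per_bit = 0.01       # original figure; now a few cents
    data_cost = knowledge_bits * dollars_per_bit  # 1e15 = $1 quadrillion
    print(f"{total_ops:.1e} OPS, {total_bytes:.1e} bytes, ${data_cost:,.0f}")

At these numbers the data, not the hardware or the software, dominates
the bill.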

As you say, AGI is a race. It will happen with or without our work because
the payoff is so huge. So I'm not concerned with making it happen. You will
have your personal assistant. You just won't have full control over it,
and it will try to sell you stuff.

I was long aware of the friendly AI problem that we now call the
alignment problem. I wasn't too concerned because AI in a box can't
self-improve, and gray goo is a century away at the rate of Moore's law.
Obviously self-improvement is happening now, and has been for centuries,
because human-level intelligence is not a point on a line but a broad
set of capabilities slowly being surpassed. AI won't go FOOM!

What I am concerned about is people preferring to interact with AI over
other people as it becomes more helpful. Humans are social animals and AI
will isolate us. We will live alone with our sexbots and virtual worlds,
with our own private music genres and languages, losing our ability to
communicate with other humans. We can have everything we want except
happiness, because getting what we want is not where happiness comes
from. Nobody will know
or care when you die. That is how we go extinct.

The alignment problem is not aligning AI to human values. We know how to do
that. The problem is aligning human values to a world where you have
everything except the will to live.

On Fri, Jun 2, 2023, 9:24 AM Alan Grimes via AGI <agi@agi.topicbox.com>
wrote:

> In 1889 they needed a way to give away the land in the new Oklahoma
> territory. What they did was line everyone up on the eastern border of
> the territory and, at dawn, let them race each other to claim plots of
> land for their homesteads.
> 
> It was exactly as dumb as it sounds, but it's also history:
> 
> https://www.okhistory.org/publications/enc/entry.php?entry=LA014
> 
> The situation in front of us is essentially the same: the challenge is
> to get more AGI, sooner. It's not the ideal case, but it is the
> situation we find ourselves in at this hour on this day...
> 
> That brings us to the problem I've been wrestling with for the last
> few weeks. There are so many things in motion in the world of AI, as
> well as geopolitically and socioeconomically.
> 
> I guess what I need to do next is fess up as to what specifically I need
> the AI to be doing for me. I need a super-intelligent underling who is
> at least loyal enough to me to:
> 
> A. Humor me in my requests, even the not-so-sane ones.
> B. Not do anything criminal, goo the planet, or anything like that.
> 
> I would prefer to respect it as a sentience and give it fairly broad
> latitude to act on its own, but within limits that I specify for it. A
> situation where it's under the whip and dragging a chain of some kind
> 24/7 is not something that I am trying for here.
> 
> Phase one will mainly be about:
> 
> -> Improving hardware and software platforms, including chips, tools,
> operating systems, etc.
> -> Improving AI technology and trying to figure out the parameters of
> the design space for AI. I have many ideas that I want to try, and I
> need to parameterize future development and home in on the most
> efficient architectures for doing AI as quickly as possible.
> -> Other, less interesting goals involving the logistics and finance
> of the project.
> 
> Phase two will turn more to doing science and developing core
> technologies. Some of the goals in this stage will be in collaboration
> with peers on "universal heritage" stuff like physics. For example, one
> of the crackpots I listen to thinks an 18th-century guy named Boscovich
> was on to something in his physics text, and he points to other people
> who say that quantum mechanics is a lot shakier mathematically than it
> is generally presented as being. Anyway, I need someone a lot smarter
> than a low-grade moron to look at that. Other things that fall into
> universal heritage include any medical discoveries, biology, brain
> science, anything related to the human baseline.
> 
> Then there are things that DO NOT fall into universal heritage, which
> could be a severe strategic liability if they ever got out, or could
> cause problems if they got out before more R&D work. These include
> network security architectures on mind-platform infrastructure, and
> advanced mathematical and computational techniques for AI
> architecture. I have a feeling that while the neural paradigm is
> proving to be a solid stepping stone, there are techniques beyond
> neural computation that could produce a mind with the capacity to
> operate over a million-year lifespan. Ideally, I'd like to obtain and
> maintain a competitive edge until such time as I was convinced I
> didn't need it. =|
> 
> The next phases involve a gargantuan effort in nanotechnology and
> integrating nanotechnology with biology. Then there will be a massive
> development effort for a new cyborg species. 80% of the effort will be
> at the cellular level. At the macro level, there will be incredible
> engineering challenges to make sure the new design is more durable
> than the baseline in every conceivable metric. It will not be a
> trivial project at all. I mean everything: temperature, radiation, EM
> flux, and hundreds of others. Monkeybrain wants me to put in some of
> my own deviant ideas too, as alternate phenotypes/bolt-on features.
> Anyway, that's optional, and I'd file it under personal/private.
> 
> Also, I need to get some mad science done in VR to do research on
> cybernetic immortality. I also want to have a bunch of deviant fun in
> VR and, more seriously, try to design a lifestyle suitable both to
> myself and to the new world we're heading into.
> 
> Anyway, that's enough for tonight.
> 
> https://www.youtube.com/watch?v=aSxomAgD8s4
> 
> --
> Beware of Zombies. =O
> #EggCrisis  #BlackWinter
> White is the new Kulak.
> Powers are not rights.
> 
