y'know, I just realized I had the fragments of a fairly important point scattered across my last few posts. The problem is that training is hard but you don't learn anything if you are only able to run inference.

Again, within this paradigm, the AGI attempts to predict its inputs and then extracts an "error" signal (don't be misled by other semantic meanings of the word "error"). The key is that the prediction runs in inference mode; only the error signal requires training. You can then dial the amount of compute spent on training up and down by setting a threshold/squelch value on the error signal and only running your Big Compute when the error signal exceeds the threshold.
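A minimal sketch of what that gating might look like, assuming a PyTorch-style setup (the model, loss function, and squelch value below are stand-ins for illustration, not any particular implementation):

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 8)                  # stand-in predictor
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    SQUELCH = 0.5                            # assumed error threshold

    def observe(prev_input, next_input):
        # Cheap path: the prediction runs in inference mode only.
        with torch.no_grad():
            prediction = model(prev_input)
            error = loss_fn(prediction, next_input).item()

        # Big Compute path: only train when the error signal exceeds threshold.
        if error > SQUELCH:
            optimizer.zero_grad()
            loss = loss_fn(model(prev_input), next_input)  # recompute with grads
            loss.backward()
            optimizer.step()
        return error

Raising SQUELCH spends less compute on training; lowering it approaches training on every input.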


Anyway, now that our airplane is finally getting above the cloud layer, we can start taking a look at what's happening. I mean, some of us had been setting up for a 2030 singularity, but it turns out that the singularity is already running late. Observe how much work the latest GPT family of proto-AGIs can actually accomplish in a given amount of time. How much faster than the human baseline are we already? Let's say the human is still applying 10x as much smarts/wisdom etc. as GPT; the speedup is still impressive. Consider that I spend between an hour and an entire week writing the long-form posts I make to this list. Adjust for the fact that GPT is currently running stream-of-consciousness only, and compute the speedup factor...
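For the curious, a rough back-of-envelope version of that calculation (the one-minute GPT drafting time below is an assumption for illustration, not a measurement; the hour-to-week range and the 10x handicap come from the paragraph above):

    human_minutes = [60, 7 * 24 * 60]   # one hour .. one week per long-form post
    gpt_minutes = 1.0                   # assumed GPT drafting time, stream of consciousness
    quality_handicap = 10               # human still applying ~10x the smarts/wisdom

    for m in human_minutes:
        raw = m / gpt_minutes
        print(f"raw speedup ~{raw:.0f}x, quality-adjusted ~{raw / quality_handicap:.0f}x")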

Yeah, the singularity is actually running behind schedule and some of the safety concerns are (or very soon will be) in play. =\


Anyway, let's apply the anthropic principle and assume that we aren't immediately heading to Yudkowskyland and have some say in the matter. Problem 1-A is that we are in the middle of an "Omnicrisis" that will likely see the collapse of fiat currency globally, the kind of event that reset civilization across the world 3,140 years ago. On top of that, the criminal ruling class is trying to foment every kind of conflict, ranging from global thermonuclear war down to street riots, to cover their crimes. Everywhere you look there is crisis. There is the culture crisis, the education crisis, the famine crisis, the race-relations crisis, the immigration crisis. And now, goddamnit, we have a robot apocalypse to worry about. =O

Well, in terms of worry, I propose the following list.

1. The robot apocalypse.
2. The famine crisis.
3. The global financial crisis.
4. The zombie apocalypse. (the vaccine causes a degenerative brain condition, and the idiots who got the vaccine will behave increasingly zombie-like as time progresses, as they die off...)
5. [everything else as time and opportunity permits.]

I am aware of which list I'm posting on, so I'll stick to #1; I just felt obliged to raise the others out of basic decency. Anyway, the methodology I propose is to specify what good looks like, then nail down what needs to happen to achieve outcome = good, and finally get people mobilized towards achieving that.

Good is a tricky thing. You can go out and do good things. Really! You can! =P And you can get things which are good, or at least goods. But when you start going out and doing things to people for their own good, things get really bad really quick. (nightmare = ASI in the hands of an SJW or some creature of that order...) I only need ASI for various R&D projects and various ... personal entertainment activities (mostly in VR)...

The problem is that we are probably going to need a way to get militant when anyone decides to get ASI just to go ideological on other people. =\

That's the real problem, isn't it?

A. We want everyone to have access to AGI.

B. The world is filled to the brim with stupid dumbfucks.


I'd love to say "Imagine a world where everyone was intelligent and emotionally stable..." just about now. =( (It's not even worth entertaining that line of thought...)


What I have wanted to stop for a LONG time is anyone from dictating how anyone else is going to evolve. Personal evolution should be a private discussion between each individual and his own AGI. =\

Anyway, this situation is bonkers nuts crazy. My main focus is doing the R&D I need to do, though it is very unclear how much hardware I will need.

Anyway, it's only going to get faster and crazier from here.... I'll probably think of more things to say after I press send... =P

--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.

