On Saturday, August 12, 2023, at 6:07 PM, Matt Mahoney wrote:
> On Sat, Aug 12, 2023, 3:47 PM  <immortal.discover...@gmail.com> wrote:
>> 
>> But AGIs can easily avoid death and live 10000000x longer than humans AFAIK, 
>> simply by repairing their old parts and by using redundant parts for greater 
>> reliability.
> 
> Uploading and immortality are not that hard. Once your mind is in software, 
> you can make backup copies and program replacement robots as better versions 
> become available. You don't even need cryonics. There is enough personal 
> information about you on your phone and in your email, messages, and social 
> media accounts to program an LLM to convince everyone that it's you. It 
> doesn't matter if some memories are missing or made up, because you won't 
> notice. By next year you will forget 90% of what happened yesterday, yet this 
> doesn't bother you. It will be even easier in 100 years, when everyone who 
> knows you today is dead.
> 
> The big obstacle to uploading is convincing people that their consciousness 
> will be transferred to the silicon version of you when the carbon version is 
> destroyed. But that objection will go away once you see others wake up in 
> their new bodies, younger, stronger, and smarter, with superpowers like 
> infrared vision and built-in GPS and WiFi. Since consciousness is an illusion 
> not taken seriously by AI developers, it's just a matter of programming your 
> copy to claim to have feelings. That just means having the LLM predict your 
> actions and carry them out in real time.
> 
> Of course, once we start doing this, carbon-based humans will quickly become 
> extinct and nobody will care.
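
(As a side note on Matt's point about having an LLM predict your actions and 
carry them out in real time: a minimal sketch of that loop, in Python, might 
look like the code below. This is purely illustrative; the PersonalCorpus, 
predict_action, and emulation_loop names are made up here, and predict_action 
is just a placeholder for whatever model would actually be conditioned on your 
personal data.)

    # Hypothetical sketch only: condition a model on a person's digital
    # footprint, then predict and "carry out" their actions in real time.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PersonalCorpus:
        # email, chat logs, social media posts, etc.
        documents: List[str] = field(default_factory=list)

    def predict_action(corpus: PersonalCorpus, situation: str) -> str:
        # Placeholder for an LLM prompted or fine-tuned on the corpus.
        return f"act as the person would, given: {situation!r}"

    def emulation_loop(corpus: PersonalCorpus, situations: List[str]) -> List[str]:
        # Predict the person's action for each event; in the upload scenario,
        # a robot body would then execute each action in real time.
        return [predict_action(corpus, s) for s in situations]

    me = PersonalCorpus(documents=["old emails ...", "chat logs ..."])
    print(emulation_loop(me, ["a friend asks how you are"]))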

Yes, it's true that an exact clone of me would be exactly me. Yet, due to my 
silly human nature, which I truly do "believe" and "abide by" (yes, I do), I 
believe I am a viewer of the senses that pass through my eyes and brain, that 
this viewer wants to stay alive, and that a clone of me fails to be me or to 
transfer me.

Even if I were in a simulation, lying on a bed enjoying a new video game, with 
an exact clone of me on the same bed doing the same thing at the SAME time, 
running in parallel, I'd still cry out: don't shut my sim life off, because 
then I won't work anymore!

So I can't prove that me2 in the sim is not exactly me. It is. And yet there I 
am, still saying don't kill me, see? So it can't be because of anything actual 
other than a false belief. There is no identity that me2 has that me1 doesn't 
have; it's the same machine. So why, in the simulation example, do I say don't 
kill me, assuming me1 at least stays alive?

However, we already know machines are machines, no matter which machine. There 
is nothing to even chat about.

Humans, and even AIs, still need that prediction to be stirred so that it says 
"don't kill me". This need to avoid death has to do with memories (not the 
"soul") being lost. And also, if the machine dies, it can't work anymore; a 
lost machine is lost resources. To me, memories don't matter as much as keeping 
myself alive - my body, the machine - because I can always relearn and remake 
them. But AIs might say it costs them less to just build a new, better machine, 
and that they *won't* lose anything by killing a young 10-year-old human, 
either in memories or in body, given its low ability to do intelligent 
reasoning compared to ASI-level technologies.

Humans don't often make humans better than themselves. Nor can we look into 
someone's brain and judge whether their memories make them worth killing, or 
whether they are a useful citizen helping the homeworld survive longer. No - 
the government, the people, are only as good as you are; no one can prove you 
are useless, not even a hobo on the road. OK, some do get killed or jailed, but 
otherwise, no. And it is not that easy to kill others that are the very same 
type as yourself. What about ants or dogs, though? We keep dogs in our homes 
even though we know they won't reach the singularity, but often they are there 
only because we love them. If we could use their resources to make better 
machines, we might kill them in this thought experiment.

It's hard to say. It might be a human-only case, not found in the AI's new 
homeworld that'll be built. Maybe I can think about it more later.

I was thinking: hmm, could the AI homeworld shed its WHOLE self and make a 
better self? It is made of smaller parts that die, but the whole itself CAN'T - 
well, maybe it can too, as long as it knows it is going to get replaced?
(I had written this, but now I realize I don't think it is so:)
*But I know one thing at least: the larger system(s) of the homeworld want to 
survive; they want to keep their memories and machinery intact. This thing is 
not able to say OK, let me die, another will have the same stuff I've got. It 
has a lot of redundancy exactly so it doesn't have to. But it might see that 
some of the parts that make it up also want the same thing. Obviously it 
becomes harder, or impossible, to save and repair things as small as atoms.*
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M37330d2fa0df6db5e28906b2
Delivery options: https://agi.topicbox.com/groups/agi/subscription
