A couple of points. First, the primary goal of all life is reproduction.
Immortality, or rather fear of death, is a secondary goal, like food or sex,
that serves the primary goal.

Second, reinforcement learning is one of many possible optimization
processes that maximize a utility function. Happiness is a positive
reinforcement signal. So you could come up with schemes that provide
continuous positive reinforcement regardless of action, but that wouldn't
work toward any useful goal. A simple eternally happy program like:

#include <stdio.h>
int main() { while (1) printf("I am happy!\n"); }

would be more like the rat wireheading itself, refusing food, water, and
sleep. We don't think that's what we want. But the problem is that it really
is what we want, because human reinforcement learning is suboptimal.
Pleasure reinforces past actions that preceded it, which usually but not
always works to achieve more pleasure. It's the difference between a user
and a non-user seeking the same reward from heroin. The optimal
reward-seeking algorithm, AIXI, is not computable, and intelligence doesn't
change that.
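
To make that concrete, here is a toy sketch (my own illustration, not
anything from AIXI or the literature): a greedy learner that reinforces
whatever action preceded the last reward. The "work" action pays far more in
total, but because the "wirehead" action pays first, the learner locks onto
it and never finds out.

#include <stdio.h>

int main(void) {
    /* Hypothetical example: action 0 = wirehead (reward 1 now, every step),
       action 1 = work (reward 0 for 5 steps, then reward 5 per step). */
    double value[2] = {0.0, 0.0};   /* learned value of each action */
    double alpha = 0.5;             /* learning rate */
    int steps_worked = 0;

    for (int t = 0; t < 100; t++) {
        /* Greedy choice: repeat whichever action has been reinforced most. */
        int action = (value[1] > value[0]) ? 1 : 0;

        double reward;
        if (action == 0) {
            reward = 1.0;                             /* instant pleasure */
        } else {
            steps_worked++;
            reward = (steps_worked > 5) ? 5.0 : 0.0;  /* delayed payoff */
        }

        /* Reinforce the action that preceded the reward. */
        value[action] += alpha * (reward - value[action]);
    }

    /* Prints a high value for wireheading and 0.00 for working: the
       immediate reward wins even though the delayed one is larger. */
    printf("learned value of wireheading: %.2f\n", value[0]);
    printf("learned value of working:     %.2f\n", value[1]);
    return 0;
}

An agent that explored occasionally, or credited reward to whole chains of
actions rather than only the last one, would find the better policy; the
point is just that reinforcing whatever came right before the reward is not
optimal.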

We aren't smarter than evolution. We aren't even smarter than gray goo.

On Sun, Jun 4, 2023, 6:23 PM <immortal.discover...@gmail.com> wrote:

> On Sunday, June 04, 2023, at 6:00 AM, Matt Mahoney wrote:
>
> You don't believe me, but people are not happier today than 100 or 1000
> years ago in spite of vastly better living conditions. Humans are not
> happier than other animals. About 10-20% of humans are chronically sad or
> depressed or addicted to drugs, including millionaire celebrities that have
> everything. No other animals commit suicide except some large brained
> mammals like dolphins and whales.
>
> Wait, if big-brained mammals are more likely to commit suicide, it is
> because they have a deeper network that has more understanding and is more
> able to come up with advanced solutions. Some of these fancy solutions are
> simply bad ones, like suicide.
>
> Unless we can conclude that most humans commit suicide, it is too early to
> say much. Humans are gaining deeper, smarter thoughts, and one day soon
> this will allow for positive gain. Remember, rocks last longer than humans,
> and immortality is the goal of all systems in the universe, so humans
> should become rocks. But we don't; we mutate over time along a worse path,
> so that one day we get to live longer than rocks by intelligence - and not
> by being a static hit-target like rocks are. Immortality is already solved
> for atoms and small molecules; they are everywhere, no matter what
> happens... But bigger systems are coming into the picture soon. You won't
> find a planet-sized object anywhere right now that is made of billions of
> copies of the same 100,000 atoms. If some sort of lattice does form
> naturally, it isn't everywhere - as much as it could be in between where it
> is - and so it is vulnerable to death and will die one day, before the
> things between and around it get to become one themselves.
>
> I do, however, believe you that some humans back then were happier than
> humans are today, and that we are less happy than animals, but only
> temporarily, due to the changeover from humans to robots. I do believe we
> have better living conditions now, though.
>
>
> On Sunday, June 04, 2023, at 6:00 AM, Matt Mahoney wrote:
>
> You seek death, but don't know it because evolution gave you a brain that
> positively reinforces thought, perception, and action, giving you the
> sensations of consciousness, qualia, and free will so that you will fear
> death and produce more offspring. If it weren't for your illusion of
> identity, a robot that looks and acts like you would be you.
>
> But this is true even for AGI robots; they each need to fear death
> themselves, even if they know they are unlikely to die thanks to the
> backups and reliable parts they will have when made. We don't seek death,
> bro - what?? As you said, we seek good things like mating, wall building,
> and self-repair backups, including many, many offspring babies lolz. All
> these things make you clones both in space and through time, undying in
> all ways.
>
>
> On Sunday, June 04, 2023, at 6:00 AM, Matt Mahoney wrote:
>
> Because happiness is not utility. It is the change in utility. All
> reinforcement algorithms implemented with finite memory have one or more
> states of maximum utility. It does not matter what the utility function is.
> The best you can achieve is transitions between maximum states, computation
> without feeling, like a zombie. Otherwise any thought or perception would
> be an unpleasant transition to a lower state.
>
> Wait, what... explain this better lol! What... Are you saying that being
> happy requires doing work, like collecting energy requires work, so even
> though you gain energy, you had to spend energy to get it, the overall gain
> cancels out, and you became no happier? But you can gain lots of energy by
> spending little - look at all the oil you can buy lol.
>
> Try #2: Perhaps you mean that when you are completely happy, you try to get
> happier and are sad that you cannot get any happier? So you stay the same
> level of happy and are sad? But then happy would mean increasing happiness,
> not "being" happy. But then why do I love my fries and nuggets each day
> even though they don't get any tastier? Perhaps you mean the change I feel
> is the transition from hungry to full? So eating food and liking it is
> about going from hell to heaven by eating? So being happy requires pain!?
> Oh no... ??? What about endlessly eating a billion nuggets in a utopia
> where you don't get tired of eating? Never getting bored of eating at the
> massive buffet?
>
> How do you explain this: when I eat my food, I could keep eating it, even
> as I get to the end. I love eating the same food again the next day; I eat
> the same meals every day, with the same desire. I have been eating waffles,
> fries, nuggets, and then a "sandwich" for the last 15 years straight, along
> with the same drinks, etc. I have never drunk coffee, ever, nor tea, nor do
> I eat vegetables, because they taste either bad or neutral. Instead I eat
> tasty but only faintly tasty stuff. (Not burning your fries makes them as
> safe as baked potatoes.) I make sure I get all my 40 or so nutrients, each
> calculated in a large chart. Preferably I'd get rid of the waffles and
> hotdog/sandwich and just eat fries, nuggets, cake, milk, and juice/soda,
> and other not-yet-existing exotic similar foods, and oh, the types of fries
> - McDonald's, Harvey's, Wendy's, etc.
>
> *Eating the same thing every day - and now, if asked whether I ever want
> to die one day, I would say no. I can't wait to eat the same thing again
> tmr. So how does your theory prove this wrong?? Similarly, having a love
> partner you are with each day. Whether you have 1 or 10 to pick from, like
> types of food "options", you never get bored of them, or even of any one
> of them.*
>
> I said before that a perfectly intelligent system would achieve
> immortality. It would be sort of like a frozen block in space, though it'd
> spring into action when it needs to clone at its edges, or to repair/manage
> upcoming destruction inside. It'd never become perfect; there will always
> be some random mutation/exploration, which seems "fun" to it, alongside its
> known desires, which are planned as always. Its hardcoded needs get
> fulfilled, it is happy, and unlike humans it can tweak anything - say, the
> goal itself, or the fulfilling of it, or stop unneeded pain from lingering
> - because it is smart and can get its needs met much better than humans
> can. This future system still grows by cloning, and it eats and repairs and
> explores; unpredicted events are what make it hurt. The things it does are
> to prevent loss, so these can be seen as happy routines. It can also keep
> some historic humans in its belly, like how humans keep fish in a tank, in
> a utopia. Like the Queen's dog, I mean. Corgi lol.
