I'm just saying it makes no difference to me, AI or no AI. =) hehe
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T5e30c339c3bfa713-Maa29adc5517abebcebd65a53
Delivery options: https://agi.topicbox.com/groups/agi/subscription
Yeah, let's get deeper into that bullshit. Because there's no real work to be
done, right?
On Thu, Sep 16, 2021 at 12:56 AM wrote:
>
> We are already replaced. God doesn't need any of us.
Dude. Each of us is a fractal image and portion of "God". Get your
eschatology straight my friend ;)
His point was that even if there's a twin machine of you, you still don't want
to die; you want to stop the replacement.
We *will* make true AGI though; it will be just like us soon, maybe by 2030. It
will simply be made of different materials, that's all.
It will replace us for all tasks ex. coming
On Wednesday, September 15, 2021, at 9:08 AM, Matt Mahoney wrote:
> Here is a robot that looks and acts like you as far as anyone can tell,
> except that it is younger, healthier, stronger, smarter, upgradable, immortal
> through backups, and it has super powers like infrared vision and wireless
Machines already do 99% of work, as measured by global economic
productivity relative to the price of food in 1800.
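For what it's worth, the arithmetic behind that measure can be sketched in a few lines. The dollar figures here are purely illustrative assumptions (not sourced data); the point is only the shape of the calculation:

```python
# Back-of-envelope sketch of the "machines do ~99% of work" measure.
# Both dollar figures below are illustrative assumptions, not sourced data.
subsistence_income = 500.0  # assumed yearly cost of food at 1800-era living standards
gdp_per_capita = 50000.0    # assumed yearly economic output per person today

# If unaided human labor could only ever earn subsistence (as in 1800),
# then the human-powered fraction of today's output is roughly:
human_share = subsistence_income / gdp_per_capita
machine_share = 1.0 - human_share

print(f"human share:   {human_share:.1%}")    # 1.0%
print(f"machine share: {machine_share:.1%}")  # 99.0%
```

Change the assumed numbers and the exact percentage moves, but any plausible pair gives the same qualitative answer: nearly all measured output is machine-amplified.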
And machines are not slaves. We abolished slavery because it was cruel to
humans. Machines are not human, even if we can make them look like humans,
pass the Turing test, and mimic
On 2021-09-09 at 23:21, Matt Mahoney wrote:
It would be existentially dangerous to make AGI so much
like humans that we give human rights to competing
machines more powerful than us.
Not having much in the way of human rights did not prevent slaves
from thriving during the era of slavery.
If you program your AGI to positively reinforce input, learning, and
output, will it develop senses of qualia, consciousness, and free will? I
mean in the sense that it is motivated like we are to preserve the reward
signal by not dying. Do we need this in AGI, or can it learn a model of the
human
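A toy sketch of the motivation part of that question (nothing about qualia, just standard discounted-reward reinforcement learning; the reward values and discount factor are made-up numbers): an agent maximizing discounted reward already "prefers" whatever keeps its reward stream alive, even when dying pays more up front.

```python
# One-state MDP with two actions (all numbers are illustrative assumptions):
#   "work":     reward +1, the agent survives to collect future reward
#   "shutdown": reward +10 now, but the reward stream ends (terminal state)
gamma = 0.95  # discount factor: how much the agent values future reward
q_work, q_shutdown = 0.0, 0.0

# Value iteration: back up each action's value until convergence.
for _ in range(1000):
    best = max(q_work, q_shutdown)
    q_work = 1.0 + gamma * best  # surviving keeps the reward stream going
    q_shutdown = 10.0            # one-time payoff, then nothing afterward

# q_work converges to 1/(1-gamma) = 20, which beats q_shutdown = 10,
# so the greedy policy avoids the terminal action.
print(q_work, q_shutdown)
```

So "not dying" falls out of reward maximization alone whenever the discounted stream of small rewards exceeds any one-shot payoff; whether that amounts to anything like qualia or free will is exactly the open question above.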
Theme for discussion this week: Characterizing and Implementing
Human-Like Consciousness
See
https://wiki.opencog.org/w/AGI_Discussion_Forum#Sessions
URL for video-chat: https://singularitynet.zoom.us/my/benbot ...
Background reading: