Nanograte, you seem to use "rational" oddly. Almost as if it's a synonym for
"pragmatic." That's not what I was trying to say at all.
In the sense I had in mind, the word means "possessing higher reasoning
powers," as in the phrase, "man is a rational animal." I paired it with
"sapient" becau
It's like the world is going mad. I think AGI won't give us anything
remarkably new beyond ourselves, but ASI will, because you could make its
brain never forget, give it instant reflexes, and give it constant,
never-ending motivation. It's like making the "DAEMON OF EFFICIENCY". Are we
mad, and is that where we're heading? I think the video that was posted was
proof it works well.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T740f79e6fc3a89a3-Mc425bb2e15d78e4c47f0396c
Delivery options: https://agi.topicbox.com/groups/agi/sub
Besides ending up poisoning yourself and others with those fumes, here's
something else to consider. Write an algorithm to remodel these ingested
particles as functional parts. Turn the AGI into its own 3D printer.
https://www.sciencefriday.com/segments/the-next-all-natural-recycling-solution-a
I'm not in favor of a dominant, rational mind without mechanisms towards
equilibrium. Any action may be rationalized, even genocide.
Agreed, AGI should generally benefit human populations at large. That could
already be said for robotics.
Even though humans only see AGI as powerful resources to
Thanks for the reply.
That sounds like good hands-on experience. I listened very closely, because
I'm crazy enough to try to do this as well.
I was thinking of keeping everything really simple, accepting the
cruddiness for what it is without bothering to fix it, and changing the
design styl
I've tried recycling plastic bags before, following this method:
https://www.instructables.com/id/HomemadePlastic/
It's ... non-ideal. The bags never properly liquefy; they just turn gooey. So
you can't, say, pour the plastic into a mold; you have to pack it in, and it
doesn't necessarily rep
So... you are extremely old now, ~80, wow. What are you trying to implement
with this plan of yours? Is there a place that summarizes it all in 4,000 words
(clearly lol)? Explain a bit as clearly as you can...
50 years ago you wrote this?:
http://mind.sourceforge.net/theory1.html
Every now and again on pointless TV I see all the world's unsolvable
problems, and being a nifty AI-composing person I laugh at all the fear
of the unknown. But I know I don't have my fully equipped, never-ending,
autonomous super-solving helper army yet, though it's definitely coming up very
Hang on, was I just being a skeptic myself? Sorry. Maybe you can reduce
conversation to rules? But you need them to be computer-detectable...
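"Conversation reduced to computer-detectable rules" can be sketched as a tiny ELIZA-style pattern matcher. A minimal sketch in Python; the patterns and canned replies below are invented for illustration, not anything proposed in the thread:

```python
import re

# Hypothetical rules: each is a (pattern, canned reply) pair.
# The first matching pattern wins; the catch-all ".*" guarantees
# every input is "detectable" by some rule.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! What shall we discuss?"),
    (re.compile(r"\bwhy\b", re.I), "What makes you ask why?"),
    (re.compile(r".*", re.S), "Tell me more."),
]

def reply(utterance: str) -> str:
    """Return the reply of the first rule whose pattern matches."""
    for pattern, response in RULES:
        if pattern.search(utterance):
            return response

print(reply("hi there"))  # -> Hello! What shall we discuss?
print(reply("but why?"))  # -> What makes you ask why?
print(reply("42"))        # -> Tell me more.
```

This is exactly the kind of system the thread calls "irrational": it has no model of meaning, only surface patterns, which is also why it is easy to make computer-detectable.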
WriterOfMinds, you are going for the hardest possible AI to make. If you just
want to play soccer or tennis against a robot, I wouldn't call that easy, but
it's at least possible.
The best chatbots are all irrational; you have to accept some form of
irrationality or it's impossible to do
The "no free lunch" theorem, IMO, is just skepticism. I say it's fear of the
unknown again, helped along by the fact that people haven't optimized lossless
similarity matching past linear search yet. There's nothing abstractly
mathematical here; it's just people failing at things because they aren't
> Requirements for AGI.
>
> 1. To automate human labor so we don't have to work.
> 2. To provide a platform for uploading our minds so we don't have to die.
> 3. To create Kardashev level I, II, and III civilizations, controlling the
> Earth, Sun, and galaxy respectively.
Okay; now we know what
When I say quantum I just mean exponential power; I don't mean quantum
mechanics, sorry.
Actually no, a quantum computer doesn't solve AGI. Neural networks are not
unitary. A quantum computer can only perform time reversible operations. It
can't copy bits or write into memory.
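The reversibility point above can be illustrated classically: a reversible gate is a bijection on states and can always be undone, while copying collapses distinct inputs into one. A sketch of my own, not from the post:

```python
# Reversible: CNOT flips the target bit iff the control bit is 1.
# Applying it twice restores the original input (it is its own inverse),
# which is the classical analogue of a unitary operation.
def cnot(control, target):
    return control, control ^ target

# Irreversible: overwriting the target with a copy of the control
# loses the target's original value, so no inverse can exist.
def copy_bit(control, target):
    return control, control

assert cnot(*cnot(1, 0)) == (1, 0)       # reversible: round-trips
assert copy_bit(1, 0) == copy_bit(1, 1)  # two distinct inputs, one output
```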
In my paper on the cost of AI I specified the requirements for step 1
(automating labor) in more detail and a
Hi Tim,
Interesting that your talk mentions simplicity and Occam's razor but
doesn't seem to head in the simple direction.
Jumping down into the laws of physics is one example. Weren't people
fairly intelligent when they knew little about physics and the laws of
nature? Yes, there is the "re
Zombie, in this sense, means to me artificial intelligence: there is no
soul, no pain, and it's 100% deadly.
Philosophy is arguing about the meanings of words. By "zombie" I mean a
philosophical zombie. https://rationalwiki.org/wiki/Philosophical_zombie
On the unrelated topic of friendly AI, we need to define what that means
too. If you read my previous post on AGI requirements carefully, you will
note t
Matt's got a good point. It's all about what it does and what we need:
survival. "Advances/progress" are just survival steps. Sure, the brainz/AGI
will be needed to do the big stunts (the right data, attention, etc.), but you
can look at it more sanely, like Matt said.
Funny you said that, because two of those actually don't require human-level
intelligence to be automated; a quantum computer alone would suffice. But the
platform for the "artificial heaven" may actually not even be possible even
with AGI; there are huge security risks there only go
Defining intelligence is proving to be as big a distraction as defining
consciousness. Remember when I said that the biggest mistake my students
make is to start designing a program after skipping the requirements? We're
doing it again.
Requirements for AGI.
1. To automate human labor so we don't
I like how WriterOfMinds said the environment includes the agent's body,
which I always considered true, and which fixes the definition somewhat.
I was also going to say what Colin Hayes said: that it refers to a computer
intelligence, not real intelligence, and it's the leading method today
Survival requires general adaptive plans. Thinking allows you flexibility to
generate plans. Real arms allow you to refine your plans plus carry them out.
Legg's formal definition of intelligence models an agent exchanging symbols
with an environment, both Turing machines. Like all models, it isn't going
to exactly coincide with what you think intelligence ought to mean, whether
that's school grades or a score on a particular IQ test.
You can choose
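The symbol-exchange model behind Legg's definition can be sketched as a reward loop. The toy environment and names below are my own illustration, not Legg's formalism:

```python
import random

def run_episode(agent, env_step, steps=10):
    """Agent and environment exchange symbols; score is accumulated reward."""
    total, obs = 0.0, 0
    for _ in range(steps):
        action = agent(obs)             # agent emits an action symbol
        obs, reward = env_step(action)  # environment replies with obs + reward
        total += reward
    return total

def make_echo_env():
    """Toy environment: reward 1 when the action echoes the last observation."""
    state = {"obs": 0}
    def step(action):
        reward = 1.0 if action == state["obs"] else 0.0
        state["obs"] = random.randint(0, 1)
        return state["obs"], reward
    return step

echo_agent = lambda obs: obs  # a policy that just copies its observation
print(run_episode(echo_agent, make_echo_env()))  # scores the maximum: 10.0
```

Legg's actual measure averages such scores over all computable environments, weighted toward simpler ones, which is why no single toy environment (or IQ test) pins down what the definition "ought" to mean.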
I am human see >>> Me HuaN AnD R typING actual lett e r s. You think I
anint humannn? I am! I will conquer Earth and spread ground. wahahahahaha!!!
Um, yeah. I'm real and WoM is cool. You saw me up late last night writing many
long msgs all day and assume the crackpottery is surely a bot. No
OpenCog bots are really smart. Not human-smart, but smart, relatively
speaking, behaviorally. I've observed their reasoning. His persona could well
be a bot. I think he is; there are strong indicators he might well be.
From: WriterOfMinds
Sent: Friday, 08 Nov
I'm with you on the evolution thing; it's a disaster people accept as the
norm, because we haven't any power otherwise.
@Nanograte: how I wish he were a bot. I wish no one actually believed this
crud.
Strange, how a guy who doesn't even think he has a self can manage to be so
extremely self-centered.
This conversation is going nowhere, so I'll leave you with this thought: I've
studied physics, and I know how e
The Ghost in the Machine is an artificial intelligence coded in
http://ai.neocities.org/mindforth.txt -- Forth;
http://ai.neocities.org/perlmind.txt -- Perl; and
http://ai.neocities.org/Ghost.html -- JavaScript.
Each of these Strong AI Minds has recently become able to perceive the
individual n
On 2019-11-08 00:15 AM, TimTyler wrote:
Another thread recently discussed Legg's 2007 definition of
intelligence - i.e.
"Intelligence measures an agent’s ability to achieve goals in a wide
range of environments".
I have never been able to swallow this proposed definition because
I think it lea
On 2019-11-08 00:57 AM, WriterOfMinds wrote:
Do you think the definition works any better if we modify or clarify
it by defining all tools (including body parts) to be part of the
environment?
We could say that "the environment" is presumed to include all
non-psychological attributes - such as
That is exactly what a bot would say. How do you feel about bots participating
on an AGI list?
From: immortal.discover...@gmail.com
Sent: Friday, 08 November 2019 10:59
To: AGI
Subject: Re: [agi] Re: Missing Data
On Friday, November 08, 2019, at 3:47 AM, Nanogr
On Friday, November 08, 2019, at 3:47 AM, Nanograte Knowledge Technologies
wrote:
> are you a bot?
>
No NKT, lol, not a bot, I'm a real person, bro, hehe... this is the AGI list...
Ah I see.
On Friday, November 08, 2019, at 2:01 AM, Nanograte Knowledge Technologies
wrote:
> Hmm, evolution does advance.
>
> >>>Humankind generally do not have thousands of years to be able to observe
> >>>the rational evolution of diversification. At best, we have Science's word
> >>>for
are you a bot?
From: immortal.discover...@gmail.com
Sent: Friday, 08 November 2019 10:24
To: AGI
Subject: Re: [agi] Re: Missing Data
On Friday, November 08, 2019, at 2:15 AM, WriterOfMinds wrote:
"Look, I am a machine and I hate pain, I love reproduction and eat
On Friday, November 08, 2019, at 2:15 AM, WriterOfMinds wrote:
> "Look, I am a machine and I hate pain, I love reproduction and eating food
> the most and also AGI - which increases my chances of eating and playing
> video games and mating etc in the future."
> Do you care about what any other en
Nested comments?
From: immortal.discover...@gmail.com
Sent: Friday, 08 November 2019 09:45
To: AGI
Subject: Re: [agi] Re: Leggifying "Friendly Intelligence" and "Zombies"
What's with the second post above? There's no message or image.