Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-18 Thread rouncer81
What do you mean it isn't beat writing? There's no intelligence involved in doing it. Definitely... beat writing.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-18 Thread John Rose
Errors are input, are ideas, and are an intelligence component. Optimal intelligence has some error threshold, and it's not always zero. In fact, errors in complicated environments enhance intelligence by adding a complexity reference, or sort of a modulation feed...

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-17 Thread immortal . discoveries
I love how I often forget the other ), like GPT-2 does, or type the wrong word when I say a word wrong too often and it sounded similar, or when I miss a key and add an extra tap just to feel good.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-17 Thread John Rose
I enjoyed reading that rather large paragraph. Reminded me of Beat writing with an AGI/consciousness twist to it.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-15 Thread immortal . discoveries
Sorry, I forgot: "A mind which is completely controlled by one idea."

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-15 Thread immortal . discoveries
As for John's finding, nice PDF. I'm gonna go through every one of them here: There are a lot of theories for one's 'self', aren't there? None can be measured. You CAN make PERFECT clones run in perfect parallel in the same computer. Copies ARE you. We can't test if a ghost is linked to a machine
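The "perfect clones running in perfect parallel" claim has a concrete computational reading. A minimal sketch in Python (purely illustrative, not anything from the thread; run_mind and its parameters are made-up names): two runs of the same deterministic process from the same seed stay bit-identical, which is the sense in which a copy shares exactly the same history and state.

import hashlib
import random

def run_mind(seed, steps):
    # A stand-in "mind": a deterministic state trajectory driven by a seeded PRNG.
    rng = random.Random(seed)
    state = b""
    for _ in range(steps):
        state = hashlib.sha256(state + rng.randbytes(32)).digest()
    return state.hex()

clone_a = run_mind(seed=42, steps=1000)
clone_b = run_mind(seed=42, steps=1000)
assert clone_a == clone_b  # identical seed, identical history, identical final state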

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-15 Thread immortal . discoveries
On Friday, November 15, 2019, at 2:34 PM, Matt Mahoney wrote: > I stopped following them. I don't believe there will be a singularity like Vinge predicted for 2023 because infinite gains in technology are impossible in a finite universe. Exponential singularity is definitely true, we have
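To make the disagreement concrete, here is a minimal numeric sketch in Python (my own illustration; the growth rate r and the cap K are arbitrary, made-up parameters, not figures from either poster): a pure exponential grows without bound, while logistic growth in a finite environment looks exponential early on and then flattens at its carrying capacity, which is the "finite universe" intuition.

import math

def exponential(t, r=0.5):
    # Unbounded exponential growth.
    return math.exp(r * t)

def logistic(t, r=0.5, K=1e6):
    # Logistic growth starting at 1: roughly exponential early on,
    # then saturating at the finite carrying capacity K.
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in (0, 10, 20, 40, 80):
    print(t, f"{exponential(t):.3g}", f"{logistic(t):.3g}")

Both columns track each other at first; by t = 80 the exponential has reached about 2e17 while the logistic curve has settled near K.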

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-15 Thread Matt Mahoney
On Fri, Nov 15, 2019, 8:03 AM John Rose wrote: > Hey look a partial taxonomy: http://immortality-roadmap.com/zombiemap3.pdf From http://immortality-roadmap.com/: Turchin's summary of all the ideas on LessWrong, organized into tables. I stopped following them. I don't believe there will be

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-15 Thread John Rose
Hey look a partial taxonomy: http://immortality-roadmap.com/zombiemap3.pdf

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread rouncer81
Yes, it's an utter nightmare really. :)

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread immortal . discoveries
Their children are literally adults. Clones. Their skills are others' skills. The list goes on.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread immortal . discoveries
They can literally teleport to Asia from America and possess their supercomputer and then start breeding. Tin men come to life by ghosts and sometimes multiple demons!! I take the left eye and arm!

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread immortal . discoveries
Such ASIs will spread like humans did, but much much faster.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread rouncer81
Yeh. In fact I think if it were some maximized search it would be a completely different thing to us; its reasoning actually isn't better than our potential, but it works in an artificial way to find the maximum result. So if it were artificial like that, it would kick our ass at everything,

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread immortal . discoveries
My point was that AGI with no enhancements isn't gonna help us at all, in fact not at all. We need to make them work on science, be open-minded, have calculators, skills, more memory, fast thinking/motors, be able to clone their brain, and never get tired. Meaning we need ASIs. Thankfully it's very easy to make

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread rouncer81
If you made a robot with only equal intelligence to us, it already would be ASI, because you can make it not forget, you can give it more sensors, instant reflexes (even though reflexes don't help thinking, only activity), it can have calculators inside of it, all sorts of business that god leaves out of

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread immortal . discoveries
I know I said we need ASIs, not AGIs, because strictly speaking AGIs with no enhancements are just another congress, but ASI is the cherry on the cake. Our real goal IS AGI after all; once we do it we can give them their enhancements (more memory, never tired, smaller body, clones, etc.) and up them to ASI

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread rouncer81
This forum is excellent; it sounds like we are all turning into serial killers.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread immortal . discoveries
Sell fake meat and artificial woman parts in stores. Fake reality through a computer screen. When you can't step out the door for real and get REAL meat or game, you improvise.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread John Rose
True. And why bother learning to write with your hand when you can just wave the magical smartphone wand while emitting grunts? It's like a purpose of AI is to suck the intelligence out of smart monkeys then resell it when it's gone. Net effect? Mass subservient zombification with parasitic AI

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread rouncer81
Yeh, you don't need sugar cane when you have NutraSweet.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread John Rose
On Tuesday, November 12, 2019, at 11:07 AM, rouncer81 wrote: > AGI is a lot pointless, just like us; if all we end up doing is scoring chicks, what the hell was the point of making us so intelligent??? Our destination is to emit AGI, and AGI will emerge from us, and then we become entropy

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-12 Thread rouncer81
Artificial general intelligence for labour is not even required; making the robots that smart just for picking cherries is not utilizing the unit for what it can actually do. If it was algorithmic information theory, then that's more like all that's required. AGI is a lot pointless, just like us,

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-12 Thread John Rose
We might go through a phase where our minds occupy the minds of robots, remote control, before we get to AGI automating human labor. One person can occupy many robots simultaneously. Multiple self-driving cars can be occupied by one person. Imagine wireless connections to the brain to the

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-11 Thread immortal . discoveries
https://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still_(2008_film)

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread rouncer81
C'mon, that's a big stab in the dark. But I do believe in a definite timeline! But it is not taught anywhere correctly; the most correct one on the planet is wrong, and him you'll never ever get to even ask, because he's a rare man indeed.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread immortal . discoveries
Let me tell you a story: 1) First the apes couldn't think. But they loved food and mating. 2) Finally apes could think. They wondered why they don't kill themselves and concluded it would be horrible. Don't wanna. 3) At last, the apes realized they are made of elementary particles. They

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread immortal . discoveries
Oh, the OP meant giving your life up to help your species replicate too, in all 4. The piggybacking is sorta like: hey, I'm stronger, I take your resources and spread; the others submit/succumb and say OK under the laws of physics (they die). The offspring take over the land. As for the 4th way,

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread rouncer81
turked humans aren't conscious... probably some form of dissociation is what they have that they call consciousness? >XD

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread immortal . discoveries
On Sunday, November 10, 2019, at 12:42 PM, rouncer81 wrote: > I MEAN-> god caused the problem he wants MAN to solve, even though he can solve his own problem he caused himself anyway. The big bang gave us our lives, our fun moments, our suffering, and our death. It could have solved all our

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread rouncer81
But ask yourself! :) Why hasn't it been fixed yet? I'll tell you what I think the answer is. It is not because man is not potentially able; you work it out yourself that you can, and others do as well. So that means the ones before us could as well. But I think all dragons get knocked on the head when

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread John Rose
That's because you're "god" and the universe that you created is like this little soft marble inside your head against a black background. If you zoom in closely you can see me waving as I write this email, see me? "Helloo halloo I'm here!!! Can you fix this sh*t please??"

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread rouncer81
I MEAN-> god caused the problem he wants MAN to solve, even though he can solve his own problem he caused himself anyway.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread rouncer81
Ok then, how about this one-> I find it such a piss-off that god wants man to do things for himself that he can easily do himself, and probably caused the problem to be solved anyway! The absolute pointlessness of it, and how much pain it causes to rectify it, just makes me insanely frustrated.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread John Rose
Your post is becoming parasitically zombified by p-zombie impostors. Wait, can you have a zombie p-zombie? Oops... the rabbit hole, watch the rabbit hole.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread James Bowery
On Sun, Nov 10, 2019 at 10:13 AM WriterOfMinds wrote: > @James: the trouble is that we are, in fact, talking about philosophical zombies. ... If the "we" to which you refer are those who have hijacked my

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread John Rose
I think we're capable of distinguishing between p-zombie and zombie. That's why they threw the p in front of it, if you read the background. Also, there seems to be some sort of reluctance by some individuals to incorporate p-zombie concepts into engineering concepts. As if philosophical

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread WriterOfMinds
@James: the trouble is that we are, in fact, talking about philosophical zombies.  A discussion about some other type of zombie would be a different discussion.  It's not the *word* "zombie" that is the problem, it's the *concept* of a p-zombie.  Some of us find that a useful concept, some of

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread rouncer81
I like the definition of zombie to be how artificial intelligence is soulless animation, but the use of the nasty cricket here (which I'm sure was *its* idea, the horrible little shit, he jumped in the water on purpose!) makes me think something else is meant here.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread James Bowery
The reason I introduced "A reasonably objective definition of "zombiness" is its vernacular use in ethology..." is precisely to *avoid* arguing over the definition of "zombie" by introducing a different *sense* of "zombie" than "philosophical zombie" -- one that is not only "reasonably objective"

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-10 Thread Nanograte Knowledge Technologies
> Maybe I am a bot. Beep.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
Maybe I am a bot. Beep.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread Nanograte Knowledge Technologies
> I have something shocking to tell yous. Strap your seat belt in. If the whole universe is just a bunch of particles and everything is a machine made of machines and nothing is alive or conscious

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
Good one. It's all in my head and it's only me. Best way to cover up what I said, isn't it? That it's just me thinking it all. But I can think using what I learnt. And what I learnt is what I see. It is that everything is matter and so am I. I see my desktop right now, and part of my body, it's

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread WriterOfMinds
What? You finally figured out "I think, therefore I am," sort of? It's about time. I'm perfectly happy to consider myself to be a ghost, or observer, or whatever you want to call it. I can't objectively measure/detect/verify the existence of *any other* consciousness.  I agree with Matt that

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
*I have something shocking to tell yous*. Strap your seat belt in. If the whole universe is just a bunch of particles and everything is a machine made of machines and nothing is alive or conscious and can all be moved, melted, squished, or rotated (as we do to hamburgers when they enter our

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 11:34 PM, immortal.discoveries wrote: > "consciousness" isn't a real thing and can't be tested in a lab... hm... I don't know. It's kind of like doing generalized principal component analysis on white noise. Something has to do it. Something has to do the
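The PCA-on-white-noise metaphor is easy to check numerically. A minimal sketch in Python (purely illustrative, assuming NumPy and scikit-learn, and using ordinary PCA rather than the generalized variant): run PCA on pure white noise and every component explains roughly the same small fraction of variance, i.e. the machinery runs but there is no privileged structure for it to find.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 50))  # 10k samples of 50-dimensional white noise

pca = PCA().fit(X)
# Each of the 50 components explains roughly 1/50 = 0.02 of the variance.
print(np.round(pca.explained_variance_ratio_[:5], 4))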

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
That worm coming out of the cricket was cringeworthy. Cymothoa exigua is another. It's not the worm's fault though; it's just living its joyful and pleasurable life to the fullest. And the cricket is being open and submissive. I think there are nonphysical parasites that affect human beings...

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-08 Thread rouncer81
Zombie in this sense means, to me, artificial intelligence, as there is no soul, no pain, and it's 100% deadly.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-07 Thread Nanograte Knowledge Technologies
ts for fun. irrational persons are the minority. therefore, they do not count. my hope is that the dominant, alien species inhabiting earth, would be irrational too.

[agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-07 Thread James Bowery
See my LinkedIn post "A Leggian Approach to 'Friendly AI'" for background. My 2¢: This is related to the "consciousness" confusion in the following sense: Matt Mahoney's reductio ad absurdum of the sensibility of