Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
Maybe I am a bot. Beep.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M6d9db2f4c62c4fd55464177f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread Nanograte Knowledge Technologies
Usually, the acid test for this crow's nest of conjecture is a good whack in 
the head. If it shouts "Ow!!!", it must surely exist. The end.

See why I suspect you of being a bot?


From: immortal.discover...@gmail.com 
Sent: Saturday, 09 November 2019 22:04
To: AGI 
Subject: Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M57aa93ec75b1d717278037bd
Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: Missing Data

2019-11-09 Thread John Rose
There will often be loss in "perceptually lossless" compression for some
humans. Some humans see in the dark and some have hypersensitive hearing, so
what is perceptually lossless for most will be perceptually lossy to them. But
it depends on what they want: perceptually lossless but lossy in the
imperceptible part, perceptually lossy but lossless in the imperceptible part,
lossless in both senses, or lossy in both. It also depends on the perceivers.
They could be robots that transmit outside of human sensory range, say on a
battlefield where the electromagnetic spectrum is unavailable due to jamming…
or a scientist who records sounds outside the human audible range to bring
them into human hearing.

And is it perceptually lossless or perceptually near-lossless? Near is with
error control.

Either way it's all lossylosslessness or losslesslossyness or both:
lossylosslesslosslesslossyness.
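
To make the perceiver-dependence concrete, here is a minimal sketch (the
48 kHz rate, 20 kHz cutoff, and NumPy low-pass are illustrative assumptions,
not a real codec). The round trip is not bit-exact, so it is mathematically
lossy, yet all of the error sits above the typical human hearing range:
perceptually lossless for most listeners, perceptually lossy for a listener
or sensor with wider bandwidth.

import numpy as np

RATE = 48_000        # samples per second (assumed)
CUTOFF_HZ = 20_000   # rough upper edge of typical human hearing

def lowpass_codec(signal):
    """Toy 'codec': drop all spectral content above CUTOFF_HZ."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / RATE)
    spectrum[freqs > CUTOFF_HZ] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

t = np.arange(RATE) / RATE
audible = np.sin(2 * np.pi * 1_000 * t)            # 1 kHz tone
ultrasonic = 0.1 * np.sin(2 * np.pi * 22_000 * t)  # 22 kHz tone most people can't hear
original = audible + ultrasonic
decoded = lowpass_codec(original)

print("bit-exact lossless:", np.array_equal(original, decoded))  # False: lossy
print("max sample error:", np.max(np.abs(original - decoded)))   # ~0.1, all of it ultrasonic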

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Mc1990a87793e6c8b6ef2331d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-09 Thread WriterOfMinds
Poor analogy.
Suppose you receive a requirement from a customer for a "lossy compressor,"
and you design them a compressor that delivers lossless results for some data
sets. No one will mind. You have met the requirement.
Suppose you receive a requirement from a customer for a "lossless compressor,"
and you design them a compressor that is sometimes lossy. If your customer
finds out ... you are in trouble. You have failed to meet the requirement.
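
A sketch of that asymmetry in Python (zlib as the lossless example; the
4-bit-truncating "lossy coder" and its names are made up for illustration).
The lossless requirement is a round-trip property that must hold on every
input, while a lossy coder that happens to round-trip exactly on some inputs
is merely a pleasant surprise:

import zlib

def lossless_roundtrip(data: bytes) -> bool:
    # The property a "lossless compressor" must satisfy on *every* input.
    return zlib.decompress(zlib.compress(data)) == data

def lossy_encode(data: bytes) -> bytes:
    # Toy lossy coder: throw away the low 4 bits of each byte.
    return bytes(b & 0xF0 for b in data)

def lossy_roundtrip(data: bytes) -> bool:
    # "Decoding" is the identity, so this holds only by accident.
    return lossy_encode(data) == data

samples = [b"\x00\x10\x20", b"hello world", bytes(range(256))]
print(all(lossless_roundtrip(s) for s in samples))  # True - requirement met
print([lossy_roundtrip(s) for s in samples])        # [True, False, False]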
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Mf3bec4cfc4e56f82105eab51
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
Good one. It's all in my head and it's only me. Best way to cover up what I
said, isn't it? That it's just me thinking it all. But I can think using what I
learnt, and what I learnt is what I see: that everything is matter, and so am
I. I see my desktop right now, and part of my body; it's all matter. Every one
of my thoughts/senses registered in my brain, whether from my eyes or from
within, is a different image or sound, and they are all made of the same
stuff: particles. That's all I am. I see stuff. But I'll never know if I'm in
a sim. Maybe I am a ghost and the universe is in me. But then it's the same
theory I gave above: it's all matter, with a ghost who does the observing.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M2a1bca9a29d79bb4566bd512
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread WriterOfMinds
What? You finally figured out "I think, therefore I am," sort of? It's about 
time.

I'm perfectly happy to consider myself to be a ghost, or observer, or whatever 
you want to call it. I can't objectively measure/detect/verify the existence of 
*any other* consciousness.  I agree with Matt that far -- but no farther.  The 
existence of *a* consciousness, the one that I refer to as "me," is an 
empirical fact.  It's not a "religious" idea because there is no element of 
faith about it; I know it by direct observation.  I can't be sure if there are 
other observers, other experience streams, in the universe ... without access 
to them, how would I know? ... but I guarantee that this *one* is real.  And it 
seems to be connected to my body/machine, because when my body enters a 
dreamless sleep, the experience stream is interrupted.  It doesn't shut off 
when anyone else's body is sleeping.  Weird.

Go on: try it.  Believe in at least one ghost.  Then you won't have to tie 
yourself in knots wondering why you think the universe exists.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M2d2b2e3971bb1e4c502a22eb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread immortal . discoveries
*I have something shocking to tell you.* Buckle your seat belt. If the whole
universe is just a bunch of particles, and everything is a machine made of
machines, and nothing is alive or conscious, and it can all be moved, melted,
squished, or rotated (as we do to hamburgers when they enter our mouths, or to
electrons when they move on a chip), then we cannot tell whether we are really
here! That's right: we machines cannot observe that the universe exists.
Nothing can! Therefore, although I am typing on my keyboard, I am not; you are
not reading this; and so on. It's all under the hood. Nothing can 'observe' or
'act on' what we are doing, for we are just machines, like the molecules in a
hamburger. There's no such thing as 'observing'. There's no ghost that
actually sees the camera feed or sends out motor actions!! No one or no thing
can see that we are here!! There's no way to test it or live it. There's no
machine that can 'live' or 'have fun'. My body is a giant collage connected to
my bedroom. Forget verifying we are here; we need something to actually see or
feel that we are here and 'live', and it's not us machines that can do that.
*Therefore*, observers (ghosts) MUST exist for the universe TO exist! So how
many ghosts are there? I'm still skeptical. Does each brain have a ghost? Do
worms? Molecules? Atoms? Particles? All we know is that the universe needs
someone to experience/enjoy it. But what law spawns a ghost? The hosting
machine must live a long time. But so do atoms and molecules. What happens
during all the time a brain doesn't recognize an image or internal thought? Do
ghosts come back into existence at 60 fps in our brains? What if I'm uploaded
to the cloud? What if I have an exact clone of my brain? Does it create a new
ghost? How does the universe handle two now? It seems silly. Yet we don't
exist if there's no ghost, because all we would be is a load of particles in a
new arrangement, and nothing could 'see' or 'enjoy' the universe. Therefore I
take with me my crazy view of Earth and walk away with a ghostly thought that
may be true. *SO*, the whole universe isn't actually here if there's no ghost
to actually feel it. You may argue that the particles ARE here. Well, yeah,
but there's no one to see that we are here, and more importantly there's no
one to even enjoy anything! We are just machines, and can't even be observed
to exist! We are tin cans, and there's not even a movie of them. To actually
see the universe, let alone enjoy it, you need a ghost; no machine can see or
enjoy anything, as there's no such thing that you could build that ever does
that. And why do I keep saying "see"? What is "see"? What about smell, or the
things my kidneys do? At least if we were just tin cans we could be observed
to exist, but there's not even a movie. The universe doesn't exist, or, better
put, none of us are here and we are talking about nothing. No one here
actually knows what I just explained, and no one will, because we aren't
ghosts, and therefore no 'one' or 'thing' in the universe can know what we
just learnt today. The machine is working in the universe; no 'one' learnt
anything today. Get me? You're not a person, not segmentable, not even
something to be talked about, nor can you talk.

Dear Machine Learner crackpots: Zombies Are Us already. Humans are machines.
Humans can't actually feel pleasure or pain (just as you can't touch the real
world, being a brain), and death is fine. It already happens. Humans shoot
others in the head with guns. Animals are tortured. Birth/death is normal, and
both pleasure/pain are needed to train our algorithms. You think death won't
happen to future AIs, but it will, as their old body is updated with a new
one. Ignore the brain. It's all machinery; there's no segmentation. Meaning we
will certainly die at least once. Every day our memories change; you are not
the same machine. We will probably avoid extinction, as our bodies are
*inclined* to resist death (or the death of our group, for the non-selfish
machines). We just don't want ASIs killing us or torturing us; they may trick
us.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M1e4e77902ed9935e4fa11627
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] arXiv endorsement request from Basile Starynkevitch for RefPerSys (a symbolic AGI project - design draft)

2019-11-09 Thread Matt Mahoney
Again, skipping the requirements and going straight to design. Exactly what
problem are you trying to solve?

On Sat, Nov 9, 2019, 10:07 AM Mike Archbold  wrote:

> The abstract is a bit deflating. Why not take out "hobby"?  If you're
> this serious I wouldn't call it a hobby maybe just say "early
> stages... embryonic... first milestone" etc.  Mike A
>
>
>
> On 11/9/19, Basile Starynkevitch  wrote:
> > Hello all,
> >
> > I would like to submit the draft
> > http://starynkevitch.net/Basile/refpersys-design.pdf to arxiv.
> >
> > Basile Starynkevitch requests your endorsement to submit an article to
> > the cs.AI section of arXiv. To tell us that you would (or would not)
> > like to endorse this person, please visit the following URL:
> >
> > https://arxiv.org/auth/endorse?x=VXZWK4
> >
> > If that URL does not work for you, please visit
> >
> > http://arxiv.org/auth/endorse.php
> >
> > and enter the following six-digit alphanumeric string:
> >
> > Endorsement Code: VXZWK4
> >
> > Regards
> >
> > --
> > Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
> > opinions are mine only - les opinions sont seulement miennes
> > Bourg La Reine, France; 
> > (mobile phone: cf my web page / voir ma page web...)
> >

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0abbd9c5ee66e240-Md9095ac22835ca5b2bc6d3c7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 1:30 PM, WriterOfMinds wrote:
>> Re: John Rose: "It might be effectively lossless it’s not guaranteed to be 
>> lossy."
> True. But I think the usual procedure is that unless the algorithm guarantees 
> losslessness, you treat the compressed output as lossy.  Lossless is, how 
> does one say it, the protected category?

This is like saying, for example: it's late fall in the northern latitudes,
it's 50 °F, and you say to your friend, "It's warm today." He says, "Agreed."

Then it's mid-summer, it's 50 °F, and you say to your friend, "It's warm
today." He says, "Disagreed. Why did you say that?" And you say, "Fall is the
protected category."

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M54787d2f951607f78e5f8896
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread John Rose
Perhaps we need definitions of stupidity. With all artificial intelligence
there is artificial stupidity? Take the diff and correlate it to bliss
(ignorance). Blue pill me, baby. Consumes fewer watts. More efficient? But
survival is negentropy. So knowledge is potential energy. Causal entropic force?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M464c55ef1215f51c8a4afc56
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 11:34 PM, immortal.discoveries wrote:
> "consciousness" isn't a real thing and can't be tested in a lab...

hm... I don't know. It's kind of like doing generalized principal component
analysis on white noise. Something has to do it. Something has to do the
consciousing.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M515a0e87e04018e136474d5c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] arXiv endorsement request from Basile Starynkevitch for RefPerSys (a symbolic AGI project - design draft)

2019-11-09 Thread Mike Archbold
The abstract is a bit deflating. Why not take out "hobby"? If you're
this serious, I wouldn't call it a hobby; maybe just say "early
stages... embryonic... first milestone," etc.  Mike A



On 11/9/19, Basile Starynkevitch  wrote:
> Hello all,
> 
> I would like to submit the draft
> http://starynkevitch.net/Basile/refpersys-design.pdf to arxiv.
> 
> Basile Starynkevitch requests your endorsement to submit an article to
> the cs.AI section of arXiv. To tell us that you would (or would not)
> like to endorse this person, please visit the following URL:
> 
> https://arxiv.org/auth/endorse?x=VXZWK4
> 
> If that URL does not work for you, please visit
> 
> http://arxiv.org/auth/endorse.php
> 
> and enter the following six-digit alphanumeric string:
> 
> Endorsement Code: VXZWK4
> 
> Regards
> 
> --
> Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
> opinions are mine only - les opinions sont seulement miennes
> Bourg La Reine, France; 
> (mobile phone: cf my web page / voir ma page web...)
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0abbd9c5ee66e240-M4966123dc0b012535f0ad291
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
That worm coming out of the cricket was cringeworthy. Cymothoa exigua is
another.

It's not the worm's fault, though; it's just living its joyful and pleasureful
life to the fullest. And the cricket is being open and submissive.

I think there are nonphysical parasites that affect human beings...
informational, replicating, mind-controlling. Though evolution has endowed us
with defenses, with AGI we'll be easily manipulable. It will be able to
construct particular sorts of mental knots, distributed knots, and patterns to
lock in thinking, and use them, I would hope, in good ways. Skillful rulers
and parties effectively use that and/or take advantage of it, sometimes
creating human zombies where independent thinking is punished. But if an AGI
has no sort of higher authority, why would it not use that ability to benefit
only itself and a privileged elite few? Like the happy worm, AGI could
eventually embody itself in us, instead of the vice-versa mind uploading
people usually think about.

To be congenial and symbiotic beings, it might be easier to embrace our fate
like the cricket and become willfully zombified. Isn't it more efficient to
have one mind thinking for everyone instead of many independent ones? Like
having one totalitarian world government instead of many contending
individuals? It saves energy, less pollution, fewer resources needed to power
the overall intelligence. Instead of occupying static patterns we occupy
manipulated ones, because they, or it, know what's better and how to guide us
for the benefit of all!

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M774616f91bad0415e1ebc797
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread Bill Hibbard via AGI
> Philosophy is arguing about the meanings of words.

For me, the great lesson of philosophy is that any
language that is general enough to express all the
ideas we need to express is able to express questions
that do not have answers. For example, "Is there a god?"

This may be related to the fact that if a programming
language is general enough to express all algorithms
then there are undecidable questions about programs in
the language. For example, "Which programs halt?"
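
For what it's worth, the undecidable side can be sketched in a few lines of
Python. The halts() function below is hypothetical - no total, always-correct
implementation can exist, which is exactly the point of the diagonal argument:

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no correct implementation can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about a program
    # run on its own source.
    if halts(program, program):
        while True:
            pass    # loop forever
    else:
        return      # halt immediately

# diagonal(diagonal) contradicts any answer halts(diagonal, diagonal) could
# give: if the oracle says "halts", diagonal loops forever; if it says
# "loops", diagonal halts. So no such halts() can exist.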

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M621ffbdf83c11a26afeef9b3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler

On 2019-11-08 15:58:PM, Matt Mahoney wrote:
> You can choose to model I/O peripherals as either part of the agent or
> part of the environment. Likewise for an input delay line. In one case
> it lowers intelligence and in the other case it doesn't.


Thinking about it in computer science terms blurs the issue,
because there you can model everything as signal processing,
and the agent-environment distinction can become more murky.

Definitions of intelligence should also apply to biological
systems. The distinction between agent and environment can
get a bit blurry there as well, what with the "extended phenotype",
but eyes, ears, and muscles are normally part of the agent,
not part of the environment. I don't think it can coherently
be argued that Legg and Hutter intended to exclude sensory /
motor systems from their definition on the grounds that those
were part of the environment.
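
For reference (quoting from memory, so treat the exact notation as
approximate), the Legg and Hutter (2007) measure under discussion is,
in LaTeX:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable reward-generating environments, K(\mu) is
the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the
expected cumulative reward agent \pi obtains in \mu. Nothing in the formula
itself fixes where the agent/environment boundary is drawn - which is why
the sensor/motor question above matters.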

--

__
 |im |yler http://timtyler.org/  t...@tt1.org  617-671-9930


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M4f1a14bcbce2577d0f70a8f5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler

On 2019-11-08 17:53:PM, Matt Mahoney wrote:

> we can approximate reward as dollars per hour over a set of
> real environments of practical value. In that case, it does
> matter how well you can see, hear, walk, and lift heavy objects.
> Whether you think that's fair or not, it matters for AGI too,
> whether it's purpose is to automate human labor or to upload
> your mind into a robot.

The issue isn't about whether sensors and motors are important.
It is about *terminology* - whether we include these components
in definitions of intelligence.

> Defining intelligence is proving to be as big a distraction
> as defining consciousness.

No way! ;-)

> Philosophy is arguing about the meanings of words.

Fair enough. An engineer might not care much how intelligence
is defined. However, Orwell argued that language shapes thought -
and I believe it. I first want to make sure that my audience
knows what I am talking about when I use a word, and I also want
to make sure the meanings I use are good - and not too rare or
counter-intuitive.

--
__
 |im |yler http://timtyler.org/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M28d95acfa84a9240607536fe
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Deviations from generality

2019-11-09 Thread TimTyler

On 2019-11-08 20:34:PM, rounce...@hotmail.com wrote:
> The thing about the adversary controlling the environment around the
> agent, his brain is working with the same physics as your feet
> hitting the floor, but its not simulatable in a physics system,
> because its not mechanical to start with, but why it could never be
> ever simulated is you dont have xray vision to build the model of his
> brain to predict what he does!

Adversaries don't need perfect models to be able to thwart your
ability to attain your goals. They need some skill and ability,
of course, but high quality simulations of your brain are
absolutely not required.

On 2019-11-08 19:30:PM, Stanley Nilsen wrote:

> Jumping down into the laws of physics is one example.  Weren't people
> fairly intelligent when they knew little about physics and the laws of
> nature?  Yes, there is the "repeatability" of natural phenomenon given
> that nature runs on pretty strict rules, but is the "intelligent"
> stuff man does, that is making "better" choices, due to the fact that
> man "learned" the details of nature's rules?


That's part of it, yes. The brain builds a model of the world,
and uses it to predict the future consequences of its possible
actions, followed by evaluation of the results. Then, that's the
data that is used to choose between actions. Of course the model
is not all represented consciously and labelled as being
"the laws of physics" - but a representation of the laws of
physics is still in there, even in cavemen who lived long
before Newton was born.
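
That predict-evaluate-choose loop can be sketched in a few lines of Python.
Everything here (world_model, evaluate, the toy one-dimensional world) is an
illustrative placeholder, not a claim about how any brain implements it:

def choose_action(state, candidate_actions, world_model, evaluate):
    """Pick the action whose predicted consequence scores best."""
    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        predicted = world_model(state, action)  # imagined consequence
        value = evaluate(predicted)             # how desirable is it?
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Toy usage: a one-dimensional world where being near the fire at x = 3 is good.
actions = [-1, 0, +1]
model = lambda x, a: x + a          # the implicitly learned "physics"
goodness = lambda x: -abs(x - 3)    # closer to the fire is better
print(choose_action(0, actions, model, goodness))  # -> 1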

--

__
 |im |yler http://timtyler.org/  t...@tt1.org  617-671-9930


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3326778943da25b8-Mf4892dfa1d7e7e7c20396249
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] arXiv endorsement request from Basile Starynkevitch for RefPerSys (a symbolic AGI project - design draft)

2019-11-09 Thread Basile Starynkevitch

Hello all,


I would like to submit the draft 
http://starynkevitch.net/Basile/refpersys-design.pdf to arxiv.




Basile Starynkevitch requests your endorsement to submit an article to
the cs.AI section of arXiv. To tell us that you would (or would not)
like to endorse this person, please visit the following URL:

https://arxiv.org/auth/endorse?x=VXZWK4

If that URL does not work for you, please visit

http://arxiv.org/auth/endorse.php

and enter the following six-digit alphanumeric string:

Endorsement Code: VXZWK4


Regards

--
Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
opinions are mine only - les opinions sont seulement miennes
Bourg La Reine, France; 
(mobile phone: cf my web page / voir ma page web...)


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0abbd9c5ee66e240-M9b480024e50fe9528ac0d4f8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread Nanograte Knowledge Technologies
I use rational in the sense of being reasonable. To me, the phrase: "It stands 
to reason." = "It seems rational."

The difference between my version of 'rational' and your version seems rather
odd to me too. Being rational is not being sentient. An animal - when acting
outside the scope of its instinct alone - could be said to be rational. A
judgment of 'pragmatism' has nothing to do with the fact that it wags its tail
at you because it recognizes you, or ignores you when it doesn't. Pragmatism is
a rational means for solving paradoxical situations.

Sentience, of senses and spirit, of dimensionality - is not something one can 
induce via rational thought alone. I suspect your universe of the mind has much 
room for expansion, for you seem to limit the boundaries of your vocabulary to 
become less than even the Oxford dictionary allows for.

Rational - "(of a person) able to think clearly and make decisions based on
reason rather than emotions; synonym: reasonable. 'No rational person would
ever behave like that.'" (Oxford Collocations Dictionary)

Sentient - "[usually before noun] (formal) able to see or feel things through
the senses. 'Man is a sentient being.' 'There was no sign of any sentient life
or activity.'" (Oxford Collocations Dictionary)

Irrational - I somewhat concur with Merriam-Webster's sense (b) thereof: "not
governed by or according to reason."
But then, if we did that, we would have to reject all science making use of any 
irrational term. Clearly, the term irrational, in this sense, refers to another 
form of reason we have not yet defined properly. For example, is consciousness 
rational, or irrational, or something else?

From: WriterOfMinds 
Sent: Saturday, 09 November 2019 08:46
To: AGI 
Subject: Re: [agi] Against Legg's 2007 definition of intelligence

Nanograte, you seem to use "rational" oddly.  Almost as if it's a synonym for 
"pragmatic." That's not what I was trying to say at all.

In the sense I had in mind, the word means "possessing higher reasoning 
powers," as in the phrase, "man is a rational animal."  I paired it with 
"sapient" because that's a similar concept.  I did not mean "strictly logical" 
or "hyper-practical" or "single-minded and obsessive" or "amoral" or "rigid."


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M97f5e999ae7c66479fe6cef5
Delivery options: https://agi.topicbox.com/groups/agi/subscription