On 10/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> What's that got to do with superAGIs? This: the whole idea of a superAGI
> "taking off" rests on the assumption that the problems we face in life are
> soluble if only we - or superAGIs - have more brainpower.
> That doesn't mean that a s
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> On 7/2/07, Tom McCabe <[EMAIL PROTECTED]>
> wrote:
>
> > I think we're getting terms mixed up here. By
> > "values", do you mean the "
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
I think we're getting terms mixed up here. By
"values", do you mean the "ends", the ultimate moral
objectives that the AGI has, things that the AGI
thinks are good across all possible situations?
No, sorry. By "values", I mean something similar
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> I hope that my response to Stathis might further
> elucidate.
Er, okay. I read this email first.
Might I suggest reading an entire post for comprehension /before/
beginni
On 7/1/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> For years I've observed and occasionally
> participated in these
> discussions of humans (however augmented and/or
> organized) vis-à-vis
> volitional superinte
On 7/2/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> While I agree with you in regard to decoupling intelligence and any
> particular goals, this doesn't mean goals can be random or arbitrary.
> To the e
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
"
I am not sure you are capable of following an argument
in a manner that makes it worth my while to continue.
- s"
So, you're saying that I have no idea what I'm talking
about, so therefore you're not going to bother arguing
with me anymore. Thi
On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
If its top level goal is to allow its other goals to vary randomly,
then evolution will favour those AIs which decide to spread and
multiply, perhaps consuming humans in the process. Building an AI like
this would be like building a bomb
On 5/29/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
On 29/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> I. Any instance of rational choice is about an agent acting so as to
> promote its own present values into the future. The agent has a model
> of its re
On 5/28/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
On 28/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> > Before you consider whether killing the machine would be bad, you have to
> > consider whether the machine minds being killed, and how much it minds bei
On 5/27/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote:
> Which got me thinking. It seems reasonable to think that killing a
> human is worse than killing a mouse because a human is more
> intelligent/complex/conscious/...etc...(use what ev
On 5/25/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
On 5/25/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
>
> On the Dangers of Incautious Research and Development
>
> A scientist, slightly insane
> Created a robotic brain
> But the brain, on completion
>
On 5/25/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
On the Dangers of Incautious Research and Development
A scientist, slightly insane
Created a robotic brain
But the brain, on completion
Favored assimilation
His final words: "Damn, what a pain!"
I'll send this before I have my coffee an
On 3/26/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Quedice always had said this: "I cannot make this world a better place.
All is for the best in this most perfect of all possible worlds. And by
world I also mean reality, or any existence, dimension, plane, or even
thing. I cannot make the best
On 3/4/07, Matt Mahoney wrote:
What does the definition of intelligence have to do with AIXI? AIXI is an
optimization problem. The problem is to maximize an accumulated signal in an
unknown environment. AIXI says the solution is to guess the simplest
explanation for past observation (Occam's
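As an illustrative aside (not part of Matt's post): "guess the simplest explanation for past observations" amounts to weighting hypotheses by 2^-length. A minimal Python sketch under that assumption, with a tiny hand-written hypothesis set standing in for the uncomputable space of all programs that AIXI actually quantifies over:

# Sketch: Occam-weighted prediction over a toy hypothesis set.
# The real AIXI prior ranges over all programs and is uncomputable;
# these few entries with assumed description lengths are hypothetical.

def occam_weight(program_length_bits):
    """Shorter explanations get exponentially more prior weight: 2^-L."""
    return 2.0 ** -program_length_bits

# (description length in bits, predicted next observation) -- hypothetical
hypotheses = [
    (5, "0"),
    (12, "1"),
    (20, "0"),
]

# AIXI proper would first discard hypotheses inconsistent with past
# observations; that filtering is omitted here. Predict by summing prior
# weight over the remaining hypotheses for each candidate outcome.
totals = {}
for length_bits, prediction in hypotheses:
    totals[prediction] = totals.get(prediction, 0.0) + occam_weight(length_bits)

print("Occam-weighted prediction:", max(totals, key=totals.get), totals)

The 5-bit hypothesis dominates the sum, which is the Occam's-razor behavior Matt describes.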
On 3/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Is it possible to program any autonomous agent
> > that responds to reinforcement learning (a reward/penalty signal) that
>
Matt Mahoney wrote:
Is it possible to program any autonomous agent
that responds to reinforcement learning (a reward/penalty signal) that does
not act as though its environment were real? How would one test for this
belief?
Exactly.
Of course an agent could certainly claim that its
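A hedged sketch of the kind of agent in question (hypothetical code, not from the thread): the update rule below consumes only the reward signal, so the agent's behavior is identical whether the rewards come from a "real" environment or a simulated one, which is the point of Matt's test question.

# Minimal reward-driven agent (epsilon-greedy over two actions).
# The reward function is a stand-in "environment"; the agent never
# inspects whether that environment is real.
import random

def reward(action):
    """Hypothetical environment: action 1 pays off more often than action 0."""
    return 1.0 if random.random() < (0.8 if action == 1 else 0.3) else 0.0

values = [0.0, 0.0]  # estimated value of each action
counts = [0, 0]

for _ in range(1000):
    # explore 10% of the time, otherwise take the currently best action
    action = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # incremental mean

print("Learned action values:", values)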
On 3/3/07, gts <[EMAIL PROTECTED]> wrote:
Do you believe in what people sometimes call 'the force of logic', Jef? Do you
believe every rational mind is as a matter of definition compelled to
accept the conclusions of sound arguments?
I do. If you do too then we have no disagreement.
If you disagre
On 3/3/07, gts <[EMAIL PROTECTED]> wrote:
On Sat, 03 Mar 2007 09:43:29 -0500, Jef Allbright <[EMAIL PROTECTED]>
wrote:
> In a very general, and thus
> widely applicable sense, both induction and deduction are descriptions
> of methods of organizing information, perfo
On 3/3/07, gts <[EMAIL PROTECTED]> wrote:
On Fri, 02 Mar 2007 21:09:08 -0500, John Ku <[EMAIL PROTECTED]> wrote:
Okay, thanks, I suppose I was coming at this from a different perspective:
personally I take Hume's criticism of induction somewhat seriously and
Carroll's criticism of deduction n
On 3/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> On 3/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> > Second, I used the same reasoning to guess about the nature of the
> universe
> > (assuming it is si
On 3/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Second, I used the same reasoning to guess about the nature of the universe
(assuming it is simulated), and the only thing we know is that shorter
simulation programs are more likely than longer ones. My conclusion was that
bizarre behavior or
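For reference, the "shorter simulation programs are more likely" claim is the standard Solomonoff/Levin universal prior (textbook formulation, not quoted from Matt's post):

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where U is a universal prefix machine, \ell(p) is the length of program p in bits, and the sum runs over all programs whose output begins with x. The shortest consistent programs dominate the sum, which is the sense in which a simpler simulation carries more prior probability.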
On 3/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> Matt -
>
> I think this answers my question to you, at least I think I see where
> you're coming from.
>
> I would say that you have justification for saying t
On 3/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> On 3/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> > What I argue is this: the fact that Occam's Razor holds suggests that the
> > universe is a com
On 3/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
What I argue is this: the fact that Occam's Razor holds suggests that the
universe is a computation.
Matt -
Would you please clarify how/why you think B follows from A in your
preceding statement?
- Jef
Ben Goertzel wrote:
> Hi,
>
>> In regard to your "finally" paragraph, I would speculate that
>> advanced intelligence would tend to converge on a structure of
>> increasing stability feeding on increasing diversity. As the
>> intelligence evolved, a form of natural selection would guide its
>> st
Sorry, I neglected to include my summary statement, now appended below.
- Jef
Jef Allbright wrote:
> Ben Goertzel wrote:
>
>> Finally, it is interesting to speculate regarding how self may differ
>> in future AI systems as opposed to in humans. The relative stability
>>
Ben Goertzel wrote:
> Finally, it is interesting to speculate regarding how self
> may differ in future AI systems as opposed to in humans. The
> relative stability we see in human selves may not exist in AI
> systems that can self-improve and change more fundamentally
> and rapidly than humans c
Brian Atkins wrote:
> I'd like to do a small data gathering project regarding
> producing a Might-Be-Friendly AI (MBFAI). In other words, for
> whatever reason (don't want to go into it again in this
> thread), we assume 100% provability is out of the question
> for now, so we take one step ba