I do suspect that superhumanly intelligent AIs are intrinsically
uncontrollable by humans...
Ben G
On 12/25/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
On 12/22/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I don't consider there is any "correct" language for stuff like this,
> but I believe
On 12/22/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
I don't consider there is any "correct" language for stuff like this,
but I believe my use of "supergoal" is more standard than yours...
It's just that, on this list in particular, when people speak of
"supergoals", they're usually asking wh
Hi Phil,
In the language I am using
* goal A is a subgoal of goal B if achieving goal A is a special case of achieving goal B
* a top-level supergoal of a system is a goal of the system that is not a subgoal of any other goal of the system (to a significant degree)
* a meta-goal is a goal that is speci
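For concreteness, here is one toy way to encode that vocabulary in Python; the Goal class and its fields are invented for illustration and are not Novamente's actual data structures:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    # the goals this goal is a subgoal of, i.e. the goals whose
    # achievement this goal's achievement is a special case of
    parents: List["Goal"] = field(default_factory=list)

    def is_subgoal_of(self, other: "Goal") -> bool:
        return other in self.parents or any(
            p.is_subgoal_of(other) for p in self.parents)

def top_level_supergoals(goals: List[Goal]) -> List[Goal]:
    # a top-level supergoal is a goal that is not (to a significant
    # degree) a subgoal of any other goal of the system
    return [g for g in goals if not g.parents]

# example from later in the thread: "drink" as a subgoal of
# "drink yourself to death"
die = Goal("drink yourself to death")
drink = Goal("drink", parents=[die])
assert drink.is_subgoal_of(die)
assert top_level_supergoals([die, drink]) == [die]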
On 12/7/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
erased along with it. So, e.g. even though you give up your supergoal
of drinking yourself to death, you may involuntarily retain your
subgoal of drinking (even though you started doing it only out of a
desire to drink yourself to death).
I
On 12/7/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Ben Goertzel wrote:
> 1) optimizing the set of subgoals chosen in pursuit of a given set of
> supergoals. This is well-studied in computer science and operations
> research. Not easy computationally or emotionally, but conceptually
> stra
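As a rough illustration of the operations-research flavor of that first item, here is a toy subgoal-selection sketch; the scores, costs, and budget are invented for the example and do not come from Novamente or the thread:

from itertools import combinations

# candidate subgoals: (expected contribution to the supergoals, resource cost)
subgoals = {
    "read_manual":    (0.4, 2),
    "ask_teacher":    (0.5, 1),
    "run_experiment": (0.7, 3),
}
budget = 4

def best_subset(subgoals, budget):
    # exhaustive search; fine for toy sizes, intractable in general
    best, best_value = (), 0.0
    for r in range(1, len(subgoals) + 1):
        for combo in combinations(subgoals, r):
            cost = sum(subgoals[s][1] for s in combo)
            value = sum(subgoals[s][0] for s in combo)
            if cost <= budget and value > best_value:
                best, best_value = combo, value
    return best, best_value

print(best_subset(subgoals, budget))  # -> (('ask_teacher', 'run_experiment'), 1.2)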
Yeah, I am trying to be careful to skirt the bounds of many of the fields of
AI, and not get stuck in the full complexity of any of them :} Tightrope to
walk, but I believe that if you have an AGI that can, at a minimum, communicate
effectively, then it will be OK.
And this of course does not ex
Humans give subtler rewards to each other (not just one-dimensional
rewards) because we share a complex emotional/social system.
Potentially, AGIs could learn to accept complex, nuanced rewards from
humans via interacting with them in a sim world for a while, in a
variety of situations...
This i
I think that separating language learning from commonsense learning as
you're doing is a possibly viable option, but a tricky one, as in
humans the two kinds of learning are tightly bound together...
ben g
On 12/8/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
Well, badly worded then. I can't
Well, badly worded then. I can't draw a direct distinction from your system
to mine in a comprehensible way.
I intend to have an AGI that starts minimalistically with nothing, in as fully
detailed a virtual environment as possible, and then accord it a great deal of
freedom in the choices it
>Right now, the only representationally explicit goal is "please the
>teacher." Learning/creating information is as of now left as an
>implicit goal. But once the system has reached Piaget's formal stage,
>it will be useful to make learning/creating information a reflectively
>(and possibly repre
I intend to start at a bit higher age level, that of a teen / reduced-knowledge
adult,
That is not possible in an approach that, like Novamente, is primarily
experiential-learning-based...
-- Ben
On 12/8/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
What are the "meta-goal" properties defined there?
For example:
-- have as few distinct supergoals as possible
-- keep the supergoals as simple as possible
-- avoid logical contradiction between supergoals
-- minimize pragmatic, probabilisti
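A toy sketch of how a goal set might be scored against properties like these; the metrics and the contradiction relation below are simplified stand-ins of my own, not Novamente machinery:

from typing import List, Set, Tuple

def metagoal_report(supergoals: List[str],
                    contradictions: Set[Tuple[str, str]]) -> dict:
    return {
        # "have as few distinct supergoals as possible"
        "num_supergoals": len(set(supergoals)),
        # "keep the supergoals as simple as possible" -- crudely
        # approximated here by description length
        "mean_description_length":
            sum(map(len, supergoals)) / max(len(supergoals), 1),
        # "avoid logical contradiction between supergoals"
        "contradictory_pairs": [
            (a, b) for (a, b) in contradictions
            if a in supergoals and b in supergoals
        ],
    }

print(metagoal_report(
    ["please the teacher", "learn new information", "never interact with humans"],
    contradictions={("please the teacher", "never interact with humans")},
))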
Richard,
Can you please define 'diffuse constraint driven system' again, in technical
terms, and then in really simple terms for me?
I'm not sure exactly how that would work.
It seems to me that there is mainly an underlying motivational system that
creates the goal-stack system by taking explicit ou
What are the "meta-goal" properties defined there?
Are you using the same simple reward mechanism for 'learn and create new
information'?
I intend to start at a bit higher age level, that of a teen / reduced-knowledge adult,
and believe the motivational systems (though dang hard) are very important to a
Initially, the Novamente system's motivations will be
-- please its human teachers
-- make sure its goal system maintains certain desirable "meta-goal" properties
-- learn and create new information
Designing the right initial goal system for the "representationally
explicit" portion of the "ref
Ok,
One more problem I have with goals and autonomous AGI is that, in humans, it
appears we really have two major motivational factors: physiological needs and
personal 'likes'.
If you are working on an AGI that will truly be autonomous, what are its base
motivations? Most AGIs will have no
(2) the "supergoals vs. subgoals" issue --- this is where I disagree
with what you said. Even though you mentioned topics like "goal
alienation", you still suggest that to a large extent it is the
"supergoals" that determine the system's goal-oriented activities,
while I believe the system's behav
On 12/7/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Pei,
As usual, comparing my views to yours reveals subtle differences in terminology!
It surely does, though this time there seems to be more than terminology.
There are two issues:
(1) the "implicit goals vs. explicit goals" issue --- we d
Pei,
As usual, comparing my views to yours reveals subtle differences in terminology!
I can see now that my language of implicit versus explicit goals is
confusing in a non-Novamente context, and actually even in a Novamente
context. Let me try to rephrase the distinction
IMPLICIT GOAL: a func
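One rough, interpretive reading of that distinction, sketched as code; the class and function below are my own stand-ins, not Ben's definitions (which are truncated above):

from typing import Callable, List

class ExplicitGoal:
    """a goal the system explicitly represents and can reason about"""
    def __init__(self, description: str,
                 satisfaction: Callable[[dict], float]):
        self.description = description
        self.satisfaction = satisfaction   # maps a system state to [0, 1]

def implicit_goal_trend(history: List[dict],
                        quantity: Callable[[dict], float]) -> float:
    # an "implicit goal" treated as just a quantity the system's
    # behavior tends to increase over time, whether or not it is
    # represented anywhere in the system
    values = [quantity(state) for state in history]
    return values[-1] - values[0] if len(values) > 1 else 0.0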
Ben Goertzel wrote:
Hi Richard,
Once again, I have to say that this characterization ignores the
distinctions I have been making between "goal-stack" (GS) systems and
"diffuse motivational constraint" (DMC) systems. As such, it only
addresses one set of possibilities for how to drive the behav
I believe that the human mind incorporates **both** a set of goal
stacks (mainly useful in deliberative thought), *and* a major role for
diffuse motivational constraints (guiding most mainly-unconscious
thought). I suggest that functional AGI systems will have to do so,
also.
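A toy sketch of how the two could coexist in action selection, with a deliberative goal-stack term blended with a diffuse weighted-constraint term; it is entirely illustrative, and neither the weighting nor the names come from Novamente or Richard's DMC proposal:

from typing import Callable, List, Tuple

def score_action(action: str,
                 goal_stack: List[Tuple[str, Callable[[str], float]]],
                 constraints: List[Tuple[float, Callable[[str], float]]],
                 goal_weight: float = 0.6) -> float:
    # (a) deliberative term: progress toward the goal on top of the stack
    _, progress = goal_stack[-1]
    deliberative = progress(action)
    # (b) diffuse term: weighted sum of many soft motivational constraints
    diffuse = sum(w * c(action) for w, c in constraints)
    return goal_weight * deliberative + (1 - goal_weight) * diffuse

def choose(actions: List[str], goal_stack, constraints) -> str:
    return max(actions,
               key=lambda a: score_action(a, goal_stack, constraints))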
Also, I believe th
Hi Richard,
Once again, I have to say that this characterization ignores the
distinctions I have been making between "goal-stack" (GS) systems and
"diffuse motivational constraint" (DMC) systems. As such, it only
addresses one set of possibilities for how to drive the behavior of an AGI.
And
Ben,
Very nice --- we do need to approach this topic in a systematic manner.
In the following, I'll first make some position statements, then
comment on your email.
Position statements:
(1) The system's behaviors are driven by its existing tasks/goals.
(2) At any given time, there are multiple
Ben Goertzel wrote:
The topic of the relation between rationality and goals came up on the
extropy-chat list recently, and I wrote a long post about it, which I
think is also relevant to some recent discussions on this list...
-- Ben
Once again, I have to say that this characterization ignores
Another aspect I have had to handle is the different temporal aspects of
goals/states, like immediate gains vs. short-term and long-term goals, and
how they can coexist together. This is difficult to grasp as well.
In Novamente, this is dealt with by having goals explicitly refer to time-scope.
That sounds good so far.
Now how can we program all of that :}
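As one very partial answer to "how can we program that", here is a minimal sketch of goals that carry an explicit time-scope and get weighted by urgency; the field names and the urgency rule are invented, not Novamente internals:

from dataclasses import dataclass

@dataclass
class TimedGoal:
    name: str
    importance: float      # long-run importance, 0..1
    horizon_hours: float   # the time-scope the goal explicitly refers to

def urgency(goal: TimedGoal, hours_elapsed: float) -> float:
    # shorter-horizon goals get weighted up as their deadline nears,
    # so immediate gains and long-term goals can coexist in one queue
    remaining = max(goal.horizon_hours - hours_elapsed, 1e-6)
    return goal.importance / remaining

goals = [
    TimedGoal("grab the red ball now", importance=0.3, horizon_hours=0.1),
    TimedGoal("learn the teacher's language", importance=0.9, horizon_hours=1000),
]
print(max(goals, key=lambda g: urgency(g, hours_elapsed=0.0)).name)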
Another aspect I have had to handle is the different temporal aspects of
goals/states, like immediate gains vs. short-term and long-term goals, and how
they can coexist together. This is difficult to grasp as well.
Your baby AGI c