On 12/04/2015 11:24 AM, Jim Bromer wrote:
> If meta-data can be used to invoke rules, and rules (systems of rules
> and conditional data) can be learned or acquired (perhaps implicitly),
> then the program would have to have a way to govern the actions the
> program might take. One way might be through the use of goals. But I
> would want my program to be able to derive or develop some of its own
> goals.
The problem I see with goals is the way we tend to think of them. We
humans set goals, change goals, and dream of goals without knowing much
about how we will make them happen. We acquire ideas about reaching
a goal and eventually take steps toward it. Fine, but we have also
already developed strategies for pursuit. The AGI unit is far from
developing much of anything, let alone a general strategy for
reaching goals.
In my thinking about AGI I rarely use the term goal; rather, I think of
governing actions in terms of benefit. Benefit ties things together
for me. If you lived in a country with really poor people, you would
have very little trouble coming up with ways to benefit the poor. And
so it might be with the fledgling AGI. The wannabe AGI is
"functionality" poor and needs more methods to increase the chance
that it will be able to do something beneficial. The AGI is a long way
from having a world concept that allows it to assess what is
beneficial to others.
Values (rules about values) come into play as the AGI picks the next
thing to do. But we already know that an early AGI doesn't have a
"values" structure to refer to. Programming one is really not much of
an option: it is too complex to "calculate" what the value of something
is. To test the validity of my claim that it is too complex to
calculate, try it. Imagine that you are writing this into code!
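
To see why, take the challenge literally for a moment. A stub of the
function we would have to write might look like this (in Python; every
parameter name here is my own invention, and each one hides an
unsolved problem):

    # Hypothetical sketch of "calculate the value of something."
    # The parameters are assumptions, and each is itself an open
    # representation problem for an early AGI.
    def value_of(action, world_state, beneficiaries, time_horizon):
        """Return a number scoring how beneficial `action` would be."""
        # To compute this honestly we would need, at minimum:
        #   - a world model rich enough to predict the action's effects,
        #   - a model of what each beneficiary actually wants,
        #   - a way to trade those wants off over the time horizon.
        # An early AGI has none of these, which is the point.
        raise NotImplementedError("too complex to calculate")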
What's the alternative to calculating a value factor? Adoption (my
preferred term).
What I mean by Adoption is acquiring a "behavior" that the AGI could
perform and, along with the instructions to implement the behavior,
also acquiring the data that tells the AGI when the behavior applies
and how much it matters: When is this behavior to be used? What
combination of triggers invokes it? And how significant is this
behavior, in terms of the priority with which it should be executed?
In my design concept, this package of information is referred to as the
"opportunity." I like the term opportunity because we relate to it as
human beings. People can share opportunities with each other. In
describing an opportunity, we state when it can be acted on, and we
give a rough idea of why it is considered important, or at least a
recommendation. It is the recommendation that is of value to us if we
ever come to a situation where the opportunity is an option for the
moment.
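
To make the shape of the package concrete, here is a rough Python
sketch. The field names (behavior, triggers, recommendation, priority)
are just one reading of the description above, not a fixed design:

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Opportunity:
        # Instructions to implement the behavior the AGI could perform.
        behavior: Callable[[], None]
        # The "when": a predicate over the current situation that says
        # whether the triggering conditions are met right now.
        triggers: Callable[[Dict], bool]
        # Why this is considered important, or at least a recommendation.
        recommendation: str
        # How significant the behavior is, as an execution priority.
        priority: float = 0.0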
If the AGI had a large database of opportunities available to it,
wouldn't that be smart! It could probably produce some benefit.
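
How the database might be consulted is correspondingly simple, at
least in caricature. Continuing the sketch above (again illustrative,
not a committed design):

    def next_action(opportunities, situation):
        # Keep only the opportunities whose triggers match the moment.
        live = [o for o in opportunities if o.triggers(situation)]
        if not live:
            return None
        # The recommendation's priority breaks ties between live options.
        return max(live, key=lambda o: o.priority)

    # The AGI's main loop would call next_action over its adopted
    # opportunities and execute the winner's behavior:
    #   chosen = next_action(database, current_situation)
    #   if chosen is not None:
    #       chosen.behavior()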
Stan