In the case of an AI (presumably a robot) negotiating around humans, I
expect this would be done quite differently from the way humans do it.
In the human case, the circuitry controlling walking direction and
speed is substantially the same in both individuals approaching each
other, so a prediction about what will happen next can be made fairly
simply by applying some vector to your own control system.  A robot
isn't equipped with that near-identical circuitry, so it would need to
do things in a more cognitive manner - for example, "I can see an
object of a certain size and estimated mass moving in a particular
direction, so by applying the laws of motion I can estimate where it
will be at some time in the near future" (something like the sketch
below).  I doubt that humans operate in this highly cognitive,
physics-based way in such scenarios; instead they use a crude hack
that works because both control systems are near identical.
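
Roughly what I mean by the cognitive version, as a minimal sketch in
Python - the constant-velocity model, the function names, and the
thresholds are all just illustrative assumptions on my part, not a
description of any actual system:

    # Sketch of the "cognitive" prediction a robot might make, assuming the
    # oncoming person keeps a constant 2-D walking velocity.  Names and
    # thresholds are illustrative only.

    def predict_position(pos, vel, t):
        """Position of a point after t seconds at constant velocity."""
        return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

    def will_collide(robot_pos, robot_vel, person_pos, person_vel,
                     horizon=3.0, step=0.1, clearance=0.6):
        """True if the robot and the person come within `clearance` metres
        of each other at any time within the next `horizon` seconds."""
        t = 0.0
        while t <= horizon:
            rx, ry = predict_position(robot_pos, robot_vel, t)
            px, py = predict_position(person_pos, person_vel, t)
            if ((rx - px) ** 2 + (ry - py) ** 2) ** 0.5 < clearance:
                return True
            t += step
        return False

    # Example: two walkers approaching each other along a narrow hallway.
    if will_collide((0.0, 0.0), (1.2, 0.0), (5.0, 0.1), (-1.3, 0.0)):
        print("predicted conflict - sidestep or slow down")

The human shortcut skips all of this explicit modelling, because the
other person's controller is close enough to your own to be simulated
with your own machinery.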


On 13/10/2007, Tim Freeman <[EMAIL PROTECTED]> wrote:
> From: Derek Zahn <[EMAIL PROTECTED]>
> >What do you suggest is a rational approach for AGI research to follow?
>
> That's a very broad question.  If I narrow it to something relevant to
> the recent conversation, I get:
>
>         What do you suggest is a rational approach to preventing AIs
>         from doing something grossly different from what is desired?
>
> When an AI is going to make a change, at some point an analysis has to
> be done that estimates the consequences of the change and takes the
> social context into account to figure out whether the change is likely
> to have undesired consequences.  (The social context is relevant
> because it determines the meaning of "undesired".)  Right now that
> analysis is generally done by humans, but we'll have to automate it
> when either there are too many changes happening, or the consequences
> are not something humans can accurately estimate.
>
> You know when you're walking down a narrow hallway, and you see
> someone else coming toward you going the opposite way, and you do this
> little nonverbal negotiation to figure out how you get around each
> other?  Humans fairly reliably infer what other humans want from their
> behavior, and they routinely act upon these inferences.  I'd like an
> AI to figure out what entities in the environment want from their
> behavior and react appropriately, so it would be able to figure out
> this nonverbal negotiation in the hallway on its own.
>
> If the same sort of reasoning can motivate more sophisticated
> behavior, then so far as I can tell we would have a solution to the
> Friendly AI problem.
>
> Maybe someone has already done this.
>
> I have a theoretical solution that's partially written up.  I'll have
> more details later.
>
> --
> Tim Freeman               http://www.fungible.com           [EMAIL PROTECTED]
>
