On Tue, Jan 29, 2013 at 3:48 AM, Piaget Modeler
<[email protected]> wrote:
> The other question is what happens when some warbots (like Big Dog, or some
> of the ones with armaments) and the aerial or undersea drones become
> sufficiently "sentient" (or intelligent)? Perhaps via (inadvertent or
> clandestine) software upgrades. That's the real "Oh sh*t" moment.
>
> What then?

You still have not answered my question. How would you know if Big Dog
was sentient?

Do you think that the only way we can solve hard problems like
language, vision, robotics, and predicting human behavior is for the
algorithm to also be constrained by a model of human emotions? None of
the partial solutions we have to these problems today needs any such
constraint.

What is so hard about *not* programming a robot to have human
emotions? It seems like a much easier problem to me if you simply
never program it to want to disobey you.

--
-- Matt Mahoney, [email protected]

