On Tue, Jan 29, 2013 at 2:42 PM, Stan Nilsen <[email protected]> wrote:
> On 01/29/2013 10:03 AM, Matt Mahoney wrote:
>>
>> What is so hard about *not* programming a robot to have human
>> emotions? It seems like a much easier problem to me if you don't
>> program it to not want to do what you tell it.
>
> Seems like an odd view from you Matt.  You often speak about "knowing what
> you want..." and the extensive surveillance data.  Bottom line is that we
> know what we want by our emotional acceptance.  If these "know it all"
> servants will know us, they will need to be experts on emotion.

That's not my view. It is a requirement of AGI that it be able to
model human emotions. It must be able to recognize when a person is
happy or mad. It must be able to predict what actions will make you
happy or mad. It must be able to predict your actions given that you
are happy or mad. It is not a requirement for AGI that it ever be
happy or mad.
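
To make the distinction concrete, here is a minimal sketch in Python (my
own illustration, not a claim about how any existing system works) of a
program that recognizes a user's mood from text and uses it to predict a
reaction, without the program itself having any mood at all:

# Hypothetical word lists; a real system would learn these from data.
HAPPY_WORDS = {"great", "thanks", "love", "perfect"}
MAD_WORDS = {"broken", "hate", "useless", "again"}

def recognize_mood(message):
    """Guess whether the user is happy or mad from the words they use."""
    words = set(message.lower().split())
    return "happy" if len(words & HAPPY_WORDS) >= len(words & MAD_WORDS) else "mad"

def predict_reaction(mood, action):
    """Predict how the user will react to a proposed action, given their mood."""
    if mood == "mad" and action == "interrupt with a suggestion":
        return "mad"
    return "happy"

mood = recognize_mood("this useless thing is broken again")
print(mood)                                                   # mad
print(predict_reaction(mood, "interrupt with a suggestion"))  # mad

The program treats "happy" and "mad" as labels it can recognize and
predict, but nothing in it is ever happy or mad.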

> I cringe at the thought of a bunch of critters running around that think
> they know what I want.  Nothing frustrates like a software program that goes
> overboard on anticipating your every move - give me a little dumber program
> and I won't have to fight it.

When a program incorrectly guesses what you want, you should be able to
correct it, and it should modify its behavior accordingly.
There are already programs that do this. Gmail will predict which
incoming mail I am interested in reading. It is right about 99% of the
time. When it makes a mistake, I click "spam" or "not spam" and its
accuracy improves.
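
The mechanism behind that kind of correction can be as simple as online
learning. Here is a rough sketch in Python (my own illustration, not
Gmail's actual algorithm) of a toy naive Bayes filter that updates its
word counts every time the user clicks "spam" or "not spam":

import math
from collections import defaultdict

class SpamFilter:
    """Toy online naive Bayes filter that learns from user corrections."""

    def __init__(self):
        # Word and message counts per class, with add-one smoothing built in.
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.msg_counts = {"spam": 1, "ham": 1}

    def train(self, message, label):
        # Called when the user clicks "spam" (label="spam") or "not spam" ("ham").
        self.msg_counts[label] += 1
        for word in message.lower().split():
            self.word_counts[label][word] += 1

    def is_spam(self, message):
        # Score the message under each class and pick the more likely one.
        total = sum(self.msg_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            vocab = sum(self.word_counts[label].values()) + 1
            score = math.log(self.msg_counts[label] / total)
            for word in message.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / vocab)
            scores[label] = score
        return scores["spam"] > scores["ham"]

f = SpamFilter()
f.train("cheap pills buy now", "spam")      # user clicked "spam"
f.train("meeting notes attached", "ham")    # user clicked "not spam"
print(f.is_spam("buy cheap pills now"))     # True

Every click is one more training example, which is why the accuracy keeps
improving the longer you correct it.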

--
-- Matt Mahoney, [email protected]

