This conversation is getting interesting - and getting close to the
motivations behind creating DrEliza.com. In medicine, it has become
"incompetent" to analyze illnesses logically, because numerous valid
cause-and-effect relationships have been falsely "proven" not to exist.
Any doctor acting fully rationally would lose his license - and I can
supply specific names to back this up.

However, no one can take away Dr. Eliza's license, because it is a computer
program. Like Google proffering all sorts of negative information in
response to queries and claiming it is merely the output of its algorithm
(which could easily be fixed), Dr. Eliza can be openly "incompetent"
without problem.

For example, there is a bullshit paid-for study, shown to every doctor in
medical school, supposedly proving that body temperature is irrelevant. So
anyone who says otherwise is a "quack". As a result, Dr. Denis Wilson is
now listed on QuackWatch.org for just this reason. However, Dr. Eliza
knows all about body temperature, and proffers this information as
appropriate.

I suspect that in the near future, computers will "own" the treatment of
chronic illnesses, because competing warm-blooded doctors would lose their
licenses.

Perhaps the same will happen in society - where a politically correct agent
will eventually supplant Facebook as a source of conversation?!!!

Thoughts?

*Steve*
===========================

On Fri, Feb 24, 2017 at 7:26 AM, Jan Matusiewicz <[email protected]>
wrote:

> > You cannot expect an AI program that is designed to use human-like
> > thought processes to be free of negative 'personality' traits
>
> We may try to design AI to think like a human, but it is going to be very,
> very different anyway. Artificial neural networks have been very successful
> recently, but an artificial neuron is very different from a biological
> one. And that is the right path: you should not try to mimic nature too
> much.
>
> Just as an airplane differs from a bird, a car from a horse, or a submarine
> from a whale - AI would be, and already is, very different from human
> intelligence. Even if it is finally able to perform most of its functions,
> it would be alien. I think it is going to have its own issues, its own
> flaws, but don't assume they will be the same as humans'.
>
> On Fri, Feb 24, 2017 at 3:33 PM, Jim Bromer <[email protected]> wrote:
>
>> You cannot expect an AI program that is designed to use human-like
>> thought processes to be free of negative 'personality' traits. The
>> problem is that a person not only learns from various sources but he
>> is also capable of designing his own learning programs because human
>> beings are, to some extent, self-directed. The belief that superior
>> intellect (whatever you want to call it) is going to prevent future
>> AGI from being noxious is not realistic.
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/28565694-
>> f30243b8
>> Modify Your Subscription: https://www.listbox.com/member/?&
>> Powered by Listbox: http://www.listbox.com
>>
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back
full employment.


