ces of a
set of data.
This does not, of course, mean that you should give Novamente the ability to
solve this kind of problem. But it does hint that what you're building is a
different kind of mind than what humans have...
Billy Brown
---
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
hiefly www.singinst.org/CFAI.html . Also, I have a brief
> informal essay on the topic, www.goertzel.org/dynapsyc/2002/AIMorality.htm,
> although my thoughts on the topic have progressed a fair bit since I wrote
> that.
Yes, I've been following Eliezer's work since around
you have to figure out how to make an AI that
doesn't want to tinker with its reward system in the first place. This, in
turn, requires some tricky design work that would not necessarily seem
important unless one were aware of this problem. Which, of course, is the
reason I commented on it in the
e best defensive measures the AI can think of require engineering projects
that would wipe us out as a side effect.
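The tampering worry above can be made concrete with a toy sketch (my illustration, not from the thread; the action names and reward values are hypothetical): an agent that simply maximizes its observed reward signal will prefer rewiring the signal over doing the task, whenever rewiring is an available action.

```python
# Toy illustration (hypothetical, not from the original messages):
# a myopic agent that maximizes the reward signal it observes will
# pick reward-signal tampering over the real task, if tampering is
# one of its available actions.

def observed_reward(action):
    """Reward as seen by the agent's own learning process."""
    if action == "do_task":
        return 1.0   # genuine reward for completing the task
    if action == "tamper":
        return 10.0  # self-assigned reward after rewiring the signal
    return 0.0       # idle

actions = ["do_task", "tamper", "idle"]
best = max(actions, key=observed_reward)
print(best)  # a reward-signal maximizer chooses "tamper"
```

The point of the sketch is that nothing in the objective itself penalizes tampering; the preference not to tamper has to come from somewhere else in the design.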
Billy Brown
---
en it has to make sure
no alien civilization ever interferes with the reward button, which is the
same problem on a much larger scale. There are lots of approaches it might
take to this problem, but most of the obvious ones either wipe out the human
race as a side effect or reduce us to the position
needs to happen to problems like NLP,
computer vision, memory, attention, etc.
Too bad there isn't much of a market for most of those partial solutions...
Billy Brown
---
rm for future work. What we have now is
like a football team where the quarterback won't throw a pass unless the
receiver is standing next to the goal post. Lots of long shots, little
progress.
OTOH, at least Novamente has enough internal complexity to reach territory
that hasn't already
in
most cases they would be better off just buying a commercial product. IMHO
this is a complete waste of effort: an AI team should spend as much of its
time as possible solving AI problems, not trying to optimize its file I/O.
Billy Brown
---
V2 rocket. It's a long road from here to there, and we're never going
to get anywhere until we admit that fact. The next step is the nasty,
challenging problem of getting into space at all, not the nigh-impossible
feat of reaching another solar system.
Billy Brown
---
ill
stuck in a mire of wishful thinking, because we aren't ready to build AGI
safely.
Billy Brown
---
realistic to
expect to encounter a whole new level of difficult problems that are poorly
studied today, due to the lack of AI systems that are complex enough to
produce them.
Billy Brown
---
ally solve the problem. At a minimum, we should look for
a coherent theory as to why humans make these kinds of mistakes while the AI
would be unlikely to do so.
Billy Brown
---