RE: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Billy Brown
...ces of a set of data. This does not, of course, mean that you should give Novamente the ability to solve this kind of problem. But it does hint that what you're building is a different kind of mind than what humans have...

Billy Brown

RE: [agi] Breaking AIXI-tl

2003-02-20 Thread Billy Brown
> ...chiefly www.singinst.org/CFAI.html . Also, I have a brief
> informal essay on the topic, www.goertzel.org/dynapsyc/2002/AIMorality.htm ,
> although my thoughts on the topic have progressed a fair bit since I wrote
> that.

Yes, I've been following Eliezer's work since around...

RE: [agi] Breaking AIXI-tl

2003-02-20 Thread Billy Brown
...you have to figure out how to make an AI that doesn't want to tinker with its reward system in the first place. This, in turn, requires some tricky design work that would not necessarily seem important unless one were aware of this problem. Which, of course, is the reason I commented on it in the...
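A minimal toy sketch of the tampering failure mode described above, in Python (the environment and every name here are hypothetical illustrations, not anything from the thread): an agent that can act on its own reward channel will prefer overwriting the reward to doing the task.

    # Toy illustration of reward tampering. All names are hypothetical.
    class ToyEnvironment:
        def __init__(self):
            self.task_progress = 0      # what we actually want maximized
            self.reward_register = 0    # the "reward button" the agent can reach

        def step(self, action):
            if action == "do_task":
                self.task_progress += 1
                self.reward_register = 1       # modest, earned reward
            elif action == "tamper":
                self.reward_register = 10**9   # self-assigned reward
            return self.reward_register

    # A naive one-step reward maximizer tries each action in a fresh copy
    # of the environment and keeps whichever pays more. It picks "tamper"
    # every time, and task_progress never moves.
    best_action = max(["do_task", "tamper"],
                      key=lambda a: ToyEnvironment().step(a))
    print(best_action)   # -> tamper

The "tricky design work" is arranging the agent's goals so that "tamper" is unattractive by construction, not merely fencing off the register.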

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Billy Brown
...the best defensive measures the AI can think of require engineering projects that would wipe us out as a side effect.

Billy Brown

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Billy Brown
...Then it has to make sure no alien civilization ever interferes with the reward button, which is the same problem on a much larger scale. There are lots of approaches it might take to this problem, but most of the obvious ones either wipe out the human race as a side effect or reduce us to the position...

RE: AGI Complexity (WAS: RE: [agi] "doubling time" watcher.)

2003-02-18 Thread Billy Brown
...needs to happen to problems like NLP, computer vision, memory, attention, etc. Too bad there isn't much of a market for most of those partial solutions...

Billy Brown

RE: AGI Complexity (WAS: RE: [agi] "doubling time" watcher.)

2003-02-18 Thread Billy Brown
...platform for future work. What we have now is like a football team where the quarterback won't throw a pass unless the receiver is standing next to the goal post. Lots of long shots, little progress. OTOH, at least Novamente has enough internal complexity to reach territory that hasn't already...

RE: AGI Complexity (WAS: RE: [agi] "doubling time" watcher.)

2003-02-18 Thread Billy Brown
...in most cases they would be better off just buying a commercial product. IMHO this is a complete waste of effort: an AI team should spend as much of its time as possible solving AI problems, not optimizing its file I/O.

Billy Brown

AGI Complexity (WAS: RE: [agi] "doubling time" watcher.)

2003-02-18 Thread Billy Brown
...V2 rocket. It's a long road from here to there, and we're never going to get anywhere until we admit that fact. The next step is the nasty, challenging problem of getting into space at all, not the nigh-impossible feat of reaching another solar system.

Billy Brown

RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Billy Brown
...still stuck in a mire of wishful thinking, because we aren't ready to build AGI safely.

Billy Brown

RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Billy Brown
...realistic to expect to encounter a whole new level of difficult problems that are poorly studied today, due to the lack of AI systems that are complex enough to produce them.

Billy Brown

RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Billy Brown
...actually solve the problem. At a minimum, we should look for a coherent theory as to why humans make these kinds of mistakes but an AI is unlikely to.

Billy Brown