On 8/5/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
> > Now, figuring out all the heuristical NTV / symbolic qualifier's update rules, such that an AGI will always be internally consistent, and provably increasing in accuracy, is a very non-trivial task.
>
> Well indeed it is of course impossible, no matter what techniques you use. I don't see that as a reason to not use numbers where numbers are the appropriate tool, though.
 
Why is that impossible?  It doesn't violate Gödel's theorem, if that's what you have in mind.  In practice, inconsistencies may exist because the knowledge base is huge.  So let me not insist that consistency is required.
 
Maybe a better requirement is that the numbers should always be meaningful under all circumstances.  For one thing, this requires us to keep track of their precision.  There should be an update rule for the decrease in precision when a high-precision prior is mixed with a low-precision prior.  We need to know how many significant figures are in the output numbers.
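As a toy illustration of such an update rule (my own sketch, not a standard one): if each prior is represented as a Beta distribution whose pseudo-count stands for its precision, then pooling a high-precision prior with a low-precision one yields a mixture whose variance exceeds either component's whenever their means disagree -- i.e. the combined number carries fewer significant figures than the better input did.

```python
# Toy sketch: precision loss when pooling two Beta-distributed priors.
# "Precision" here is the inverse of the variance; the pseudo-count a + b
# plays the role of how much evidence backs an estimate.

def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    m = a / (a + b)
    v = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return m, v

def pooled_mean_var(w, a1, b1, a2, b2):
    """Mean/variance of the mixture w*Beta(a1,b1) + (1-w)*Beta(a2,b2)."""
    m1, v1 = beta_mean_var(a1, b1)
    m2, v2 = beta_mean_var(a2, b2)
    m = w * m1 + (1 - w) * m2
    # Law of total variance: within-component variance + between-component spread
    v = w * v1 + (1 - w) * v2 + w * (1 - w) * (m1 - m2) ** 2
    return m, v

# High-precision prior: Beta(90, 10) -- mean 0.9, ~100 pseudo-observations.
# Low-precision prior:  Beta(2, 8)   -- mean 0.2, ~10 pseudo-observations.
m, v = pooled_mean_var(0.5, 90, 10, 2, 8)
# The pooled variance v is larger than either component's variance.
```

The point of the sketch is only that any such rule must make precision a first-class quantity that the inference machinery carries along, the way it carries the probability itself.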
 
Secondly we need to ensure that the update rules for numbers always make sense before and after inferences.
 
Numbers should be used when they are appropriate, but that may not occur frequently.  For example:
 
1.  10% of males are homosexual (given as physical observation).
2.  The probability of a person, John, being homosexual is thus 0.1.
3.  Additional fact:  John looks up a woman's skirt.  (Sorry, I use this example because it's realistic to me;  using unreal examples often leads me to bad thinking.)
4.  AGI should conclude that John is very likely heterosexual.
 
A strictly numerical approach would require knowing the probability of
a.  heterosexual male looking up a woman's skirt, p1
b.  homosexual male looking up a woman's skirt, p2
I think these numbers are hard to derive.  (Doing a physical observation of a large number of males is of course impractical).
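For concreteness, this is what the strictly numerical update would compute if we simply guessed values for p1 and p2 (the 0.5 and 0.05 below are pure assumptions on my part, which is exactly the problem):

```python
def p_hetero_given_peek(prior_hetero, p1, p2):
    """Bayes' rule: P(heterosexual | peeked) from the prior and the
    two class-conditional likelihoods p1, p2."""
    numerator = p1 * prior_hetero
    return numerator / (numerator + p2 * (1 - prior_hetero))

# Prior: 90% heterosexual (from fact 1).
# p1 = 0.5, p2 = 0.05 are made-up likelihoods, for illustration only.
posterior = p_hetero_given_peek(0.9, 0.5, 0.05)   # ≈ 0.989
```

The formula itself is unproblematic; the trouble is that nothing in the knowledge base licenses any particular choice of p1 and p2.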
 
The best inference I can draw is to conclude that John is heterosexual with p = 1, because p1 and p2 are unknown, and all we know is that p1 >> p2.
 
As you can see, the precision of p decreases to 1 bit after inference.  This may happen to the majority of probabilities in the AGI -- they become simple facts.
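One way to keep the output number meaningful without committing to point values for p1 and p2 is interval arithmetic: propagate bounds instead of points, and read the remaining precision off the width of the resulting interval.  (The bounds below are illustrative assumptions standing in for "p1 >> p2".)

```python
def posterior_bounds(prior, p1_lo, p1_hi, p2_lo, p2_hi):
    """Worst-case bounds on P(heterosexual | peeked) when p1 and p2 are
    only known to lie in intervals.  The posterior is monotone increasing
    in p1 and decreasing in p2, so the extremes sit at interval endpoints."""
    lo = (p1_lo * prior) / (p1_lo * prior + p2_hi * (1 - prior))
    hi = (p1_hi * prior) / (p1_hi * prior + p2_lo * (1 - prior))
    return lo, hi

# Suppose all we can say is p1 ∈ [0.3, 0.8] and p2 ∈ [0.0, 0.1]:
lo, hi = posterior_bounds(0.9, 0.3, 0.8, 0.0, 0.1)   # → (≈0.964, 1.0)
```

Even with very loose assumptions about p1 and p2, the interval stays narrow here, so the conclusion "very likely heterosexual" survives -- but the interval's width, not a fake point value, is what tells us how many bits of precision remain.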
 
One possible way to improve this conclusion is to derive p1 and p2 from some known probabilities.  But how?
 
YKY
