This seems to be dealt with at the end of Cox's book as well...

Interesting. I'm tempted to read Cox's book so that you and I can discuss his ideas in more detail here on your list. (I worry that my enthusiasm for this subject is only annoying people on that other discussion list.) Is that something you would like to do? Please let me know!

I'm copying Jef and Stu here, as this is not the first time the principle of maximum entropy has come up in the dialogue. (I don't want you guys to think your thoughts on this subject went ignored or unanswered.)


Discussing Cox's work is on-topic for this list...

Let me tell you about one research project that interests me regarding Cox and subjective probability:

****
Justifying Probability Theory as a Foundation for Cognition.

Cox's axioms and de Finetti's subjective probability approach, both developed in the first part of the last century, give mathematical arguments for why probability theory is the optimal way to reason under uncertainty. However, given limited computational resources, AI systems cannot always afford to reason optimally. It is thus interesting to ask how Cox's or de Finetti's ideas can be extended to the resource-bounded case. Can one show that, among all systems with a fixed amount of computational resources, the most intelligent one will be the one whose reasoning most closely approximates probability theory?
****
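For concreteness, here is a rough sketch of the Cox argument as I recall it. I'm glossing over the technical regularity conditions, so take this as an outline rather than a precise statement of the theorem:

- Represent the plausibility of a statement $A$ given background knowledge $C$ by a single real number, written $(A|C)$.
- Require that the plausibility of a conjunction depend only on the plausibilities of its parts: $(AB|C) = F[(A|C), (B|AC)]$ for some fixed function $F$.
- Because conjunction is associative, $F$ must satisfy $F[x, F[y,z]] = F[F[x,y], z]$; under mild smoothness assumptions, every solution is equivalent, after a monotone rescaling $p = w(\cdot)$, to the product rule $p(AB|C) = p(A|C)\,p(B|AC)$.
- Requiring that the plausibility of $\lnot A$ be a fixed function of the plausibility of $A$ then forces, after the same rescaling, the sum rule $p(A|C) + p(\lnot A|C) = 1$.

So any reasoner that satisfies these desiderata exactly is, up to rescaling, doing probability theory. The research question above is whether any comparable result holds when the desiderata can only be satisfied approximately.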

I don't know of any work explicitly addressing this sort of issue; do you?

-- Ben
