In a closed model, every statement is either true or false. In an open model, 
every statement is either true or uncertain. In reality, all statements are 
uncertain, but we have a means to assign them probabilities (not necessarily 
accurate probabilities).

A closed model is unrealistic, but an open model is even more unrealistic 
because you lack a means of assigning likelihoods to statements like "the sun 
will rise tomorrow" or "the world will end tomorrow". You absolutely must have 
a means of guessing probabilities to do anything at all in the real world.
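
One conventional way to get such a guess is Laplace's rule of succession, 
which never assigns probability exactly 0 or 1. A minimal sketch in Python 
(the sunrise count below is an arbitrary illustration, not a measured figure):

    def laplace_rule(successes, trials):
        # Laplace's rule of succession: estimated probability that the
        # next trial succeeds, given `successes` out of `trials` so far.
        return (successes + 1) / (trials + 2)

    # Arbitrary illustration: 10,000 observed sunrises, none missed.
    print(laplace_rule(10_000, 10_000))   # ~0.9999, never exactly 1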


-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 9/4/08, Abram Demski <[EMAIL PROTECTED]> wrote:

> From: Abram Demski <[EMAIL PROTECTED]>
> Subject: [agi] open models, closed models, priors
> To: agi@v2.listbox.com
> Date: Thursday, September 4, 2008, 2:19 PM
> A closed model is one that is interpreted as representing all truths
> about that which is modeled. An open model is instead interpreted as
> making a specific set of assertions, and leaving the rest undecided.
> Formally, we might say that a closed model is interpreted to include
> all of the truths, so that any other statements are false. This is
> also known as the closed-world assumption.
> 
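To make the two readings concrete, here is a toy sketch in Python (the facts 
and predicates are invented for illustration): the same set of assertions 
answers queries differently under the open reading and under the 
closed-world assumption.

    # The model is just a set of asserted facts (predicate, subject).
    facts = {("bird", "tweety"), ("penguin", "opus")}

    def query_open(fact):
        # Open reading: an unasserted statement is left undecided.
        return True if fact in facts else "undecided"

    def query_closed(fact):
        # Closed-world assumption: an unasserted statement is false.
        return fact in facts

    print(query_open(("bird", "opus")))    # 'undecided'
    print(query_closed(("bird", "opus")))  # False
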
> A typical example of an open model is a set of statements in
> predicate logic. This could be changed to a closed model simply by
> applying the closed-world assumption. A possibly more typical example
> of a closed-world model is a computer program that outputs the data
> so far (and predicts specific future output), as in Solomonoff
> induction.
> 
> These two types of model are very different! One important difference
> is that we can simply *add* to an open model if we need to account
> for new data, while we must always *modify* a closed model if we want
> to account for more information.
> 
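A toy contrast of the two update styles (with invented facts and data): the 
open model is extended by adding an assertion, while the closed model, being 
a complete description of the data, has to be replaced outright.

    # Open model: account for new data by adding an assertion.
    open_model = {("bird", "tweety")}
    open_model.add(("penguin", "opus"))

    # Closed model: a complete description (here a program for the data
    # so far); new data means swapping in a different program.
    def closed_model_v1():
        return [0, 1, 1, 0]       # generates exactly the old data

    def closed_model_v2():
        return [0, 1, 1, 0, 1]    # a different program for the new datum
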
> The key difference I want to ask about here is: a length-based
> bayesian prior seems to apply well to closed models, but not so well
> to open models.
> 
> First, such priors are generally supposed to apply to entire joint
> states; in other words, probability theory itself (and in particular
> bayesian learning) is built with an assumption of an underlying space
> of closed models, not open ones.
> 
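For concreteness, one common way such a prior is set up: weight each complete 
description by 2 to the minus its length in bits, then normalize. A sketch 
with made-up description lengths, not tied to any particular model space:

    # Made-up description lengths, in bits, for three complete (closed) models.
    lengths_in_bits = {"model_A": 10, "model_B": 12, "model_C": 20}

    weights = {m: 2.0 ** -bits for m, bits in lengths_in_bits.items()}
    total = sum(weights.values())
    prior = {m: w / total for m, w in weights.items()}

    print(prior)   # shorter descriptions carry most of the prior mass
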
> Second, an open model always has room for additional stuff somewhere
> else in the universe, unobserved by the agent. This suggests that,
> made probabilistic, open models would generally predict universes
> with infinite description length. Whatever information was known,
> there would be an infinite number of chances for other unknown things
> to be out there; so it seems as if the probability of *something*
> more being there would converge to 1. (This is not, however,
> mathematically necessary.) If so, then taking that other thing into
> account, the same argument would still suggest something *else* was
> out there, and so on; in other words, a probabilistic
> open-model-learner would seem to predict a universe with an infinite
> description length. This does not make it easy to apply the
> description length principle.
> 
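That parenthetical can be made concrete: if each unobserved "chance" carries 
the same fixed probability of holding something, the probability that 
*something* more exists does go to 1 as chances accumulate; but if those 
probabilities fall off quickly enough, the limit stays below 1, so the 
conclusion is not forced. A sketch with arbitrary numbers:

    def p_something_more(probs):
        # Probability that at least one independent "chance" holds
        # something, i.e. one minus the probability all of them are empty.
        p_all_empty = 1.0
        for p in probs:
            p_all_empty *= (1.0 - p)
        return 1.0 - p_all_empty

    n = 1000
    constant = [0.01] * n                            # same small chance everywhere
    decaying = [2.0 ** -(k + 1) for k in range(n)]   # chances thin out geometrically

    print(p_something_more(constant))   # ~0.99996: heads toward 1
    print(p_something_more(decaying))   # ~0.711: bounded away from 1
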
> I am not arguing that open models are a necessity for AI, but I am
> curious if anyone has ideas of how to handle this. I know that Pei
> Wang suggests abandoning standard probability in order to learn open
> models, for example.
> 
> --Abram Demski
> 


