Hmmm.

This paper derives entropy-based criteria for causal inference, by
tweaking some causal Bayes net math... interesting...

http://arxiv.org/pdf/1407.2256.pdf

This paper defines a notion of Causation Entropy

http://arxiv.org/pdf/1401.7574.pdf

and derives some nice theorems about it, but their analytical methods
are only applicable when you have lots of high-quality time series
data
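
Their pairwise version, conditioned on the target's own past, is (if I'm reading them right) C_{X->Y} = H(Y_{t+1} | Y_t) - H(Y_{t+1} | Y_t, X_t).  With plenty of discrete time-series data this is a few lines with a plug-in estimator -- a rough sketch of the idea, not their actual estimator:

```python
import math
import random
from collections import Counter

def H(events):
    # plug-in Shannon entropy (in bits) of a list of hashable outcomes
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in Counter(events).values())

def causation_entropy(x, y):
    # pairwise causation entropy of X onto Y, conditioned on Y's own past:
    #   C_{X->Y} = H(Y_{t+1} | Y_t) - H(Y_{t+1} | Y_t, X_t)
    # computed via H(A|B) = H(A,B) - H(B); positive values suggest X_t
    # carries predictive info about Y_{t+1} beyond what Y_t already has
    h_y1_y = H(list(zip(y[1:], y[:-1]))) - H(y[:-1])
    h_y1_yx = H(list(zip(y[1:], y[:-1], x[:-1]))) - H(list(zip(y[:-1], x[:-1])))
    return h_y1_y - h_y1_yx

# toy check: y copies x with a one-step lag, so X drives Y but not vice versa
rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(2000)]
y = [0] + x[:-1]
print(causation_entropy(x, y))  # close to 1 bit
print(causation_entropy(y, x))  # close to 0 bits
```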

-- ben


On Wed, Nov 26, 2014 at 4:31 PM, Ben Goertzel <[email protected]> wrote:
> Interesting -- these guys use some mutual information methods for
> guessing the direction of
> causation from the (non-temporal) expression data, as described in
> equations 3 and 4 of
>
> http://www.clopinet.com/isabelle/Projects/NIPS2008/factsheets/LOCANET_OlsenFactsheet.pdf
>
> and this poster
>
> http://www.ulb.ac.be/di/map/colsen/poster.pdf
>
> Of course, their analysis is mathematically based on assumptions that
> aren't generally true,
>
> "
> (a) causal sufficiency, (b) causal Markov and (c) faithfulness assumptions
> "
>
> (see http://mlg.eng.cam.ac.uk/zoubin/SALD/Intro-Causal.pdf for
> interpretation of these) ...
>
> but leaning on assumptions like these is usually unavoidable for any
> pragmatic computational method...
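
(For a flavor of what those assumptions buy you: here's a generic conditional-mutual-information check that orients a collider from non-temporal data -- the standard constraint-based trick, not necessarily their equations 3 and 4:)

```python
import math
import random
from collections import Counter

def H(ev):
    # plug-in Shannon entropy (bits)
    n = len(ev)
    return -sum((c / n) * math.log2(c / n) for c in Counter(ev).values())

def mi(a, b):
    # mutual information I(A;B) = H(A) + H(B) - H(A,B)
    return H(a) + H(b) - H(list(zip(a, b)))

def cmi(a, b, c):
    # conditional MI I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)
    return H(list(zip(a, c))) + H(list(zip(b, c))) - H(list(zip(a, b, c))) - H(c)

# synthetic collider X -> Z <- Y: X and Y are independent coins, Z = X xor Y.
# under causal Markov + faithfulness, I(X;Y) ~ 0 while I(X;Y|Z) > 0, which
# is the signature that lets you orient both edges *into* Z
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(5000)]
y = [rng.randint(0, 1) for _ in range(5000)]
z = [a ^ b for a, b in zip(x, y)]
print(mi(x, y), cmi(x, y, z))  # roughly 0 vs roughly 1 bit
```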
>
> -- ben
>
>
> On Wed, Nov 26, 2014 at 12:34 PM, Ben Goertzel <[email protected]> wrote:
>> (half-baked brainstorming below, beware....  What I'm musing about is
>> how to guess the causal direction of a correlation based on
>> non-temporal data...)
>>
>>> I've been re-reading this nice old paper on the foundations of the Second 
>>> Law...
>>>
>>> http://necsi.edu/projects/baranger/cce.pdf
>>
>> It's a physics-y paper but I think one can apply it to AGI with some
>> appropriate set-up
>>
>> The key thing Baranger's arguments show there is that, within the
>> view of a coarse-graining observer (one whose precision of
>> observation is much less than the precision of the universe he's
>> observing), it's more likely for
>>
>> -- two states that seemed the same at time T, to seem different at time T+1
>>
>> than for
>>
>> -- two states that seemed different at time T, to seem the same at time T+1
>>
>> (this is for an arbitrary trajectory in a conservative dynamical
>> system, blabla...)
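
(You can check this numerically with any area-preserving chaotic map; below I use Arnold's cat map as an arbitrary stand-in for the conservative system, and a 10x10 grid as the coarse-graining observer -- both choices are just mine, for illustration:)

```python
import random

def cat_map(x, y):
    # Arnold's cat map: an area-preserving ("conservative") chaotic map on the torus
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def cell(x, y, n=10):
    # the coarse-graining observer: only the n-by-n grid cell is visible
    return (int(x * n), int(y * n))

def split_vs_merge(trials=200000, seed=0):
    # compare: P(seemed same at T -> seem different at T+1)
    #     vs.  P(seemed different at T -> seem same at T+1)
    rng = random.Random(seed)
    same = split = diff = merge = 0
    for _ in range(trials):
        p = (rng.random(), rng.random())
        q = (rng.random(), rng.random())
        before = cell(*p) == cell(*q)
        after = cell(*cat_map(*p)) == cell(*cat_map(*q))
        if before:
            same += 1
            split += not after
        else:
            diff += 1
            merge += after
    return split / same, merge / diff

split_rate, merge_rate = split_vs_merge()
print(split_rate, merge_rate)  # splitting dominates merging
```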
>>
>> Now, suppose we apply this reasoning (hands waving kinda wildly) to a
>> space of **situations** in some universe.  Each point in the state
>> space is a certain situation.   A trajectory in the state space is a
>> series of situations, e.g. the series of situations encountered by
>> some agent.  Suppose that the trajectories of situations encountered
>> by agents, when plotted in situation-space, are complex and
>> fractal-looking like the ones in Baranger's paper.  Each agent may be
>> associated with a probability distribution over trajectories (the
>> possible histories it experiences).
>>
>> A possible commonsensical cause or effect like "rain" or "dark", in
>> this framework, corresponds to a set of situations (e.g. the
>> situations involving rain).   Thus it corresponds to a certain region
>> in the situation space.  Let's call these "event-sets".   Each point
>> on a trajectory through situation-space is going to pass through
>> various event-sets.
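
(Concretely, I picture an event-set as a predicate over situations; the event names and situation fields below are invented purely for illustration:)

```python
# event-sets as predicates over situations; names/fields are made-up examples
event_sets = {
    "rain": lambda s: s["precip"] > 0.0,
    "dark": lambda s: s["light"] < 0.2,
}

def events_along(trajectory):
    # which event-sets each situation on a trajectory falls inside
    return [{name for name, pred in event_sets.items() if pred(s)}
            for s in trajectory]

traj = [{"precip": 0.0, "light": 0.9},   # a clear day
        {"precip": 2.1, "light": 0.1}]   # a rainy night
print(events_along(traj))  # [set(), {'rain', 'dark'}] (set order may vary)
```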
>>
>> To say what it means for one event-set to cause another, relative to a
>> certain set of trajectories (or probability distribution over
>> trajectories), we can use the definitions from Luke Glynn's paper
>> http://philsci-archive.pitt.edu/9729/1/Website_Version_2.pdf
>>
>> What Baranger's line of argument (via which he derives the Second Law)
>> suggests is that overall
>>
>> -- same cause, different effects
>>
>> is more likely than
>>
>> -- different cause, same effects
>>
>> This is because "same cause, different effects" means "two different
>> situations, which are put into the same event-set by the observer,
>> lead to two different situations, which are put into different
>> event-sets by the observer", etc.
>>
>> Since event-sets are regions of situation-space, and since (because
>> of the coarse-graining observer) the set of situations occupied at an
>> earlier time-point along a bundle of trajectories is generally less
>> spread-out through situation-space than the set occupied at a later
>> time-point, the cause is likely to be less spread-out than the
>> effect.
>>
>> Thus overall we might conclude: given a pair of event-sets (X,Y) that
>> are correlated (meaning e.g. that there is mutual information between
>> the distribution of particular events within category X, and the
>> distribution of particular events within category Y),
>>
>> -- the one with greater spread (i.e. more differentiation, i.e.
>> higher entropy, among the different particular situations in the
>> event-set) is more likely to be in the future...
>>
>> The basic idea is: if event-categories X and Y are sufficiently
>> correlated that it seems likely that one of
>>
>> A)  The states of the universe corresponding to observation of X tend
>> to causally affect the states of the universe corresponding to
>> observation of Y [within the assumed set of trajectories along which
>> causation is being estimated]
>>
>> or
>>
>> B) The states of the universe corresponding to observation of Y tend
>> to causally affect the states of the universe corresponding to
>> observation of X [within the assumed set of trajectories along which
>> causation is being estimated]
>>
>> then, to choose which of X and Y is the cause, on average we will
>> guess right more often if we assume the lower-entropy one of X and Y
>> is the cause and the higher-entropy one is the effect ...
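
(As a decision rule that's basically one line -- a sketch assuming discrete observations of X and Y, with made-up example data:)

```python
import math
from collections import Counter

def entropy(samples):
    # plug-in Shannon entropy (bits) of a list of discrete observations
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def guess_cause(x_samples, y_samples):
    # the heuristic above: of a correlated pair, guess the lower-entropy
    # (less spread-out) event-set as cause, the higher-entropy one as effect
    return "X" if entropy(x_samples) < entropy(y_samples) else "Y"

# made-up example: X is nearly constant, Y is much more differentiated
x = ["no-rain"] * 9 + ["rain"]
y = ["dry", "dry", "wet", "mud", "wet", "dry", "mud", "wet", "dry", "flood"]
print(guess_cause(x, y))  # X
```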
>>
>> So according to this way of thinking, the asymmetry required in
>> Glynn's analysis of causality could potentially be taken as entropy
>> rather than time...
>>
>> maybe ;)
>>
>> -- Ben
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "The reasonable man adapts himself to the world: the unreasonable one
> persists in trying to adapt the world to himself. Therefore all
> progress depends on the unreasonable man." -- George Bernard Shaw





-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now