>
> Are you familiar with "causal networks" as developed by Judea Pearl,
> etc.?


I have heard the name thrown around but that's the extent of it. I'll have
to read up on this.

But once the causal network theory has done its
> job, one is left with a set of events that are *potentially*
> consistently labeled causal -- one still needs some other kind of
> insight to tell what should really be considered causal or not


That's a bit vague. Can you put your finger on precisely which types of
event pairs would mistakenly be considered potentially causal by this
approach? Based on what you have said so far, I would think it would just
be a matter of formulating an experiment. The agent could test its ability
to control one variable (the effect) by manipulating another (the cause).
If the cause itself cannot be controlled by the agent, then whether it is
"truly" a cause is a moot point: the information is inaccessible and
consequently irrelevant to the agent's internal model.
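For concreteness, here is the kind of experiment I have in mind, as a
minimal Python sketch (the variables and probabilities are invented purely
for illustration): forcing the cause shifts the effect's distribution,
while forcing a mere correlate of the cause does not.

```python
# Toy interventional test: C -> E, while Z merely correlates with E
# through C. An agent that can force ("do") a variable can distinguish
# the true cause from the correlate.
import random

def world(c=None):
    """One trial of a toy world. Passing c forces (intervenes on) C."""
    if c is None:
        c = random.random() < 0.5
    z = c if random.random() < 0.9 else not c   # Z is a correlate of C
    e = c if random.random() < 0.9 else not c   # E is an effect of C
    return z, e

def p_effect_given_do(intervene_on, value, trials=20000):
    """Estimate P(E = True) when we force a variable to `value`."""
    hits = 0
    for _ in range(trials):
        if intervene_on == "C":
            _, e = world(c=value)
        else:
            # Intervening on Z: Z is overwritten, but E is still
            # generated from C, so forcing Z cannot move E.
            c = random.random() < 0.5
            e = c if random.random() < 0.9 else not c
        hits += e
    return hits / trials

random.seed(0)
# Forcing C moves E strongly; forcing Z leaves E's distribution near 0.5.
print(p_effect_given_do("C", True), p_effect_given_do("C", False))
print(p_effect_given_do("Z", True), p_effect_given_do("Z", False))
```

If the agent cannot perform the intervention at all, the test simply
never runs, which is the "moot point" above.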

On Tue, Jan 13, 2015 at 11:34 AM, Ben Goertzel <[email protected]> wrote:

> Hi Aaron,
>
> Are you familiar with "causal networks" as developed by Judea Pearl,
> etc.?  The basic theory thereof involves precisely this type of
> manipulation that you're talking about....
>
> However, I've never been fully satisfied with causal network theory.
> Ultimately my conclusion is that, given a network of probabilistically
> related events, causal network theory helps rule out which
> relationships are almost surely NOT causal but rather consequential
> from other events.  But once the causal network theory has done its
> job, one is left with a set of events that are *potentially*
> consistently labeled causal -- one still needs some other kind of
> insight to tell what should really be considered causal or not
>
>
> -- Ben
>
> On Tue, Jan 13, 2015 at 11:23 AM, Aaron Hosford <[email protected]>
> wrote:
> > Ben,
> >
> > I'm not sure this would be useful within the scope of what you're
> > working on, but I don't think probabilities are the right basis for
> > understanding causation. Causation, to me, requires interaction between
> > objects or events that results in a transfer of information/control.
> > Think of a simulation. If changing the simulation in some way (e.g.
> > modifying the state of an object, or introducing/removing an object or
> > event) changes whether a predicate of the simulation's (future) history
> > is satisfied, then that change can be said to be the cause of the
> > predicate's (or its negation's) satisfaction. In other words, you can
> > control one locus of the model indirectly by modifying some other locus
> > of it. (It makes sense that human beings would be preoccupied with flow
> > of control, given our comparative expertise at controlling our
> > environments relative to other species.) This notion of causality is a
> > purely deterministic one, which is in agreement with most people's
> > intuitive understanding of causality. Probabilities are introduced only
> > when we take into account potential uncertainties, inaccuracies, or
> > imprecisions that may be present in the model used for simulation; they
> > are averages or densities of deterministic behaviors over the space of
> > possibilities. If you start from this deterministic perspective and
> > then ask yourself how you can derive probabilistic models of such a
> > deterministic phenomenon under uncertainty, I think the relationships
> > between causality, probability, correlation, and flow of time will
> > become fairly evident, and it will be much easier to put together a
> > system that reconstructs underlying causal relationships from sample
> > data.
> >
> > On Tue, Nov 25, 2014 at 5:53 AM, Ben Goertzel via AGI <[email protected]>
> > wrote:
> >>
> >> Hmmm...
> >>
> >> Having thought about this more, while I was indeed traveling backwards
> >> in time when I wrote the previous email, it's not too relevant anyhow
> >> because the Second Law only holds globally, and in complex systems
> >> there are many subsystems that are behaving anti-entropically.  So I'm
> >> not sure one can use the law of entropy increase to draw conclusions
> >> about local causality.
> >>
> >> However, I was thinking about section 6.3.2 of
> >>
> >> http://cqi.inf.usi.ch/qic/94_Lloyd.pdf
> >>
> >> where Seth Lloyd observes that
> >>
> >> "Having a common effect does not induce correlation between events,
> >> while having a common cause does."
> >>
> >> I.e.
> >>
> >> -- In the case of two causes with a common effect ... there is an
> >> increase of information from past to future (the probability spread
> >> across two causes is now concentrated on a single effect).  There is
> >> no correlation in the past (between the causes).  This is the opposite
> >> direction of the Second Law of Thermodynamics.
> >>
> >> -- In the case of two effects with a common cause ...  there is a
> >> decrease of information from past to future (the probability
> >> concentrated in one cause is now spread across two effects).   There
> >> is correlation in the future (between the effects).  This is in the
> >> direction of the Second Law of Thermodynamics.
> >>
> >> ...
> >>
> >> I.e. in many cases the direction of causal influence may be
> >> identifiable as the direction of increasing correlation....   I'm not
> >> sure exactly what the limits of this conclusion are, though.
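Lloyd's observation quoted above is easy to check by simulation. Here is a
toy sketch (the variables and probabilities are my own invention, not from
the paper): two effects of a common cause come out correlated, while two
independent causes of a common effect do not.

```python
# Compare correlation under a common-cause structure (C -> E1, C -> E2)
# versus a common-effect structure (A -> E <- B).
import random

def correlation(pairs):
    """Pearson correlation of a list of (x, y) samples of 0/1 values."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

random.seed(1)
flip = lambda p: 1 if random.random() < p else 0

# Common cause: C drives both E1 and E2, so E1 and E2 are correlated.
common_cause = []
for _ in range(20000):
    c = flip(0.5)
    e1 = c if flip(0.9) else 1 - c
    e2 = c if flip(0.9) else 1 - c
    common_cause.append((e1, e2))

# Common effect: independent A and B jointly produce E; A and B stay
# uncorrelated (unless one conditions on E, which we do not do here).
common_effect = []
for _ in range(20000):
    a, b = flip(0.5), flip(0.5)
    e = a & b  # the common effect; generated but not conditioned on
    common_effect.append((a, b))

print(correlation(common_cause))   # clearly positive
print(correlation(common_effect))  # near zero
```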
> >>
> >> ...
> >>
> >> So --   What if one has two sets of variables, S and T, and there is
> >> significant mutual information between the values of S and the values
> >> of T, as evaluated across different cases...?   So, suppose we have
> >> both
> >>
> >> S --> T
> >>
> >> and
> >>
> >> T --> S
> >>
> >> in a sense....    But, if there is significantly more correlation
> >> among the variables within T than among the variables within S, then
> >> we can say that it's more likely that T is the effect and S is the
> >> cause...
> >>
> >> The asymmetry used to identify causation is then one of correlation
> >> rather than of temporality directly...
> >>
> >> This may be a way of heuristically inferring causality from
> >> non-temporal data, if one has a sufficient ensemble of data samples...
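The heuristic sketched above can be illustrated in a few lines of Python.
This is a toy setup of my own devising, not an implementation from any
paper: each variable in T mixes several S variables, so T's variables end
up more inter-correlated than S's, and the heuristic guesses S as cause.

```python
# Correlation-asymmetry heuristic: guess that the more internally
# correlated variable set is the effect.
import random

def avg_pairwise_corr(rows):
    """Mean absolute pairwise Pearson correlation among columns."""
    cols = list(zip(*rows))
    def corr(x, y):
        m = len(x)
        mx, my = sum(x) / m, sum(y) / m
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / m
        vx = sum((a - mx) ** 2 for a in x) / m
        vy = sum((b - my) ** 2 for b in y) / m
        return cov / (vx * vy) ** 0.5
    n = len(cols)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(corr(cols[i], cols[j])) for i, j in pairs) / len(pairs)

random.seed(2)
S_rows, T_rows = [], []
for _ in range(5000):
    s = [random.gauss(0, 1) for _ in range(3)]            # independent causes
    t = [sum(s) + random.gauss(0, 0.5) for _ in range(3)]  # shared ancestry
    S_rows.append(s)
    T_rows.append(t)

cS, cT = avg_pairwise_corr(S_rows), avg_pairwise_corr(T_rows)
guess = "S -> T" if cT > cS else "T -> S"
print(cS, cT, guess)
```

Of course this only works when the generating structure actually pumps
correlation into the effect side, which is the open question about the
limits of the heuristic.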
> >>
> >> -- Ben
> >>
> >>
> >> On Tue, Nov 25, 2014 at 1:46 PM, Ben Goertzel <[email protected]> wrote:
> >> >
> >> > Hmm, maybe you're right, maybe I was traveling backwards in time
> >> > when I wrote that ...
> >> >
> >> > (More later)
> >> >
> >> > On Tuesday, November 25, 2014, martin biehl <[email protected]> wrote:
> >> >>
> >> >> Hm, sounds interesting, but I don't get it either. If entropy
> >> >> increases, the uncertainty of the state increases and information
> >> >> (about the state) decreases, as you say -- but why would the past
> >> >> then contain more information about the future than vice versa?
> >> >> Let X be the past and Y be the future; then, since mutual
> >> >> information is symmetric:
> >> >> H(X) - H(X|Y) = H(Y) - H(Y|X)
> >> >> Now H(Y) > H(X) because of entropy increase, so
> >> >> H(Y|X) > H(X|Y)
> >> >> and the future should be more uncertain given the past than vice
> >> >> versa. Where did this go wrong?
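The identity Martin uses is just the symmetry of mutual information, and
it is easy to verify numerically on a toy joint distribution (the numbers
below are an arbitrary choice of mine, picked so that Y's marginal entropy
exceeds X's):

```python
# Sanity check: H(X) - H(X|Y) = H(Y) - H(Y|X), so whichever marginal
# entropy is larger, the matching conditional entropy is larger too.
from math import log2

# p[x][y]: a joint distribution where Y (the "future") is more uncertain.
p = [[0.4, 0.05, 0.05],
     [0.05, 0.2, 0.25]]

px = [sum(row) for row in p]                              # marginal of X
py = [sum(p[x][y] for x in range(2)) for y in range(3)]   # marginal of Y

def H(dist):
    """Shannon entropy in bits."""
    return -sum(q * log2(q) for q in dist if q > 0)

Hxy = H([q for row in p for q in row])  # joint entropy H(X,Y)
Hx, Hy = H(px), H(py)
H_x_given_y = Hxy - Hy                  # chain rule: H(X|Y) = H(X,Y) - H(Y)
H_y_given_x = Hxy - Hx

# Mutual information is symmetric, exactly as in the derivation above:
assert abs((Hx - H_x_given_y) - (Hy - H_y_given_x)) < 1e-12
print(Hx, Hy, H_x_given_y, H_y_given_x)
```

Here Hy > Hx, and accordingly H(Y|X) > H(X|Y): given entropy increase, the
future is more uncertain given the past than vice versa, just as Martin
concludes.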
> >> >>
> >> >>
> >> >> On Tue, Nov 25, 2014 at 2:13 AM, Ben Goertzel via AGI
> >> >> <[email protected]> wrote:
> >> >>>
> >> >>> Information is negentropy, so increase of entropy implies decrease
> >> >>> of information...
> >> >>>
> >> >>> Acquiring information about a system is associated with entropy
> >> >>> production...
> >> >>>
> >> >>> On Tue, Nov 25, 2014 at 9:59 AM, Aaron Nitzkin <[email protected]>
> >> >>> wrote:
> >> >>> > Sorry, I must be a little confused -- probably thinking from
> >> >>> > the wrong perspective . . . I would think that there is more
> >> >>> > information in the future about the past than vice versa,
> >> >>> > because we know more about the past than we do about the future.
> >> >>> > Also, doesn't an increase in entropy imply an increase in
> >> >>> > information (because it requires more information to specify the
> >> >>> > configuration of a system with higher entropy than the same
> >> >>> > system with lower entropy)?
> >> >>> >
> >> >>> > On Tue, Nov 25, 2014 at 8:27 AM, Ben Goertzel <[email protected]>
> >> >>> > wrote:
> >> >>> >>
> >> >>> >> In the early part of the paper, the author clarifies that
> >> >>> >> while he assumes "temporal precedence as an aspect of
> >> >>> >> causality" for simplicity, his approach would actually work
> >> >>> >> with any other systematic way of assigning asymmetric
> >> >>> >> directions to relationships between events.
> >> >>> >>
> >> >>> >> I have been thinking a lot about how to infer causality from
> >> >>> >> non-time-series data (e.g. categorical gene expression data),
> >> >>> >> and this is a case where looking at some other sort of
> >> >>> >> asymmetry than temporal precedence (but one that may generally
> >> >>> >> be correlated with temporal precedence) seems to make sense.
> >> >>> >> E.g. I've been thinking about looking at informational
> >> >>> >> asymmetry: if one has P(A=a | B=b), one can look at whether
> >> >>> >> the distribution for A gives more information about the
> >> >>> >> distribution for B, or vice versa.  This informational
> >> >>> >> asymmetry can be used similarly to temporal asymmetry in
> >> >>> >> defining causality.  Furthermore, on average it is going to
> >> >>> >> correlate with temporal asymmetry, because the past tends to
> >> >>> >> contain more information about the future than vice versa (due
> >> >>> >> to entropy increase, roughly speaking... but there's more to
> >> >>> >> the story here...)
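The informational asymmetry described above can be made concrete with a
deterministic toy example (my own construction, not from the paper): when
B is a many-to-one function of A, knowing A pins down B exactly, while
knowing B leaves A uncertain, and that asymmetry can stand in for temporal
precedence.

```python
# Estimate the two conditional entropies H(B|A) and H(A|B) from samples
# of a many-to-one relationship, and read the asymmetry as a direction.
from collections import Counter
from math import log2

# A is uniform on 0..3; B = A mod 2 is a coarse-grained "effect" of A.
samples = [(a, a % 2) for a in range(4) for _ in range(100)]

def cond_entropy(pairs):
    """H(second | first) in bits, from empirical (first, second) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    marg = Counter(f for f, _ in pairs)
    return -sum(c / n * log2((c / n) / (marg[f] / n))
                for (f, s), c in joint.items())

H_B_given_A = cond_entropy([(a, b) for a, b in samples])
H_A_given_B = cond_entropy([(b, a) for a, b in samples])
print(H_B_given_A, H_A_given_B)
```

H(B|A) comes out 0 while H(A|B) comes out 1 bit, so by the asymmetry
heuristic one would label A the cause and B the effect.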
> >> >>> >>
> >> >>> >> -- Ben
> >> >>> >>
> >> >>> >>
> >> >>> >> On Tue, Nov 25, 2014 at 5:34 AM, Michael van der Gulik
> >> >>> >> <[email protected]> wrote:
> >> >>> >> > "Chapter 1. Quantum mechanics... "
> >> >>> >> >
> >> >>> >> > It's a nice article; I'll add it to my reading list.
> >> >>> >> > Prediction involves working out what causes what, so it's
> >> >>> >> > pretty fundamental.
> >> >>> >> >
> >> >>> >> > I have a question. Causation in my mind seems to always
> >> >>> >> > involve time, and I suspect it's impossible to have causation
> >> >>> >> > without including timing. So...
> >> >>> >> >
> >> >>> >> > Is it possible for a cause to happen at exactly the same
> >> >>> >> > moment as its effect?
> >> >>> >> >
> >> >>> >> > Is it possible for a cause to happen after its effect?
> >> >>> >> >
> >> >>> >> > One instance I'm trying to get my head around is when an
> >> >>> >> > intelligence anticipates a cause (which is an event in the
> >> >>> >> > future), which results in the intelligence acting such that
> >> >>> >> > the effect occurs before the cause. Perhaps the anticipation
> >> >>> >> > itself is the causal event.
> >> >>> >> >
> >> >>> >> > Regards,
> >> >>> >> > Michael.
> >> >>> >> >
> >> >>> >> >
> >> >>> >> > On Sun, Nov 23, 2014 at 7:17 AM, Ben Goertzel
> >> >>> >> > <[email protected]> wrote:
> >> >>> >> >>
> >> >>> >> >> I just happened across this 2011 paper on the probabilistic
> >> >>> >> >> foundation
> >> >>> >> >> of causality,
> >> >>> >> >>
> >> >>> >> >> http://philsci-archive.pitt.edu/9729/1/Website_Version_2.pdf
> >> >>> >> >>
> >> >>> >> >> which seems to carefully clarify a bunch of issues that remain
> >> >>> >> >> dangling in prior discussions of the topic
> >> >>> >> >>
> >> >>> >> >> It seems to give a good characterization of what it means
> >> >>> >> >> for "P to appear to cause Q, based on the knowledge-base of
> >> >>> >> >> observer O"
> >> >>> >> >> appear to cause Q, based on the knowledge-base of observer O"
> >> >>> >> >>
> >> >>> >> >> --
> >> >>> >> >> Ben Goertzel, PhD
> >> >>> >> >> http://goertzel.org
> >> >>> >> >>
> >> >>> >> >> "The reasonable man adapts himself to the world: the
> >> >>> >> >> unreasonable
> >> >>> >> >> one
> >> >>> >> >> persists in trying to adapt the world to himself. Therefore
> all
> >> >>> >> >> progress depends on the unreasonable man." -- George Bernard
> >> >>> >> >> Shaw
> >> >>> >> >>
> >> >>> >> >> --
> >> >>> >> >> You received this message because you are subscribed to the
> >> >>> >> >> Google
> >> >>> >> >> Groups
> >> >>> >> >> "Artificial General Intelligence" group.
> >> >>> >> >> To unsubscribe from this group and stop receiving emails from
> >> >>> >> >> it,
> >> >>> >> >> send
> >> >>> >> >> an
> >> >>> >> >> email to
> >> >>> >> >> [email protected].
> >> >>> >> >> For more options, visit https://groups.google.com/d/optout.
> >> >>> >> >
> >> >>> >> >
> >> >>> >> >
> >> >>> >> >
> >> >>> >> > --
> >> >>> >> > http://gulik.pbwiki.com/
> >> >>> >> >
> >> >>> >>
> >> >>> >>
> >> >>> >>
> >> >>> >>
> >> >>> >
> >> >>> >
> >> >>>
> >> >>>
> >> >>>
> >> >>>
> >> >>>
> >> >>> -------------------------------------------
> >> >>> AGI
> >> >>> Archives: https://www.listbox.com/member/archive/303/=now
> >> >>> RSS Feed:
> >> >>> https://www.listbox.com/member/archive/rss/303/10872673-8f99760d
> >> >>> Modify Your Subscription:
> >> >>> https://www.listbox.com/member/?&;
> >> >>> Powered by Listbox: http://www.listbox.com
> >> >>
> >> >>
> >> >
> >> >
> >> >
> >>
> >>
> >>
> >>
> >>
> >
> >
>
>
>
>



