Hmm, maybe you're right; maybe I was traveling backwards in time when I
wrote that ...

(More later)

On Tuesday, November 25, 2014, martin biehl <[email protected]> wrote:

> Hm, sounds interesting, but I don't get it either. If entropy increases,
> the uncertainty of the state increases and information (about the state)
> decreases, as you say. But why would the past then contain more information
> about the future than vice versa? Let X be the past and Y be the future;
> then, as mutual information is symmetric:
> H(X) - H(X|Y) = H(Y) - H(Y|X)
> Rearranging gives H(Y|X) - H(X|Y) = H(Y) - H(X), and H(Y) > H(X) because
> of entropy increase. Then
> H(Y|X) > H(X|Y)
> and the future should be more uncertain given the past than vice versa.
> Where did this go wrong?
>
>
> On Tue, Nov 25, 2014 at 2:13 AM, Ben Goertzel via AGI <[email protected]> wrote:
>
>> Information is negentropy, so increase of entropy implies decrease of
>> information...
>>
>> Acquiring information about a system is associated with entropy
>> production...
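>>
>> (In the sense that, for a system with N possible microstates, negentropy
>> can be taken as J(X) = log2(N) - H(X), the gap between maximum and actual
>> entropy; so as H(X) grows, J(X) shrinks.)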
>>
>> On Tue, Nov 25, 2014 at 9:59 AM, Aaron Nitzkin <[email protected]> wrote:
>> > Sorry, I must be a little confused -- probably thinking from the wrong
>> > perspective . . . I would think that there is more information in the
>> > future about the past than vice versa, because we know more about the
>> > past than we do about the future. Also, doesn't an increase in entropy
>> > imply an increase in information, because it requires more information
>> > to specify the configuration of a system with higher entropy than the
>> > same system with lower entropy?
>> >
>> > On Tue, Nov 25, 2014 at 8:27 AM, Ben Goertzel <[email protected]> wrote:
>> >>
>> >> In the early part of the paper, the author clarifies that while he
>> >> assumes "temporal precedence as an aspect of causality" for
>> >> simplicity, his approach would actually work with any other systematic
>> >> way of assigning asymmetric directions to relationships between events.
>> >>
>> >> I have been thinking a lot about how to infer causality from
>> >> non-time-series data (e.g. categorical gene expression data), and this
>> >> is a case where looking at some other sort of asymmetry than temporal
>> >> precedence (but one that may be generally correlated with temporal
>> >> precedence) seems to make sense. E.g. I've been thinking about
>> >> looking at informational asymmetry: if one has P(A = a | B = b), one can
>> >> look at whether the distribution for A gives more information about
>> >> the distribution for B, or vice versa. This informational asymmetry
>> >> can be used similarly to temporal asymmetry in defining causality.
>> >> Furthermore, on average it is going to correlate with temporal
>> >> asymmetry, because the past tends to contain more information about
>> >> the future than vice versa (due to entropy increase, roughly
>> >> speaking... but there's more story here...)
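>> >>
>> >> (As a minimal sketch of how that comparison could go on categorical
>> >> data -- one possible operationalization, not a worked-out method --
>> >> one can estimate both conditional entropies from paired samples and
>> >> score the direction with the smaller one:
>> >>
>> >> from collections import Counter
>> >> from math import log2
>> >>
>> >> def cond_entropy(pairs):
>> >>     """Estimate H(second | first) from a list of (first, second) samples."""
>> >>     joint = Counter(pairs)
>> >>     marg = Counter(a for a, _ in pairs)
>> >>     n = len(pairs)
>> >>     return -sum(c / n * log2(c / marg[a]) for (a, _), c in joint.items())
>> >>
>> >> def asymmetry(xs, ys):
>> >>     """Positive when H(Y|X) < H(X|Y), i.e. when X looks more
>> >>     like the cause of Y than vice versa."""
>> >>     return cond_entropy(list(zip(ys, xs))) - cond_entropy(list(zip(xs, ys)))
>> >>
>> >> with the usual caveats about sample size, and noting that conditional
>> >> entropy is just one possible asymmetry measure here.)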
>> >>
>> >> -- Ben
>> >>
>> >>
>> >> On Tue, Nov 25, 2014 at 5:34 AM, Michael van der Gulik
>> >> <[email protected]> wrote:
>> >> > "Chapter 1. Quantum mechanics... "
>> >> >
>> >> > It's a nice article; I'll add it to my reading list. Prediction
>> involves
>> >> > working out what causes what, so it's pretty fundamental.
>> >> >
>> >> > I have a question. Causation in my mind seems to always involve
>> >> > time, and I suspect it's impossible to have causation without
>> >> > including timing. So...
>> >> >
>> >> > Is it possible for a cause to happen at exactly the same moment as
>> >> > its effect?
>> >> >
>> >> > Is it possible for a cause to happen after its effect?
>> >> >
>> >> > One instance I'm trying to get my head around is when an intelligence
>> >> > anticipates a cause (which is an event in the future), which results
>> >> > in the intelligence acting such that the effect occurs before the
>> >> > cause. Perhaps the anticipation itself is the causal event.
>> >> >
>> >> > Regards,
>> >> > Michael.
>> >> >
>> >> >
>> >> > On Sun, Nov 23, 2014 at 7:17 AM, Ben Goertzel <[email protected]> wrote:
>> >> >>
>> >> >> I just happened across this 2011 paper on the probabilistic
>> >> >> foundation of causality,
>> >> >>
>> >> >> http://philsci-archive.pitt.edu/9729/1/Website_Version_2.pdf
>> >> >>
>> >> >> which seems to carefully clarify a bunch of issues that remain
>> >> >> dangling in prior discussions of the topic.
>> >> >>
>> >> >> It seems to give a good characterization of what it means for "P to
>> >> >> appear to cause Q, based on the knowledge-base of observer O".
>> >> >
>> >> > --
>> >> > http://gulik.pbwiki.com/
>> >> >
>> >>
>> >>
>> >>
>> >
>> >
>>
>>
>>
>>
>
>

-- 
Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man." -- George Bernard Shaw



