Boris: "AIT quantifies compression for sequences of inputs, while I define
match for comparisons among individual inputs. On this level, a match is a
lossless compression by replacing a larger comparand with its derivative
(miss), relative to the smaller comparand. In other words, a match is the
complement of a miss. That’s a deeper level of analysis, which I think
can enable a far more incremental (thus potentially scalable) approach.

--------------------------
You are talking about an evaluation method that is derived from (or built
on the scaffolding of) Bayesian Reasoning, right?



On Wed, Aug 8, 2012 at 10:21 AM, Boris Kazachenko <bori...@verizon.net> wrote:

> Jim,
>
> I agree with your focus on binary computational compression, but, as you
> said, that efficiency depends on specific operands. Even though low-power
> operations (addition) are more efficient for most data, it's the exceptions
> that matter. Most data is noise, what we care about is patterns. So, to
> improve both representational & computational compression, we need
> to quantify it for each operand & operation. And the atomic operation that
> quantifies compression is what I call comparison, which starts with an
> inverse, vs. direct arithmetic operation. This reflects our basic
> disagreement: you (& most logicians, mathematicians, & programmers) start
> from deduction / pattern projection, which is based on direct operations.
> And I think real GI must start from induction / pattern discovery, which is
> intrinsically an inverse operation.  It's pretty dumb to generate / project
> patterns at random, vs. first discovering them in the real world &
> projecting accordingly.
>
> This is how I proposed to quantify compression (pattern strength) in my
> intro, part 2:
>
>  "AIT quantifies compression for sequences of inputs, while I define
> match for comparisons among individual inputs. On this level, a match is a
> lossless compression by replacing a larger comparand with its derivative
> (miss), relative to the smaller comparand. In other words, a match is the
> complement of a miss. That’s a deeper level of analysis, which I think
> can enable a far more incremental (thus potentially scalable) approach.
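
A minimal sketch of this definition in Python, assuming non-negative integer
comparands (the names i, t, and compare are illustrative, not from the text):
match is the overlap min(i, t), miss is the signed difference, and the pair
(smaller comparand, miss) losslessly replaces the pair of originals.

    def compare(i, t):
        match = min(i, t)   # overlap between the comparands
        miss = i - t        # signed difference: the derivative
        return match, miss

    i, t = 5, 3
    match, miss = compare(i, t)
    # lossless: the larger comparand is recoverable from the smaller one + miss
    assert max(i, t) == min(i, t) + abs(miss)
    assert match + abs(miss) == max(i, t)  # match is the complement of miss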
>
> Given incremental complexity of representation, initial inputs should have
> binary resolution. However, average binary match won’t justify the cost of
> comparison, which adds a syntactic overhead of newly differentiated match &
> miss to positionally distinct inputs. Rather, these binary inputs are
> compressed by digitization: a selective carry, aggregated & then forwarded
> up the hierarchy of digits. This is analogous to hierarchical search,
> explained in the next chapter, where selected templates are compared &
> conditionally forwarded up the hierarchy of expansion levels: a “digital
> hierarchy” of a corresponding coordinate. Digitization is done on inputs
> within a shared coordinate, the resolution of which is adjusted by
> feedback. This resolution must form average integers that are large enough
> for an average match between them (a subset of their magnitude) to merit
> the above-mentioned costs of comparison.
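
One way to sketch this digitization in Python, under assumptions of my own
(a fixed number of digit levels, a plain list of binary inputs; the feedback
that adjusts resolution is omitted): each input enters the lowest digit, and
only the carry is aggregated & forwarded up the digit hierarchy.

    def digitize(bits, levels=8):
        digits = [0] * levels          # hierarchy of binary digits, low to high
        for b in bits:
            carry = b                  # new binary input enters the lowest digit
            for k in range(levels):
                total = digits[k] + carry
                digits[k] = total % 2  # residue stays at this digit
                carry = total // 2     # selective carry, forwarded up
                if not carry:
                    break
        return digits                  # integer value: sum(d << k)

    # [1, 1, 1, 0, 1] has four 1s -> digits [0, 0, 1, 0, ...] = integer 4
    print(digitize([1, 1, 1, 0, 1]))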
>
> Hence, the next order of compression is comparison across coordinates
> (initially defined with binary resolution, as before | after input). Any
> comparison is an inverse arithmetic operation of incremental power: Boolean
> AND, subtraction, division, logarithm, & so on. Binary match is a sum of
> AND: partial identity of uncompressed bit strings, & miss is !AND. Binary
> comparison is useful for digitization, but it won’t further compress the
> integers produced thereby. In general, the products of a given-power
> comparison are further compressed only by a higher-power comparison between
> them, where match is the *additive* compression.
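
Read literally, binary comparison on fixed-width bit strings could look like
the sketch below (the width parameter & masking are my assumptions): match is
the count of co-active bits, miss is the negated AND.

    def binary_compare(a, b, width=4):
        mask = (1 << width) - 1
        overlap = a & b
        match = bin(overlap).count("1")  # sum of AND: partial identity
        miss = ~overlap & mask           # !AND: the non-overlapping remainder
        return match, miss

    m, d = binary_compare(0b1100, 0b1010)  # match = 1 bit, miss = 0b0111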
>
> Thus, initial comparison between digitized integers is done by
> subtraction, which increases match by compressing miss from !AND to
> difference, in which opposite-sign bits cancel each other via carry |
> borrow. The match is increased because it is the complement of the difference,
> equal to the smaller of the comparands.
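
Numerically, this is the identity min(i, t) = (i + t - |i - t|) / 2. A quick
check that subtraction compresses the miss relative to !AND, & thereby
increases the complementary match (the example values are mine):

    i, t = 0b110, 0b101                # 6 and 5
    bit_match = i & t                  # 0b100 = 4: partial identity
    bit_miss = ~(i & t) & 0b111        # 0b011 = 3: !AND
    dif = i - t                        # 1: opposite-sign bits cancel via borrow
    match = min(i, t)                  # 5: complement of the difference
    assert match == (i + t - abs(dif)) // 2
    assert match > bit_match and abs(dif) < bit_miss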
>
> All-to-all comparison across 1D queue of pixels forms signed derivatives,
> complemented by which new inputs can losslessly & compressively replace
> older templates. At the same time, current input match determines whether
> individual derivatives are also compared (vs. aggregated), forming
> successively higher derivatives. “Atomic” comparison is between a
> single-variable input & a template (older input):
> Comparison: match = min(input, template); miss = dif = i - t, aggregated
> over the span of constant sign.
> Evaluation: match - average_match_per_average_difference_match, formed on
> the next search level.
> This evaluation is for comparing higher derivatives, vs. evaluation for
> higher-level inputs explained in part 3. It can also be increasingly
> complex, but I will need a meaningful feedback to elaborate.
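
A rough sketch of this atomic comparison over a 1D queue, with
simplifications of my own: each new input is compared to the prior one (its
template), & match / difference are aggregated over spans of constant
difference sign; the threshold ave stands in for the average match formed by
feedback on the next search level.

    def compare_1d(pixels, ave=10):
        spans = []                   # (sign, summed match, summed difference)
        t = pixels[0]                # template: older input
        M = D = 0
        sign = None
        for i in pixels[1:]:         # each new input
            m = min(i, t)            # match
            d = i - t                # miss: signed derivative
            s = d >= 0
            if sign is not None and s != sign:
                spans.append((sign, M, D))   # sign change closes the span
                M = D = 0
            M += m
            D += d
            sign = s
            t = i                    # new input replaces the template
        spans.append((sign, M, D))
        # evaluation: spans whose summed match exceeds the average merit
        # further comparison of their derivatives (higher derivatives)
        return [span for span in spans if span[1] - ave > 0]

    print(compare_1d([10, 12, 15, 14, 11, 11]))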
>
> Division further reduces difference to a ratio, which can then be reduced
> to a logarithm, & so on. Thus, complementary match is increased with the
> power of comparison. But the costs may grow even faster, for both
> operations & incremental syntax to record incidental sign, fraction,
> irrational fraction. The power of comparison is increased if current match
> plus miss predict an improvement, as indicated by higher-order comparison
> between the results from different powers of comparison. This
> meta-comparison can discover algorithms, or meta-patterns..."
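
A toy sketch of this escalation in Python; the improvement test is a
placeholder of my own devising, standing in for the higher-order
meta-comparison in the text (comparing |miss| across powers is a crude proxy,
since their syntax & costs differ):

    import math

    def compare_power(i, t, power):
        match = min(i, t)
        if power == 1:   miss = i - t          # subtraction -> difference
        elif power == 2: miss = i / t          # division -> ratio
        else:            miss = math.log(i / t)  # logarithm of the ratio
        return match, miss

    def escalate(i, t, ave_gain=1.0):
        # increase the power of comparison while the projected
        # compression of the miss exceeds an assumed average gain
        power, (match, miss) = 1, compare_power(i, t, 1)
        for p in (2, 3):
            _, next_miss = compare_power(i, t, p)
            if abs(miss) - abs(next_miss) > ave_gain:
                power, miss = p, next_miss
            else:
                break
        return power, match, miss

    print(escalate(64, 2))   # escalates to power 3: miss = log(32)
    print(escalate(5, 4))    # stays at power 1: the ratio saves too little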
>
> http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html
>
>
>


