n-ary state computers that represent numbers in base n constitute a
lossless compression of any unary number (sticks or tally marks, the
'natural' representation of numbers) within the range that the
representation can express (which means I can't casually work that
range out in a few minutes).
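
As a rough sketch of the size difference (a toy illustration only, the
numbers below are made up for the example): a number N written in unary
takes N marks, while written in base n it takes roughly log_n(N) digits.

    # Compare the length of a unary tally with the length of the same
    # number written in base n.
    def unary_len(N):
        return N                    # one mark per unit

    def nary_len(N, n):
        digits = 0
        while N > 0:
            N //= n
            digits += 1
        return max(digits, 1)       # zero still needs one digit

    N = 1000
    print(unary_len(N))             # 1000 marks
    print(nary_len(N, 2))           # 10 binary digits
    print(nary_len(N, 10))          # 4 decimal digits
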
Jim Bromer

On Sat, Oct 6, 2018 at 9:24 PM Matt Mahoney via AGI
<agi@agi.topicbox.com> wrote:
>
> I understand the desire to understand what an AGI knows. But that makes you 
> smarter than the AGI. I don't think you want that.
>
> A neural network learner compresses its training data lossily. It is lossy
> because the information content of the training data can exceed the neural
> network's memory capacity (as it should for any learner). It then compresses
> the remainder effectively by storing it as prediction errors. Learning simply
> means making whatever adjustments reduce the error.
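
A minimal sketch of that predictive-coding idea, using a made-up
one-parameter predictor and toy data (nothing here is from Matt's post):
the model predicts each value from the previous one, only the prediction
errors would need to be stored, and 'learning' is any adjustment of the
weight that shrinks those errors.

    import numpy as np

    # Toy data: a roughly linear sequence.
    data = np.array([1.0, 2.1, 3.0, 4.2, 4.9, 6.1, 7.0, 8.2])

    w = 0.5                              # predictor: x[t] is roughly w * x[t-1]
    for step in range(200):
        pred = w * data[:-1]             # predict each value from the previous one
        err = data[1:] - pred            # prediction errors (the part left to store)
        grad = -2.0 * np.mean(err * data[:-1])
        w -= 0.01 * grad                 # adjust in whatever direction reduces error

    residuals = data[1:] - w * data[:-1]
    print("learned w:", w)
    print("mean squared residual:", np.mean(residuals ** 2))
    # A lossless coder would store data[0] plus these small residuals
    # instead of the raw values.
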
>
> On Fri, Oct 5, 2018, 10:29 PM Ben Goertzel <b...@goertzel.org> wrote:
>>
>> Jim,
>>
>> If you look at how lossless compression works, e.g. lossless text
>> compression, it is mostly based on predictive probability models ...
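
To make the link between prediction and lossless compression concrete
(my own toy illustration, not from Ben's message): given a model that
assigns probability p to the next symbol, an entropy coder can store
that symbol in about -log2(p) bits, so better prediction means a
shorter encoding.

    import math
    from collections import Counter

    text = "the cat sat on the mat"

    # Deliberately crude predictive model: order-0 character frequencies.
    counts = Counter(text)
    total = len(text)

    bits = 0.0
    for ch in text:
        p = counts[ch] / total     # model's predicted probability of this character
        bits += -math.log2(p)      # ideal code length under that prediction

    print(f"{bits:.1f} bits vs {8 * len(text)} raw bits")
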
>>
>> If you have an opaque predictive model of a body of text, e.g. a deep
>> NN, then it's hard to manipulate the internals of the model ...
>>
>> OTOH if you have a predictive model that is explicitly represented as
>> (say) a probabilistic logic program, then it's easier to manipulate
>> the internals of the model...
>>
>> So I think actually "operating on compressed versions of data" is
>> roughly equivalent to "producing highly accurate probabilistic models
>> that have transparent internal semantics"
>>
>> Which is important for AGI for a lot of reasons
>>
>> -- Ben
>> On Sat, Oct 6, 2018 at 5:05 AM Jim Bromer via AGI <agi@agi.topicbox.com> 
>> wrote:
>> >
>> > A good goal for a next generation compression system is to allow
>> > functional transformations to operate on some compressed data without
>> > needing to decompress it first. (I forgot what this is called, but
>> > there is a Wikipedia entry on something similar in cryptography.)
>> > This is how multiplication works, by the way.
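
As a toy illustration of operating on compressed data without
decompressing it (the cryptographic analogue alluded to above is
presumably homomorphic encryption; the encoding and transformations
here are made up for the example): with run-length encoding, some
transformations, such as scaling every value, can be applied directly
to the (value, count) pairs.

    # Toy run-length encoding: a list of (value, count) pairs.
    def rle_encode(xs):
        out = []
        for x in xs:
            if out and out[-1][0] == x:
                out[-1] = (x, out[-1][1] + 1)
            else:
                out.append((x, 1))
        return out

    def rle_decode(pairs):
        return [v for v, c in pairs for _ in range(c)]

    # A transformation applied to the compressed form, never touching
    # the expanded list.
    def scale(pairs, k):
        return [(v * k, c) for v, c in pairs]

    data = [3, 3, 3, 7, 7, 1, 1, 1, 1]
    enc = rle_encode(data)              # [(3, 3), (7, 2), (1, 4)]
    print(rle_decode(scale(enc, 10)))   # same result as scaling the raw data
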
>> >
>> > If a 'dynamic compression' were performed in stages, using 'components'
>> > that had certain abstract attributes usable in computations carried out
>> > over multiple passes, then it might be possible to postpone a complete
>> > analysis or computation until the data was presented in a more abstract
>> > format (relative to the given problem). The goal is to find a way to
>> > make each pass effective but substantially less complicated. The idea is
>> > that the data 'components' (the data produced by a previous pass) might
>> > have certain general abstract properties, and subsequent passes might
>> > then operate on narrower classes. (This is how many algorithms work, now
>> > that I think about it, but they are not described and defined using the
>> > concept of compression abstractions as a fundamental principle.)
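
A loose two-pass sketch of that staged idea (my own reading of it, with
invented 'components' and attributes): a first pass turns raw data into
components tagged with a coarse abstract attribute, and a second pass
operates only on the narrower class picked out by that attribute.

    from collections import namedtuple

    Component = namedtuple("Component", ["kind", "text"])  # kind = abstract attribute

    def pass_one(raw):
        # Classify each whitespace-separated chunk by a coarse attribute.
        return [Component("number" if tok.isdigit() else "word", tok)
                for tok in raw.split()]

    def pass_two(comps):
        # Operate only on the narrower class identified by pass one.
        return sum(int(c.text) for c in comps if c.kind == "number")

    print(pass_two(pass_one("order 12 items and 30 boxes")))   # 42
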
>> > Jim Bromer
>> 
>> 
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>> 
>> "The dewdrop world / Is the dewdrop world / And yet, and yet …" --
>> Kobayashi Issa
>