Hi John,

I probably should have read this thread earlier.

I agree with your insight. I have been pushing the idea that cognition, or
at least natural language grammar specifically, is lossy for some time now.
Matt Mahoney may remember me arguing it in connection with the Hutter Prize
for compressing language, when that came out.
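
For concreteness, here is a minimal sketch of the distinction in Python,
using zlib for the lossless round trip; the normalising step is my own toy
example and has nothing to do with any actual prize entry:

    import zlib

    text = "The dog bit the man. The man bit the dog."

    # Lossless: decompress(compress(x)) recovers x bit for bit.
    packed = zlib.compress(text.encode("utf-8"))
    assert zlib.decompress(packed).decode("utf-8") == text

    # Lossy: normalise away word order and repetition first; the
    # original sentence is then unrecoverable from what was stored.
    normalised = " ".join(sorted(set(text.lower().split())))
    packed_lossy = zlib.compress(normalised.encode("utf-8"))
    assert zlib.decompress(packed_lossy).decode("utf-8") != text

My claim about grammar is that it behaves like the second case: it keeps a
summary and throws the rest away.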

And yes, this relates to the idea that "true and false don't purely exist
as crisp booleans", which has actually become a big theme in philosophy,
and is tearing society apart right now.

But I suggest a rebrand. More recently I've started expressing it not so
much as the idea that cognition is lossy, but rather that cognition is an
expansion.

If you think of cognition as an expansion, I think you'll get most of the
lossy-compression insight you are seeing. In short, if cognition is an
expansion, details matter; a toy contrast is sketched below.
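
The toy contrast, purely illustrative and not my vector parser: compression
maps many sequences onto one abstraction, while expansion maps one sequence
onto the many groupings it supports.

    def compress(tokens):
        # Many-to-one: collapse to an unordered bag of word types.
        # "The dog bit the man" and "The man bit the dog" collide here.
        return frozenset(t.lower() for t in tokens)

    def expand(tokens):
        # One-to-many: every contiguous grouping the sequence supports.
        n = len(tokens)
        return [tuple(tokens[i:j]) for i in range(n) for j in range(i + 1, n + 1)]

    sent = ["The", "dog", "bit", "the", "man"]
    print(compress(sent))     # word order is gone: detail lost
    print(len(expand(sent)))  # 15 groupings: detail multiplied, not lost

Under compression the two sentences in the first comment become the same
object; under expansion every detail of order survives.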

There is now a handful of work which I think you can interpret this way:

Tomas Mikolov - "We can design systems where complexity seems to be
growing."
Bob Coecke - "Togetherness", plus a thread of quantum cognition emphasizing
the subjectivity of categories. That is maybe not quite expansion, but it
shares the rejection of abstraction.

And of course I have such a model myself, which I presented most recently at AGI-21:

Vector Parser - Cognition a compression or expansion of the world? - AGI-21
Contributed Talks
https://youtu.be/0FmOblTl26Q

Even OpenCog has embraced this idea to an extent. As I cite in my talk:

Vepstas, “Mereology”, 2020: "In the remaining chapters, the sheaf
construction will be used as a tool to create A(G)I representations of
reality. Whether the constructed network is an accurate representation of
reality is undecidable, and this is true even in a narrow, formal, sense."

Technically, to avoid arguments about what is lossless and what is not, I
suggest you focus on the decidability result.
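
The shape of such a decidability argument is the familiar diagonal one. As
a reminder, here is the canonical case, the halting problem, sketched in
Python; this is the textbook construction, not Vepstas's specific proof:

    def halts(f, x):
        # Hypothetical decider: True iff f(x) terminates. Assumed to
        # exist only for the sake of contradiction; it cannot be built.
        raise NotImplementedError

    def diagonal(f):
        # Do the opposite of whatever the decider predicts for f(f).
        if halts(f, f):
            while True:      # predicted to halt, so loop forever
                pass
        return               # predicted to loop, so halt immediately

    # diagonal(diagonal) halts if and only if it does not halt,
    # so no implementation of halts() can exist.

The payoff for your argument: "is this representation lossless?" can be a
question of that kind, with no general decision procedure, which sidesteps
case-by-case fights over what counts as lossless.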

Personally, as I describe in my talk, I think it simplifies AI
tremendously. It is roughly comparable to taking all the stuff we have now
and turning it upside down, at which point it ceases to be a lot of
confusing detail and becomes instead some rather nice, compact, productive
principles.

That is nice and inclusive, because it means that nothing which has been
done in AI up to this point is really wrong; we have just been interpreting
it wrong. We can use most of it, and don't need to do a lot of work
starting from scratch.

But it does mean we need to change the way we think about the problem.

-Rob

On Thu, Nov 4, 2021 at 11:50 PM John Rose <johnr...@polyplexic.com> wrote:

> While performing thought experiments on an AGI model I realized that there
> is no purely lossless compression. Something is always lost. For most
> practical purposes yes lossless exists. This might sound trivially obvious
> and non-obvious but it does impact the theory in the model.
>
> In other words, I could not imagine any purely lossless compression, it
> might physically exist I just can't imagine it as I'm not a physicist. So
> maybe it does exist? or perhaps we just prefer it to be so... I suppose
> it's the same as saying true and false don't purely exist as crisp
> booleans. And, exists doesn’t purely exist…so everything is relative. But
> the implications are enormous when dealing with chaotic and complex systems
> models. Thus it being trivially obvious and trivially non-obvious or,
> non-trivially non-obvious... or...
>
> Net effect? Zero. Oh wait zero doesn't fully exist now does it. WTH?
>
> https://www.youtube.com/watch?v=JwZwkk7q25I

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5ff6237e11d945fb-M830a1567208dc742087a400d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
