Twenkid,

Wow. Lots of words. I don't mind detail, but words are slippery.

If you actually want to do stuff, it's better to keep the words to a
minimum and start with concrete examples. At least until some minimum
of consensus is reached.

Trying to focus on your concrete questions...

On Sat, Jun 22, 2024 at 10:16 AM twenkid <twen...@gmail.com> wrote:
>
> Regarding the contradictions - I guess you mean ambiguity

Yeah, I guess so. I was thinking more abstractly in terms of
grammatical classes. You can never learn grammar, because any rule you
make always ends up being violated: AB->C, **except** in certain
cases... etc. But probably ambiguity at the meaning level resolves to
the same kinds of issues, sure.
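
To make it concrete with a toy sketch (my own made-up illustration; the
"add -ed" rule and the handful of verbs are just stand-ins): induce a
past-tense rule from a few regular verbs, and the data contradicts it
the moment an irregular one shows up.

    # Toy sketch: a learned rule and the observations that violate it.
    def rule(verb):              # the induced rule: AB -> C, so to speak
        return verb + "ed"

    observed = {"walk": "walked", "jump": "jumped",   # fit the rule
                "go": "went", "sing": "sang"}         # the "except" cases

    for verb, past in observed.items():
        predicted = rule(verb)
        verdict = "consistent" if predicted == past else "CONTRADICTS the rule"
        print(f"{verb}: rule says {predicted!r}, data says {past!r} -> {verdict}")

That "except..." list is the contradiction I mean: whatever rule you
fix, the data keeps producing cases it doesn't cover.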

> * BTW, what is "not to contradict"? How would it look in a particular
> case, for example?

Oh, I suppose any formal language is an example of a system that
doesn't contradict. Programming languages... Maths, once axioms are
fixed, would be another example of non-contradiction (by definition?).
Of course, the thing with maths is that different sets of possible
axioms contradict, and that contradiction between possible
axiomatizations is the whole deal. (Euclidean and non-Euclidean
geometry, say: each is consistent on its own, but the parallel
postulate and its negation can't both hold.)

> What do you mean by "the language problem"?

"Grammar", text compression, text prediction...

> The language models lead to such an advance: compared to what else, other 
> (non-language?) models.

Advance, compared to everything before transformers.

In terms of company market cap, if you want to quibble.

> Rob: >Do you have any comments on that idea, that patterns of meaning which 
> can be learned contradict, and so have to be generated in real time?
>
> I am not sure about the proper interpretation of your use of "to contradict";
> words/texts have multiple meanings, and language and text are lower resolution
> than thought. If they are supposed to represent reality "exactly", lower-level,
> higher-precision representations are needed as well.

In terms of my analogy to maths, this reads to me like saying: the
fact that there are multiple axiomatizations for maths means maths
axioms are somehow "lower resolution", and the solution is to have
"higher precision representations" for maths... :-b

If you can appreciate how nonsensical that analysis would be within
the context of maths, then you may get a read on what it sounds like
to me from the way I'm looking at language. Instead, I think different
grammaticalizations of language are like different axiomatizations of
maths (inherently random and infinite?).

You're not the only one doing that, of course. Just the other day
LeCun was tweeting something comparable in response to a study which
revealed transformer world models seem to... contradict! Contradict?!
Who'd've thought it?! The more resolution you get, the less coverage
you get. Wow. Surprise. Gee, that must mean that we need to find a
"higher precision representation" somewhere else!

LeCun's post here:

https://x.com/ylecun/status/1803677519314407752
