On Wed, Jun 19, 2024, 10:52 AM Terren Suydam <terren.suy...@gmail.com>
wrote:

>
> On Tue, Jun 18, 2024 at 2:04 PM John Clark <johnkcl...@gmail.com> wrote:
>
>>
>>
>> On Tue, Jun 18, 2024 at 11:23 AM Terren Suydam <terren.suy...@gmail.com>
>> wrote:
>>
>>
>>> * LLMs are not AGI (yet), but it's hard to ignore they're (sometimes
>>> astonishingly) competent at answering multi-modal questions across most, if
>>> not all domains of human knowledge*
>>>
>>
>> I agree.
>>
>>
>>
>>
>>> *>  Here's probably the best result
>>> <https://chatgpt.com/share/b4403435-e071-46ef-b1ce-ac1def2ce501> but I'm
>>> not sure there's anything actually novel there. Despite that, it's still
>>> quite impressive, and to John's point, it's clearly an intelligent
>>> response, even if there are aspects of "cheating off of humans" in it. *
>>>
>>
>> Concerning the cheating-off-humans question: Isaac Newton was probably
>> the closest the human race ever got to producing a transcendental genius,
>> and nobody ever accused him of being overly modest, but even Newton
>> admitted that if he had seen further than others it was only because he
>> "stood on the shoulders of giants". Human geniuses don't start from absolute
>> zero, they expand on work done by others. Regardless of how brilliant an
>> AI's answer is, if somebody is bound and determined to belittle the AI they
>> can always find **something** in the training data that has some
>> relationship to the answer, however tenuous. Even if the AI wrote a sonnet
>> more beautiful than anything of Shakespeare's, they can still claim that
>> the sonnet, like everything in literature, concerns objects (and people)
>> and how they move, and there are certainly things in its training data
>> about the arrangement of matter and energy in spacetime, in fact EVERYTHING
>> in its training data is about the arrangement of matter and energy in
>> spacetime. And therefore writing a beautiful sonnet was not a creative act
>> but was just the result of "mere memorization".
>> John K Clark    See what's on my new list at  Extropolis
>> <https://groups.google.com/g/extropolis>
>>
>
>
> I would never claim that the works of transcendental geniuses like Newton &
> Einstein, or for that matter, Picasso & Dali, did not derive from earlier
> works. What I'm saying is that *they* did something, which I doubt very much
> current LLMs can do: break ground into novel territory in whatever domain.
> I'm not trying to belittle current LLMs, but it seems important to
> understand their limitations especially because nobody, not even their
> creators, seems to really understand why they're as good as they are. And
> just as importantly, why they're as bad as they are at some things given
> how smart they are in other ways.
>


There is a very close relationship between compression and prediction. For
any signal one tries to compress, the better you can predict it, the better
you can compress it.

Take the example of predicting the next letter in a text. If you had an
algorithm, algorithm B, that was 90% accurate, then you would only need to
record the errors (which happen 1 character in 10). If another algorithm,
algorithm A, was 99% accurate, then you would only need to record the errors
that happen 1 time in 100.

So the better one can predict, the better one can compress.

But this relationship works both ways: if you can compress very well, you
can predict very well.
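To make the arithmetic concrete, here is a rough Python sketch (my own
illustration; the 27-symbol alphabet and the simple hit/miss coding scheme
are assumptions, not anything from a real compressor): the better the
predictor, the fewer bits per symbol an ideal coder needs.

import math

def bits_per_symbol(accuracy: float, alphabet_size: int = 27) -> float:
    """Approximate bits needed per symbol when a predictor guesses the
    next symbol correctly with the given accuracy.

    We pay the binary entropy of the hit/miss flag, and on a miss we
    also pay log2(alphabet_size - 1) bits to say which symbol it was.
    """
    p = accuracy
    h_flag = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    correction = (1 - p) * math.log2(alphabet_size - 1)
    return h_flag + correction

for acc in (0.90, 0.99):
    print(f"accuracy {acc:.0%}: ~{bits_per_symbol(acc):.3f} bits/symbol "
          f"vs {math.log2(27):.3f} bits/symbol with no predictor")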

Consider the ideal compression of a huge text corpus C, consisting of every
published book.

There is some smallest program, program X, that when executed outputs this
massive training corpus and halts.

In order for X to be the smallest representation possible, this ideal
compression of C must take advantage of every pattern that exists in the
text, and in every data set contained in C. If, for example, there was a
book that included observations of planetary motion, then the ideal
compression in program X must contain the laws of planetary motion that
describe (and compress) those measurements.

The ideal compression of C would even include laws of physics that remain
unknown to today's physicists and theorists, so long as the data necessary
to account for the relevant observations are recorded in some book within
the corpus. So if, for example, the corpus only had books up to 1904, before
Einstein's relativity, but it included books describing observations that
could best be accounted for by relativity, then the ideal compression of C
must include Einstein's theory of relativity.
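To see what "containing the laws" means in compression terms, here is a toy
sketch (entirely my own illustration, with a made-up circular orbit rather
than real ephemeris data): a long table of planetary positions generated by
an orbital rule can be replaced by the rule plus a handful of parameters.

import math

# Hypothetical "observations": n (x, y) positions of a planet on a circular
# orbit of radius r (km) with angular velocity w (radians/day), one per day.
r, w, n = 1.496e8, 2 * math.pi / 365.25, 10_000
observations = [(r * math.cos(w * t), r * math.sin(w * t)) for t in range(n)]

# Naive storage: two 8-byte floats per daily sample.
raw_bytes = n * 2 * 8

# "Law-based" storage: the generating rule plus its three parameters.
rule = "(r*cos(w*t), r*sin(w*t)) for t in range(n)"
law_bytes = len(rule) + 3 * 8

print(f"raw table: {raw_bytes} bytes; law + parameters: ~{law_bytes} bytes")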

Now for the interesting part:
It is possible to discover program X by brute force with a simple program:
execute all programs shorter than C, and find the shortest one that outputs
C. That process would find a program that has ideal models of our physical
reality, including laws not yet discovered.
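Here is a toy sketch of that brute-force search (again, purely my own
illustration: a real search needs a step limit or dovetailing because of the
halting problem, and it is astronomically slow; this made-up three-instruction
language sidesteps both so the demo actually runs and finds a genuinely
shorter program).

from itertools import product

# Toy language in which every program halts:
#   'a', 'b' -> append that letter to the output
#   'd'      -> double the output produced so far
ALPHABET = "abd"

def run(program: str) -> str:
    """Execute a program in the toy language and return its output."""
    out = ""
    for op in program:
        out = out + out if op == "d" else out + op
    return out

def shortest_program(corpus: str) -> str:
    """Enumerate programs in order of length; return the first one that
    reproduces the corpus exactly (a stand-in for 'program X')."""
    for length in range(1, len(corpus) + 1):
        for candidate in product(ALPHABET, repeat=length):
            program = "".join(candidate)
            if run(program) == corpus:
                return program
    return corpus  # worst case: no compression found

C = "abababab"
X = shortest_program(C)  # finds "abdd": 4 symbols that regenerate 8
print(f"corpus {C!r} ({len(C)} symbols) <- program {X!r} ({len(X)} symbols)")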

True, it is not computationally feasible to do the brute-force search in
this way, but there are heuristics we can use to find better ways of
compressing the datasets we do have. In fact, this is what I see us doing
when we train models to predict text more accurately (that is, to compress
it better): it is the same thing as making them better understand the
processes that underlie those works, namely the universe, and the human
brains that operate within that universe, observe it, and write about it.
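This is easy to make quantitative. A small sketch (my own illustration; the
bigram character model is just a stand-in for a trained LLM): the compressed
size a predictive model implies for a text is the sum of -log2 p(next symbol),
which an arithmetic coder can achieve to within a couple of bits, so lowering
prediction loss is literally shrinking the compressed size.

import math
from collections import Counter, defaultdict

text = "the quick brown fox jumps over the lazy dog " * 50

# Stand-in for a trained model: bigram character probabilities with
# add-one smoothing (a real LLM just makes these probabilities sharper).
alphabet = sorted(set(text))
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def prob(prev: str, nxt: str) -> float:
    c = counts[prev]
    return (c[nxt] + 1) / (sum(c.values()) + len(alphabet))

# Implied compressed size in bits = the model's cross-entropy on the text.
bits = sum(-math.log2(prob(p, n)) for p, n in zip(text, text[1:]))
print(f"raw: {8 * len(text)} bits, model-implied: {bits:.0f} bits "
      f"({bits / (len(text) - 1):.2f} bits/char)")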

As to the limitations of LLMs, they have a finite and fixed depth. This
means they are only capable of computing functions they can complete within
that fixed budget (unless you augment them with a loop and memory). This is
like considering the limits of a human brain that was only given, say, 10
seconds to solve any problem. It is why an LLM fails at multiplying long
numbers, which we might consider easy for a computer: with a fixed-depth
circuit there are only so many times you can shift and add, and thus only
so large a multiplicand you can handle.

Jason



> Terren
>
