On Saturday, March 23rd, 2024 at 9:57 AM, Galen Seitz <gal...@seitzassoc.com> 
wrote:

> On 3/23/24 00:55, Ben Koenig wrote:
> 
> > Ideally, the whole point of an LLM is to create a program that can
> > interpret human language so that we can interact with software in a
> > "human" way. It shouldn't matter if the data is garbage, as long as
> > the result is a program that understands English.
> > 
> > Once you have that, you can point it to a random pile (list of
> > websites, source code repository) of information that you know is
> > mostly decent, and it will go through the long and painful process of
> > dissecting it for you. This only works if the LLM is capable of doing
> > this without mixing in all of its childhood memories. You know, like
> > we do - we all spent a lot of time engaging in really stupid
> > conversations that were only intended to practice listening and
> > speaking. As adults we throw the subject matter from that phase away,
> > retaining the grammar and sentence structure. We don't really care
> > about that time a fox jumped over a lazy dog.
> 
> 
> I'm missing JJJ already. He would definitely have something to say
> about this, but I have no idea what it would be. And that's not meant
> as a criticism Ben, just that JJJ always provided an interesting
> perspective when it came to matters of language.
> 
> galen
> --
> Galen Seitz
> gal...@seitzassoc.com


Yeah, linguistics was his thing. Right now "AI" is really just about the 
interpretation and manipulation of human language. It's an area of computer 
science people have already been working on for decades, and I'm sure JJJ would 
have seen some really fun edge cases when it comes to the stuff ChatGPT is 
saying. 

-Ben
