Sep 14

In all the fuss and uproar about AI that's around us today, there are
probably only a handful of folks who remember that there was once another
way researchers thought about and approached AI. Yet even to me, a
one-time dabbler in that world, that other way always seemed more
"right", in some sense, than what underlies today's ChatGPT and Bard
systems.

Doug Lenat was one of the prominent figures of that other way, working on
his Cyc project for nearly 40 years - and early on, I actually knew him
slightly. I can't say he was a friend, or even a colleague. But when he
died on August 31, I found myself lost in thought and memories of another
time.

Here's my remembrance of Lenat and that time - my Mint column for September
8:
https://www.livemint.com/opinion/columns/doug-lenat-and-the-search-for-ai-11694110715940.html

Your reactions welcome.

yours,
dilip

---

Doug Lenat and the search for AI


My only contribution to the Cyc project was vanishingly small, and some 32
years on, I have no idea if it persists. It was a piece of code in the Lisp
programming language. It was set in motion when you clicked on an object on
screen and moved it around using your mouse. In short, it made it easier to
visualize that motion.
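
If you're curious, here's a minimal sketch of the general idea in Common
Lisp. The mouse and drawing calls in it are stand-ins I've invented for
this column, not any real toolkit's API; take it as the shape of the thing
rather than the thing itself.

;; A minimal sketch of the idea. The calls MOUSE-BUTTON-DOWN-P,
;; MOUSE-POSITION, DRAW-OUTLINE, ERASE-OUTLINE and MOVE-OBJECT are
;; invented stand-ins for whatever graphics toolkit was actually used;
;; only the shape of the technique is meant seriously.
(defun drag-with-outline (object)
  "While the mouse button is held, redraw a light outline of OBJECT
at the pointer, so the user sees where it will land."
  (let ((last-pos nil))
    (loop for pos = (mouse-position)
          while (mouse-button-down-p)
          unless (equal pos last-pos)
            do (when last-pos
                 (erase-outline object last-pos)) ; wipe the stale ghost
               (draw-outline object pos)          ; ghost at the new spot
               (setf last-pos pos))
    ;; Button released: clean up the ghost and commit the move.
    (when last-pos
      (erase-outline object last-pos)
      (move-object object last-pos))))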

I had written code like that before, so I knew how to write it here. Now I
had to show it to the Cyc guys. I walked across the atrium to the office of
the man whose brainchild Cyc was, Doug Lenat. He and a colleague,
Ramanathan Guha, came back to my office wearing looks of serious
scepticism. I barely knew them and I wasn't part of the Cyc team, so I could
almost hear the question buzzing in their minds: "What's this dude going to
show us about our own effort that we don't already know?"

But they were charmed by my little utility. To their credit, they looked at
me with newfound respect, thanked me and said they would incorporate it
into Cyc. For the next several months, until I quit the company we all
worked at, MCC, I'd get a cheery "Hi" from them every time we crossed paths.

It's been three decades, and I have lost touch with Lisp, MCC, Cyc, Guha
and Lenat. Still, I felt a distinct pang on hearing that Doug Lenat died on
August 31, at nearly 73.

Artificial Intelligence is all the rage these days, of course, astonishing
people, raising worries, showing up everywhere. For just one example: as I
write these words, I'm occasionally checking highlights from the ongoing US
Open tennis tournament. To my surprise, these clips are embellished with
commentary that's clearly AI-generated. I'll say this: it's only about
adequate. There are giveaways that the speaker and the words aren't
actually human. First, the slightly wooden voice. Second, the slightly
awkward turns of phrase - like "at the crucial moment, Sinner drops the
match point", or "Sinner loses the first set after Zverev's electrifying
ace." No tennis observer speaks like this.

This strain of AI (usually called "generative") builds on so-called Large
Language Models: statistical models trained on vast amounts of text, which
learn patterns in how language is constructed and use them to predict,
word by word, what comes next. As the tennis commentary and many other
examples show,
these LLMs do a pretty good job of mimicking humans, of showing us what
looks very much like intelligence. Until they don't - for which the tennis
commentary, again, is itself an example. The reason we sometimes find
our brows furrowing while reading or listening to something produced by
ChatGPT is that while it can look reasonably convincing and persuasive, it
often is not quite right.
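
To make that word-by-word prediction concrete, here's a toy in Lisp again.
A real LLM learns billions of numerical parameters; this merely counts
which word follows which in some training text and parrots the most
frequent choice. Every name in it is my own invention, for illustration
only.

;; Toy "predict the next word" model: nothing like a real LLM's neural
;; network, but it shows the statistical flavour of the approach.
(defun build-bigrams (words)
  "Map each word to a table counting the words that followed it."
  (let ((table (make-hash-table :test #'equal)))
    (loop for (a b) on words while b
          do (let ((next (or (gethash a table)
                             (setf (gethash a table)
                                   (make-hash-table :test #'equal)))))
               (incf (gethash b next 0))))
    table))

(defun likeliest-next (word table)
  "Return the word that most often followed WORD in training."
  (let ((next (gethash word table)) (best nil) (best-count 0))
    (when next
      (maphash (lambda (w c) (when (> c best-count)
                               (setf best w best-count c)))
               next))
    best))

;; Example: trained on a tiny text, the model parrots its statistics.
;; (likeliest-next "the" (build-bigrams
;;   '("the" "cat" "sat" "on" "the" "cat" "and" "the" "mat")))
;; => "cat"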

Here's another example. I had this exchange with ChatGPT just now:

Me: "My bedroom has three people in it. I walk in to say hello. How many of
us are there?"

ChatGPT: "If your bedroom initially had three people in it, and then you
walk in to say hello, there would still be three people in the room. You
walking into the room does not change the number of people who were already
there."

As you see: it's a perfectly constructed answer that is also totally
wrong - the right count is four, since I am now in the room too - and one
you would never get from a human. So what happened? As Doug
Lenat and Gary Marcus explained in a recent paper ("Getting from Generative
AI to Trustworthy AI: What LLMs might learn from Cyc",
https://arxiv.org/pdf/2308.04445.pdf, 31 July 2023), ChatGPT's failure here
is in deduction. "A trustworthy AI," they write, "should be able to perform
the same types of deductions as people do, as deeply as people generally
reason."

And in fact it's not just deduction. Lenat and Marcus list 16 different
"desiderata" that they believe "a general AI which is trustworthy" must
have. Deduction is one; explanation, pro and con arguments and analogy are
three more. As you can tell, Lenat and Marcus set great store by that word
"trustworthy". For ChatGPT to be truly intelligent in a human sense, you
have to be able to trust its responses just as you would a human's.

As Lenat and Marcus write, "humans possess knowledge and reasoning
capabilities [unlike] today's generative AI."

These ideas about AI emerged from the nearly four decades that Lenat and
his team worked on Cyc - that name excerpted from the word
"encyclopaedia". Cyc builds intelligence on top of a vast store of
information too. But it is profoundly different from LLMs in the way it
approaches AI. It seeks to "explicitly articulate the tens of millions of
pieces of common sense and general models of the world that people have
[and] represent those in a form that computers can reason over mechanically
[and] quickly."

In short, human intelligence is far deeper, broader and more profound than
the AI we see today.

Still, this is not the place to tell you more about that, nor about Cyc's
innards.

Lenat and his colleagues started building Cyc in the late 1980s at the
Microelectronics and Computer Technology Corporation (MCC) in Austin. I
worked at MCC in those years, in another AI programme. There were both
tenuous links and a relatively friendly rivalry between the programmes. I
say "relatively" because Lenat also attracted his share of critics and
doubters. Look up the term "microLenat" sometime; enough said.

Yet the truth is that he was an AI pioneer in his own right. Something
about the way he approached and built Cyc was, to him, more "right" than
the ChatGPTs of today. It may seem that way to you too. After all, do you
go about your life by calling on and analyzing vast amounts of data? Or by
applying common sense to the world around you? Think about it.

In 1994, Lenat started a company, Cycorp, to continue building Cyc. It was
never a commercial success. But as Marcus remarks in a tribute, it is still
operational all these years on, and there are hardly any other AI firms
that can say the same. In their paper, Lenat and Marcus suggest that future
work in AI will need to "hybridize" the LLM and Cyc approaches.

So Cyc lives on. That's Doug Lenat's legacy. And someday, perhaps I'll find
out if my own tiny contribution lives on too.

-- 
My book with Joy Ma: "The Deoliwallahs"
Twitter: @DeathEndsFun
Death Ends Fun: http://dcubed.blogspot.com
