I was interested to learn that transformers have now completely abandoned
the RNN aspect: instead of recurrence, they model everything as "transforms"
over whole sequences at once, closer in spirit to re-orderings than to
step-by-step state updates.
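
For concreteness, here is a minimal sketch of the self-attention step that
replaced recurrence (my own NumPy illustration with made-up dimensions, not
any particular implementation): every output position is a weighted mixture
of every input position, computed over the whole sequence in one shot, with
no hidden state carried from step to step.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model). All positions are processed in parallel;
    # nothing recurs from one position to the next.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (seq_len, seq_len)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over positions
    return w @ V                                   # each row mixes all rows

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                        # a 5-token "sequence"
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)         # (5, 8)

The attention weights amount to a soft, data-dependent reweighting of
sequence positions, which is what invites the comparison to permutations.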

That makes me wonder whether some of the theory converges on work I like
by Sergio Pissanetzky, which uses permutations of strings to derive
meaningful objects:

"Structural Emergence in Partially Ordered Sets is the Key to Intelligence"
http://sergio.pissanetzky.com/Publications/AGI2011.pdf
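
To make the analogy concrete, here is a toy sketch (entirely my own
illustration, not Pissanetzky's algorithm, and the causal constraints are
invented): enumerate the permutations of a few actions that respect a
partial causal order, and treat elements that stay adjacent in every legal
permutation as candidate "objects".

from itertools import permutations

# Invented causal constraints: (u, v) means u must occur before v.
before = {("a", "b"), ("b", "c"), ("b", "d")}
items = ["a", "b", "c", "d"]

def is_legal(seq):
    # True if seq is a linear extension of the partial order.
    pos = {x: i for i, x in enumerate(seq)}
    return all(pos[u] < pos[v] for u, v in before)

legal = [p for p in permutations(items) if is_legal(p)]

def adjacent_pairs(seq):
    return {frozenset(seq[i:i + 2]) for i in range(len(seq) - 1)}

# Crude emergence probe: pairs adjacent in *every* legal ordering.
# (Pissanetzky derives groupings by minimizing an action functional
# over permutations; this only gives the flavor of the idea.)
objects = set.intersection(*(adjacent_pairs(p) for p in legal))
print(len(legal), "legal orderings; invariant pairs:", objects)

Here the blocks {a, b} and {c, d} fall out purely from the ordering
constraints, which is roughly the sense in which permutations can "derive
meaningful objects".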

Also interesting because Pissanetzky's original motivation was refactoring
code, and one of the most impressive demonstrations to come out of GPT-3
has been the demo that expresses the "meaning" of natural-language
descriptions as JavaScript.

This suggests a sense in which transformers may actually be stumbling onto
genuine meaning representations.

-Rob

On Sat, Aug 1, 2020 at 3:45 AM Ben Goertzel <b...@goertzel.org> wrote:

> What is your justification/reasoning behind saying
>
> "However GPT-3 definitely is close-ish to AGI, many of the mechanisms
> under the illusive hood are AGI mechanisms."
>
> ?
>
> I don't see it that way at all...
