I just saw the slideshow. The gazelle image is pretty. Do you know how many 
possible English sentences can be written? You have to use partial-match 
technology. You mention in the slides that the word 'and' can be used to 
clarify the meaning behind bob and so on, oh boy, and that we can also say bob 
isA person. But you still have to teach it every semantic relation by hand 
using isA etc., plus what each sentence means, e.g. "ate pizza with bob" = 
"me and bob ate pizza", not "bob tasted so good".

Let's step back. All we can do in text is syntax and semantics, i.e. cat > ate 
(word order) or cat = dog (similarity). With that you can do: cat = dog ... 
dog ate ... therefore cat ate (something you didn't know before; see the 
sketch below). All the questions above, like who ate the pizza, whether bob 
was in the pizza as a food item, whether a fork was in my hand, who 'they' 
refers to, whether these words are rearrangeable, where the dragon landed, who 
the dragon is, and why rain falls, are syntax and semantics. GPT-2 does this. 
BERT does this. Transformers (the architecture behind GPT-2 and BERT) do this. 
You are looking for a match in memory to answer the question as accurately as 
you can. And I imagine if GPT-2 did deep, nested commonsense reasoning it 
could predict even better! Currently the best data compressors are amazing 
but still can't talk like GPT-2, so they should be improvable!
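To make the "cat = dog, dog ate, therefore cat ate" step concrete, here is a 
rough sketch: if two words are treated as similar (semantics) and one of them 
appears in a stored word-order fact (syntax), substitute to derive a fact you 
did not have before. Everything here (FACTS, SIMILAR, infer) is illustrative 
and assumed, not how GPT-2 or BERT actually represent anything.

    # Word-order facts already seen in memory, as (subject, verb) pairs.
    FACTS = {("dog", "ate")}

    # Word pairs treated as meaning roughly the same thing.
    SIMILAR = {("cat", "dog")}

    def infer(facts, similar):
        """Derive new facts by swapping similar words into known facts."""
        derived = set(facts)
        for a, b in similar:
            for subj, verb in facts:
                if subj == b:
                    derived.add((a, verb))  # dog ate + cat=dog -> cat ate
                if subj == a:
                    derived.add((b, verb))
        return derived

    print(infer(FACTS, SIMILAR))  # {('dog', 'ate'), ('cat', 'ate')}

A transformer does something far softer than this table lookup, blending many 
partial matches at once, but the input/output relation is the same flavor: 
answer from similar things already in memory.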