Stefan has 2 big problems: #1 Matt is very educated, #2 I also agree that hand-crafted rules are the wrong path. The experts are against you, Stefan! :-) Give it up. Unsupervised Learning is the ultimate way to soak up massive data and to power the AI shortcut-finding ability I spoke about, by using analogies from all domains. Yes, we must hardwire the AI a bit, but afterwards we just talk to it; there is no hardwiring the syntax or semantics of each memory, since the world does that on its own! Oof!

I already explained to you above, Stefan, how this works:
"The cat ate food on the lawn outside"
Cat ate what?
Answers: Food/lawn/outside
"The cat ate food on the lawn outside"
You're just looking for the best matches in the story heard so far, using probability. GPT-2 is all about this, and so are seq2seq, word2vec, BERT, and data compressors. They GRAB the next word to predict by finding in memory a match (or matches) for the recently seen text/data; the end of that match holds the word/letter that follows it, i.e. the answer. Frequency is used to choose among the entailed predictions.
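
To make that concrete, here is a minimal sketch in Python of that literal matching idea. It is my own toy illustration, not how GPT-2 or BERT actually work inside (they learn soft, vector-based matches rather than exact string lookups), and the function name predict_next and the toy story are hypothetical names I made up: scan the story heard so far for the longest match with the recent context, then let frequency pick the continuation.

from collections import Counter

def predict_next(story_words, context, max_order=3):
    # Toy sketch (hypothetical, not GPT-2's real mechanism):
    # find in memory a match for the recently seen text,
    # trying the longest context first, and take the word
    # sitting at the end of each matched piece.
    for order in range(min(max_order, len(context)), 0, -1):
        suffix = context[-order:]
        continuations = Counter(
            story_words[i + order]
            for i in range(len(story_words) - order)
            if story_words[i:i + order] == suffix
        )
        if continuations:
            # Frequency chooses which matched prediction wins.
            return continuations.most_common(1)[0][0]
    return None  # nothing in memory matched

story = "the cat ate food on the lawn outside . later the cat ate food again".split()
print(predict_next(story, "the cat ate".split()))  # -> food

With only a tiny story it falls back to shorter and shorter contexts when the full one has never been seen, which is roughly the frequency/entailment picture described above.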