"If you don't use deep learning (in AGI) you're missing out on the most powerful machine learning technique currently known."
Aren't Transformers already the most powerful deep learning technique? OpenAI.com suggests they are. What more could you want?

"1) learn from a massive amount of data, AND 2) accomplish the learning task quickly."

Mine can learn from massive data. My only remaining obstacle to a score of 15MB on the Hutter Prize is performance; I know how to reach 15MB, and I am working on the performance implementation currently.

"An AGI theory with tens of modules will never be implemented... no one will spend the time to read over all the specifications.... LOL"

But GPT and BERT also have tens of modules: BPE, relative embeddings, positional encoding, tokenization, GELU, a softmax layer, dropout, activation functions, biases, backpropagation, masking... The code is not small; it is 600-3,000 lines. No one can even simply explain to me how to code a basic backprop algorithm or a Transformer from scratch, so I think it is the other way around, hehe. Look at 3Blue1Brown's video on how backpropagation works: it is complex, and I still can't code it after watching it five times over the last three years. It's not funny.

"I don't see a clear exposition of your theory so it's difficult for me to comment on it..."

I assume you do know how my AI works, and instead don't know how to make the """HMMs""" efficient. My theory of how the AI works is solid. My theory for an efficient implementation may be weaker, but here is the basic idea: keep the algorithm simple and directly handle the features that are most common. So far this seems very plausible; we will see as I try the next part soon. In the worst case my AI will be as complex and as inefficient, but more likely the code will be simpler.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Tb5526c8a9151713b-M6ef89b3136945707160a0de2
Delivery options: https://agi.topicbox.com/groups/agi/subscription
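On the backprop point above: a from-scratch backward pass for a tiny fully-connected network really does fit in a few dozen lines. Here is a minimal sketch in plain Python (the network shape, learning rate, and XOR task are my own illustrative choices, not anything from this thread):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: XOR, the classic non-linearly-separable example.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 8  # hidden units (arbitrary illustrative choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input -> hidden
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]                      # hidden -> output
b2 = 0.0
lr = 0.5

def forward(x):
    # Hidden layer then output, both sigmoid.
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def train_epoch():
    global b2
    total = 0.0
    for x, y in data:
        h, o = forward(x)
        total += (o - y) ** 2
        # Backward pass: chain rule, one layer at a time.
        d_o = 2 * (o - y) * o * (1 - o)                    # dLoss/d(output pre-activation)
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(H)]  # propagated to hidden layer
        # Per-example gradient-descent update.
        for j in range(H):
            w2[j] -= lr * d_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
            b1[j] -= lr * d_h[j]
        b2 -= lr * d_o
    return total / len(data)

first = train_epoch()
for _ in range(5000):
    last = train_epoch()
print("mean squared error: first epoch", round(first, 4), "-> last epoch", round(last, 4))
```

The whole trick is that each layer's error signal is the next layer's error signal times the connecting weights, times the local activation derivative; everything else is bookkeeping.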