On Tue, Nov 21, 2023, 8:45 PM James Bowery <jabow...@gmail.com> wrote:

> Please elucidate:
>
>
> Ideally a neural network should use one parameter per bit of compressed
> training data, or 1 billion
>
Approximately, from information theory. A Hopfield associative memory has a
capacity of about 0.3 bits per parameter.
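
A minimal numpy sketch (mine, not from the thread) of where a figure like this comes from: store random ±1 patterns in a Hopfield net with the Hebbian outer-product rule, load it near the classic ~0.138·n pattern capacity, and count bits stored per independent weight. The sizes and the single-step recall check are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                        # neurons; ~n^2/2 independent symmetric weights
p = 26                         # patterns, close to the ~0.138*n capacity
patterns = rng.choice([-1, 1], size=(p, n))

# Hebbian outer-product learning, zero self-connections
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0)

# One synchronous update starting from each stored pattern
recalled = np.sign(W @ patterns.T).T
accuracy = np.mean(recalled == patterns)   # fraction of bits recalled correctly

# Each pattern carries n bits; divide by the n*(n-1)/2 independent weights
bits_per_param = p * n / (n * (n - 1) / 2)
print(accuracy, round(bits_per_param, 2))
```

At this loading the net recalls almost all bits in one step, and the stored information works out to roughly a quarter-bit per independent weight, the same order as the ~0.3 bits/parameter figure above.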

Also, I'm not convinced about transformers. Our brains are proof that it is
possible to learn in one pass. We also devote similar amounts of brain tissue
to low-level and high-level language processing. My preprocessor runs in about
a minute on enwik9.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdc371ce11a040352-Md511c73ecc9f0f38162ec04d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
