Matt should like this one even though it falls far short of his big think
<http://mattmahoney.net/agi2.html>:

Switch Transformers: Scaling to Trillion Parameter Models with Simple and
Efficient Sparsity <https://arxiv.org/abs/2101.03961>

In deep learning, models typically reuse the same parameters for all
inputs. Mixture of Experts (MoE) defies this and instead selects different
parameters for each incoming example. The result is a sparsely-activated
model -- with outrageous numbers of parameters -- but a constant
computational cost. However, despite several notable successes of MoE,
widespread adoption has been hindered by complexity, communication costs
and training instability -- we address these with the Switch Transformer.
We simplify the MoE routing algorithm and design intuitive improved models
with reduced communication and computational costs. Our proposed training
techniques help wrangle the instabilities and we show large sparse models
may be trained, for the first time, with lower precision (bfloat16)
formats. We design models based on T5-Base and T5-Large to obtain up to 7x
increases in pre-training speed with the same computational resources.
These improvements extend into multilingual settings where we measure gains
over the mT5-Base version across all 101 languages. Finally, we advance the
current scale of language models by pre-training up to trillion parameter
models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over
the T5-XXL model.
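
For anyone who hasn't read the paper, here is a rough NumPy sketch of the
top-1 ("switch") routing idea the abstract describes: each token is sent to
exactly one expert chosen by a softmax over router logits, so the parameter
count grows with the number of experts while per-token compute stays roughly
constant. The names here (W_router, experts, capacity_factor) are my own
illustration under those assumptions, not the paper's actual code.

# Toy sketch of top-1 (switch) routing; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, n_tokens = 16, 4, 8
capacity_factor = 1.25
capacity = int(capacity_factor * n_tokens / n_experts)  # max tokens per expert

# Toy token representations and router weights.
x = rng.normal(size=(n_tokens, d_model))
W_router = rng.normal(size=(d_model, n_experts))

# Each "expert" is just an independent dense layer in this sketch.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Router: probability distribution over experts, then top-1 selection.
probs = softmax(x @ W_router)                  # (n_tokens, n_experts)
expert_idx = probs.argmax(axis=-1)             # chosen expert per token
gate = probs[np.arange(n_tokens), expert_idx]  # router prob of chosen expert

out = np.zeros_like(x)
for e in range(n_experts):
    tokens = np.where(expert_idx == e)[0][:capacity]  # drop overflow tokens
    if tokens.size:
        # Scaling by the router probability keeps the routing decision
        # differentiable in the real model.
        out[tokens] = gate[tokens, None] * (x[tokens] @ experts[e])

print(out.shape)  # (n_tokens, d_model)

Only the tokens routed to a given expert touch that expert's weights, which is
why the model can hold an outrageous number of parameters at a roughly fixed
computational cost per token.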
