Here is a "gift copy" of an article with a working copy of a miniature
Large Language Model AI. You can see how the AI parses sample texts from
Jane Ausin, Shakespeare, the Federalist Papers and other sources. It parses
a sample 30,000 times. You can see the outcome at various stages. You can
generate as many sample outputs as you want. It produces gibberish at
first, and then text which looks a lot like the source, but still does not
make sense.

The article describes how simple the core algorithm behind this
technique is:

"While the inner workings of these algorithms are notoriously opaque, the
basic idea behind them is surprisingly simple. They are trained by going
through mountains of internet text, repeatedly guessing the next few
letters and then grading themselves against the real thing."
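
For a concrete sense of that guess-and-grade loop, here is a minimal
sketch in Python. This is my own illustration, not the article's code:
a character-level bigram model (far simpler than the transformer the
article builds) trained by repeatedly guessing the next character,
grading the guess against the real text, and nudging its weights. The
sample text, learning rate, and round count are arbitrary stand-ins.

import math
import random

text = "to be, or not to be, that is the question."  # stand-in sample
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# One row of scores per "current character"; each row rates every
# possible next character. All zeros to start: pure gibberish.
logits = [[0.0] * V for _ in range(V)]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

lr = 0.5
for step in range(30_000):  # mirrors the 30,000 rounds mentioned above
    i = random.randrange(len(text) - 1)
    cur, nxt = idx[text[i]], idx[text[i + 1]]
    probs = softmax(logits[cur])  # the model's guess
    # Grade against the real next character: this is the gradient of
    # cross-entropy loss, pushing probability toward the true answer.
    for j in range(V):
        logits[cur][j] -= lr * (probs[j] - (1.0 if j == nxt else 0.0))

# Sample from the trained model. Early in training this is gibberish;
# after many rounds it resembles the source without making sense.
cur = random.randrange(V)
out = chars[cur]
for _ in range(60):
    probs = softmax(logits[cur])
    cur = random.choices(range(V), weights=probs)[0]
    out += chars[cur]
print(out)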


As you can see, there is no hint of actual human intelligence in the
algorithm. It is not imitating or simulating the mechanisms of human
intelligence.

https://www.nytimes.com/interactive/2023/04/26/upshot/gpt-from-scratch.html?unlocked_article_code=Q4gvpJTb9E3YINp_ca4bgZovkWX4G1TiSclGTYsby_fUHiOUcmgMuivsdApz-JTH90er1fEaTX-9sE7IK5_EgbWbYJidtUMCOynDvzCC5l_6JhXaQWq83elkRIYLSTl5Daqd3pSb942K2hIFYeMw_xEPJkyaHobPQOjWFA5D7421wxSsEZfN4FvgO-qv-FJtrNI-E20kKdgFiH7PP9A9liu48jnKueJfVHQJNNKrmMlchcWA-0b47eDZxSVJ7eSpv1ceyir2kLp8P-CIfu_fqtPSYCGckK1AS2RHajIP0Ku6u-_p2NBL8VLvz-jzshxYZusLl4lSFUTMReXDYyv5wW_OpRISrDF4&smid=url-share
