I haven't seen an update from you in a while. How goes the work? What's the
state of Gazelle right now?
--
We will work with either 2 interns or 2 full-time devs. Or both. Or both
consecutively. Then we can finally beat GPT-x
--
GPT-2 trained on TBC plus an encyclopedia of some sort (93,044,341 bytes) seems
pretty shabby, yes: it often copies runs of 3 or 4 words straight from the
dataset. But it is still very creative and right on point. For example, the
phrase 'affected by the cost' appears nowhere in the dataset. And this is
during training, not
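If you want to measure the copying rate instead of eyeballing it, a rough
sketch like the following works: count how many of the sample's n-grams occur
verbatim in the training text. The filenames (train.txt, sample.txt) are
placeholders of mine, not anything from the linked code.

def ngrams(tokens, n):
    # every contiguous run of n tokens
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def copied_fraction(train_text, sample_text, n=4):
    # fraction of the sample's n-grams that occur verbatim in the training text
    train_set = set(ngrams(train_text.split(), n))
    sample_grams = list(ngrams(sample_text.split(), n))
    if not sample_grams:
        return 0.0
    return sum(g in train_set for g in sample_grams) / len(sample_grams)

train = open("train.txt", encoding="utf-8").read()
sample = open("sample.txt", encoding="utf-8").read()
print("'affected by the cost' in training data:", "affected by the cost" in train)
print("4-gram overlap:", copied_fraction(train, sample, n=4))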
When an AGI, at its own volition, using a behavioural rule set for a
situation (a rule set it developed from experience alone), recognizes its
own mistake and is able to correct it, it will have demonstrated a notion
of "understanding".
In practice, this would be 1 step short of
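For what that could look like in miniature (a toy illustration only; the
situations and outcomes here are invented, and this is obviously nothing like
a real AGI): an agent that forms a rule from experience, notices when its own
prediction contradicts what actually happened, and corrects the rule.

class RuleAgent:
    def __init__(self):
        self.rules = {}  # situation -> expected outcome, learned from experience alone

    def observe(self, situation, actual_outcome):
        expected = self.rules.get(situation)
        if expected is not None and expected != actual_outcome:
            # the agent recognizes its own mistake...
            print(f"mistake in {situation!r}: expected {expected!r}, saw {actual_outcome!r}")
        # ...and corrects its rule set from the new experience
        self.rules[situation] = actual_outcome

agent = RuleAgent()
agent.observe("wet floor", "slip")     # first experience forms the rule
agent.observe("wet floor", "no slip")  # contradiction: mistake detected, rule corrected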
Actually I'm not sure; it is taking 1.2 hours on 100 KB on Colab too.
--
I didn't write the GPT-2 training code, I just found the link. The goal is to
test on small datasets so I can compare it to my AI trained on the same
dataset, since I won't be using chains of expensive GPUs any time soon. I am,
however, writing my AI's Python code in the editor and don't use Blockly.
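For anyone who wants to reproduce the small-dataset test without the linked
code, here is a minimal sketch of fine-tuning GPT-2 on one small text file
with the Hugging Face transformers library. The filename data.txt, the block
size, and the hyperparameters are my placeholders, not whatever the linked
script uses; it assumes the file is at least a few hundred tokens long.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)

# tokenize the whole file, then cut it into fixed-length training chunks
ids = tokenizer(open("data.txt", encoding="utf-8").read(), return_tensors="pt").input_ids[0]
block = 512
chunks = [ids[i:i + block] for i in range(0, len(ids) - block, block)]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for chunk in chunks:
        batch = chunk.unsqueeze(0).to(device)
        # labels == input_ids: the loss is plain next-token prediction
        loss = model(batch, labels=batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"epoch {epoch}: loss {loss.item():.3f}")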
Mike, I'll save you the trouble; I just typed it up this morning for
my zettelkasten. There may be some typos, and I don't include the
references.
Cheers
Feeling, Thinking, Knowing
By [[louis-arnaud-reid]]
In an article on [[carl-jung]], James Hillman writes that at the end of the
century "