A question to all AGI professionals reading this:

What is your Evaluation method for testing/improving your AGI? Typically, most 
people use Perplexity, or chat with their generator to see if the 
results/predictions look good.

I use Lossless Compression instead of Perplexity, because 1) you can use the 
same data as both training data and test data, and hence compress/understand 
the training data as much as possible thanks to better feedback, 2) it forces 
Online Learning, 3) the test data can't accidentally leak into the training 
data, and 4) compressing data is more fun.
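As a rough sketch of what "compression as evaluation" can look like (the function name and the simple order-0 model are my own illustration, not anything from the post): an adaptive model scores each byte before updating on it, so training data is the test data, learning is online, and the score equals the ideal size an arithmetic coder would output.

```python
import math

def code_length_bits(data: bytes) -> float:
    """Ideal compressed size of `data`, in bits, under an adaptive
    order-0 byte model with a Laplace (add-one) prior.

    Each byte is scored BEFORE the model updates on it, so the model
    learns online and never sees a symbol before paying for it.
    """
    counts = [1] * 256  # Laplace prior: every byte value starts at count 1
    total = 256
    bits = 0.0
    for b in data:
        bits += -math.log2(counts[b] / total)  # cost of predicting this byte
        counts[b] += 1                          # online update after scoring
        total += 1
    return bits
```

On repetitive input the total shrinks well below 8 bits/byte, while the first, unseen byte always costs exactly 8 bits; a lower total means the model found more pattern.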

AGI is all about Patterns, and compression is all about finding patterns. If 
only noise (randomness) is left over, no further compression can be done, and 
no hints remain. So yes, prediction/statistics is AGI: using past experience 
that shares contexts. The "recognition" part of AGI serves this purpose, 
performing induction during tasks like Entailment and Translation.
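The "noise is incompressible" point can be made concrete with empirical order-0 entropy (again my own illustrative function, not from the post): it gives the best bits/byte any memoryless coder can achieve, and for random bytes it sits near the 8-bit ceiling, meaning there is no pattern left to exploit.

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical order-0 entropy of `data` in bits per byte: the lower
    bound on bits/byte for any coder that ignores context."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A patterned string like `b"abab" * 1024` scores 1.0 bits/byte (two equally likely symbols), while uniformly random bytes score very close to 8.0 bits/byte: pure noise, no further compression possible.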

Needs: Often we want related, desired, or common (popular) content/answers.
Tasks: And often we want entailment, translation, or summarization.
I don't know of any other need or task besides those above. Hence, for 
Evaluation, if you can add something more, I'd be very interested! Improving 
the frame rate of videos falls under the above, and walking on a tightrope 
falls under Prediction, because you want a specific outcome and know how to 
get it.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2a0cd9d392f9ff94-M831272b0908808532824a4b4
Delivery options: https://agi.topicbox.com/groups/agi/subscription