On Wednesday, July 08, 2020, at 10:30 AM, James Bowery wrote:
> The "surprise" simply means bits were added to the corpus of the intelligence.
I disagree. What if it had already stored the exact phrase "thank you" and then 
heard someone say it? It would strengthen the connections / update the frequency, 
which still adds bits.
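To make that concrete, here is a minimal toy sketch (my own made-up example, 
not anyone's actual system): a simple frequency model over phrases. 
Re-observing an already-stored phrase still changes the model, since its count 
rises and every phrase's surprisal (-log2 p) shifts, so the observation still 
carries information even though no new string was stored.

import math

# Toy frequency model; the phrases and counts are invented for illustration.
counts = {"thank you": 5, "good morning": 3, "goodbye": 2}

def surprisal(phrase):
    total = sum(counts.values())
    return -math.log2(counts[phrase] / total)

before = surprisal("thank you")
counts["thank you"] += 1          # hear the already-stored phrase one more time
after = surprisal("thank you")

print(f"before: {before:.3f} bits, after: {after:.3f} bits")
# The stored string didn't change, but the model did.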

On Wednesday, July 08, 2020, at 10:30 AM, James Bowery wrote:
> The less intelligent will either autistically regurgitate the passage or not 
> remember it at all.  The more intelligent will have compressed those bits 
> into consilience with the rest of its knowledge structure and _generate_ a 
> more intelligent interpretation of its meaning.
I disagree. A brain models its universe by compressing observations; it doesn't 
store every observation it sees. Some are forgotten, some are already stored, 
some contain parts of ones already stored, some are merely similar. As a brain 
matures and learns more, it will refuse to store anything new that isn't 
predictable from its existing knowledge (i.e. if it isn't recent, popular, 
similar, or loved). For example, the claim "dolphins use computers" gets 
rejected because we know dolphins have no hands, no education, can't sit on 
chairs, aren't as intelligent, and can't live on land. Or someone tells you 
stem cells are your favorite domain and the fastest way to immortality, but you 
already love/know that AGI is. You can definitely screw up such a brain if you 
teach it some religion from birth: frequently, recently, related to everything 
("god made this, god made that, god looks after this"), and made loved by 
promising infinite food, sex, and immortality as future rewards. That's a local 
optimum. My point is that a brain picks a path and gets stuck there, mostly 
crystallized, and won't store "new things". Hence, if presented with 
Shakespeare's passage and asked to repeat its understanding in its own words, 
it may actually parrot the passage if it already stores it and believes it, 
like OpenAI believes their mission statement :)


Ben, so, 4 questions:

Is your OpenCog AI able to predict yet?

And does it work on text or images?

Do you understand your AI (white box, clear mechanisms) much better than people 
understand Transformers, or is it a magic box that may pop out an answer with 
no one knowing why it works? Obviously generalization looks at hundreds of 
similar experiences, but I'd say that isn't very black-boxy.

And once/if it does predict, what research goal, i.e. prompt/question, do you 
install in it to talk about all day? You want to force it to rarely talk about 
clothing, unicorns, or politics; for example, it says all day "I will cure 
death by...". And do you let it find its own hobby/love? That would let it 
switch the question/mission/prompt from curing death to, say, making stem cells, 
fixing cryonics, or improving AGI; then it can pick AGI (or a few of those) and 
dive deeper into a sub-domain of AGI: specialization. Doing so just makes it 
collect data from those sources more often than from other domains, but that's 
exactly what you want: exploitation (see the sketch after this paragraph). 
Diverse knowledge does equal generalness, but survival, our root mission/prompt, 
causes us to favor a specific area of that general space, so only some data is 
more valuable. Yes, clothing is not as cool as computers, sorry clothing nuts. 
But if it needs to make electricity before computers before AGI, then yes, 
electricity is cool too, while still staying focused on survival. Simply put, 
AGI picks the next best available thing it can invent, e.g. electricity. This 
could be sticky, web-like; I'm not sure. It may need to go on and off small 
hobbies, like needing to refill its battery so that's all it talks about, or 
needing a measuring tape, or taking time to work on cryonics if it has worked 
too long on one path (half the day devoted, or it quits one and takes up the 
other full time). But basically hobbies form slowly and are fairly 
permanent... an 80s car nut will always be an 80s car nut.
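
Here is a rough toy sketch of the "collect data from favored sources more 
often" idea (the domains and weights are made-up illustrations, not any real 
system's parameters): the root mission biases which domains the agent samples 
its next observation from, without cutting the other domains off entirely.

import random

# Hypothetical interest weights induced by the mission/prompt.
domain_weights = {
    "AGI": 0.5,
    "stem cells": 0.2,
    "cryonics": 0.2,
    "clothing": 0.05,
    "politics": 0.05,
}

def pick_domain():
    # Weighted random choice: exploitation of favored domains,
    # with a little residual exploration of the rest.
    domains = list(domain_weights)
    weights = [domain_weights[d] for d in domains]
    return random.choices(domains, weights=weights, k=1)[0]

# A day's worth of "what to read/talk about next" decisions.
samples = [pick_domain() for _ in range(1000)]
for d in domain_weights:
    print(d, samples.count(d))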
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T604ad04bc1ba220c-M9c41e3bbfb0cf72148b65d63
Delivery options: https://agi.topicbox.com/groups/agi/subscription
