Brad Paulsen wrote:
Valentina,
Well, the "LOL" is on you.
Richard failed to add anything new to the two previous responses, each of
which posited linguistic surface-feature analysis as being responsible
for generating the "feeling of not knowing" with that *particular* (and,
admittedly, poorly chosen) example query. This mechanism will, however,
apply to only a very tiny number of cases.
In response to those first two replies (not including Richard's), I
apologized for the sloppy example and offered a new one. Please read
the entire thread and the new example. I think you'll find Richard's
and your explanation will fail to address how the new example might
generate the "feeling of not knowing."
Brad,
Isn't this response, as well as the previous response directed at me,
just a little more "annoyed-sounding" than it needs to be?
Both Valentina and I (and now Mark Waser also) have simply focused on
the fact that it is relatively trivial to build mechanisms that monitor
the rate at which the system is progressing in its attempt to do a
recognition operation, and then declare the item "not known" if the
progress rate falls below a certain threshold.
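That progress-rate idea can be sketched in a few lines. This is a toy illustration only, not anyone's actual system: `match_strength` is an invented stand-in for a real recognizer, and the step count and threshold are arbitrary.

```python
def recognize(stimulus, steps=10, threshold=0.05):
    """Give up early ("not known") when progress per step is too slow.

    Instead of keeping a list of unknown items, we watch the rate at
    which evidence accumulates and bail out when it stalls.
    """
    for step in range(steps):
        gain = match_strength(stimulus, step)  # evidence gained this step
        if gain < threshold:   # progress rate below threshold:
            return "not known"  # announce failure without any lookup
    return "known"


def match_strength(stimulus, step):
    # Hypothetical stand-in for a real matcher: familiar stimuli yield
    # steady per-step gains, unfamiliar ones yield almost none.
    return 0.2 if stimulus in {"cat", "dog", "house"} else 0.01
```

The point is that the "not known" verdict falls out of monitoring the recognition process itself, with no stored inventory of unknowns.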
In particular, you did suggest the idea of a system keeping lists of
things it did not know, and surely it is not inappropriate to give a
good-natured, humorous response to that one?
So far, I don't see any of us substantially misunderstanding your
question, nor anyone being deliberately rude to you.
Richard Loosemore
Valentina Poletti wrote:
lol.. well said richard.
the stimulus simply invokes no significant response, and thus our brain
concludes that we 'don't know'. that's why it takes no effort to
realize it. agi algorithms should be built in a similar way, rather
than by searching.
Isn't this a bit of a no-brainer? Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated .... when
this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.
Keeping lists of "things not known" is wildly, outrageously
impossible, for any system! Would we really expect that the word
"ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowkowejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudxhwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwedpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw"
is represented somewhere as a "word that I do not know"? :-)
I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word: it would have been
easy to build something to trigger a "this is a nonword" neuron.
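In the same spirit, a minimal toy net shows how a "this is a nonword" unit could work: letter units feed a small lexicon of word units, and the nonword unit fires whenever no word unit reaches a normal activation level. The lexicon, the overlap measure, and the threshold here are all invented for illustration, not taken from the 1990s models mentioned above.

```python
# Tiny lexicon of "word units"; real nets would have thousands.
LEXICON = ["time", "word", "brain", "list"]


def activation(word, stimulus):
    # Toy activation: fraction of letter positions shared with the input.
    matches = sum(1 for a, b in zip(word, stimulus) if a == b)
    return matches / max(len(word), len(stimulus))


def recognize(stimulus, nonword_threshold=0.5):
    """Fire the 'nonword' unit when every word unit stays weakly activated."""
    best = max(activation(w, stimulus) for w in LEXICON)
    return "nonword" if best < nonword_threshold else "word"
```

A real stimulus drives at least one word unit strongly; a random string leaves them all near zero, so the nonword verdict requires no list of nonwords at all.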
Is there some type of AI formalism where nonword recognition would
be problematic?
Richard Loosemore
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com