[email protected]> And when talking about language, when you focus only on
> "declarative knowledge", you miss more than half of what language even is.
>  That's another problem I have with people seeking "text AGI".  Missing so
> much of the point.
> andi
  Andi was replying to Matt's comments, but I recently wrote a message in which 
I pointed out that most computer programs do turn declarative knowledge into 
procedural events, so developing a learning-based method in which declarative 
statements can be correlated with some kind of action is not that difficult.  
The trick is getting it to develop these correlations wisely and to integrate 
knowledge effectively, so that a kind of activity might come to be understood 
as related to a kind of statement.  Of course this would require that the 
program have some kind of meta-awareness, but we know from human experience 
that this awareness does not have to be perfect.

And I would say that this argument, once I can demonstrate it in simple AGI, 
could stand as a partial explanation of where self-awareness comes from, or at 
least of why self-awareness (of some sort) has a necessary role in general 
intelligence.  People who are able to develop a good sense of the effectiveness 
of their ideas and behaviors are going to seem to be thinking a little more 
clearly than those who aren't.

I have also been saying that people in this group have not been able to 
understand what I am talking about and/or they just do not believe me.  This 
is a perfect example.  Neither Matt nor Andi has been able and/or willing to 
integrate what I was talking about fully into their thinking.  And I haven't 
even mentioned conceptual functions (or the fact that concepts can modify 
other concepts) or anything like that for a week now.

Jim Bromer
Date: Sat, 4 May 2013 07:42:30 -0400
> Subject: Re: [agi] What I Was Trying to Say.
> From: [email protected]
> To: [email protected]
> 
> Are we still just being negative, or do you have some positive
> architectural suggestion as to how to improve on bag of words?  More than
> BOW, Google uses n-grams, which capture some word ordering information,
> though it does not move very far in the direction of meaning.  What is
> needed is something that does what grammar and parsing do for people.  But
> what is that?  Parts of speech give indication of what is happening with
> the words, like how they are used to recreate a scene, like who is the
> agent, or should be the agent, depending on what Searle calls the
> direction of fit from the sentence to the world.  If you aren't familiar
> with his idea: a command is about wanting to fit the world to the word,
> that is, make the world match the sentence, while a declarative sentence
> intends to fit the word to the world, i.e. get the sentence to match what is
> in the world.  And when talking about language, when you focus only on
> "declarative knowledge", you miss more than half of what language even is.
>  That's another problem I have with people seeking "text AGI".  Missing so
> much of the point.
> andi
> 
> 
> 
> 
> On Fri, May 3, 2013 12:20 pm, Matt Mahoney wrote:
> > On Fri, May 3, 2013 at 11:05 AM, Mike Tintner <[email protected]>
> > wrote:
> >>
> >> Matt:  The idea is that for a sentence like "Put the green ball on the
> >> red
> >> block.", a bag of words model is insufficient
> >>
> >> 1. "Bag of words model" is a good phrase - basically we're always
> >> talking about some kind of semantic network/database, no?
> >
> > What I mean by the bag of words model is that word order doesn't
> > matter. If I want to build a question answering machine, then I load
> > it with millions of sentences containing facts like "the world is
> > round". In the bag of words model, I answer your question by matching
> > words in the question like "what shape is the world?" to words in the
> > answer. Since "world" matches, I give the correct answer. But you
> > could have asked any other question containing the word "world" and I
> > would have given the same answer. For example, it would give the same
> > answer to "the world is flat" by correcting you. This technique
> > sometimes fails, for example, "Who in the world is John Q. Public?"
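[The matching Matt describes can be sketched in a few lines of Python. This is a toy illustration with made-up facts, not anyone's actual system; note that word order is thrown away, so the "Who in the world" question matches the same fact:]

```python
from collections import Counter

def bag_of_words(text):
    # Lowercase, drop trailing punctuation, split; word order is discarded.
    return Counter(text.lower().rstrip("?.!").split())

def answer(question, facts):
    # Score each stored fact by how many question words it shares,
    # and return the best match -- the whole trick, and the whole flaw.
    q = bag_of_words(question)
    return max(facts, key=lambda f: sum((q & bag_of_words(f)).values()))

facts = ["the world is round", "the sky is blue"]
print(answer("what shape is the world?", facts))          # -> the world is round
print(answer("who in the world is John Q. Public?", facts))  # same fact wins
```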
> >
> > This is one of many techniques that Google uses. This, and many other
> > techniques would be used to assign points to each possible response,
> > and then the result would be ranked with the highest scoring responses
> > on top. Another technique is to match pairs of words, or to use a
> > thesaurus to match "shape" or "flat" to "round". Another is to have
> > users rank answers as more or less credible by counting links (the
> > PageRank algorithm). And probably lots of others that are trade
> > secrets. Combining lots of techniques is called an ensemble method. It
> > is generally quite effective, producing a result better than any of
> > its components.
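[The ensemble idea reduces to a weighted sum of per-technique scores followed by a sort. A minimal sketch, with invented scorers and weights standing in for word overlap, thesaurus matching, and PageRank -- not Google's actual scoring:]

```python
def ensemble_rank(candidates, scorers, weights):
    # Each scorer assigns points to a candidate response; the ensemble
    # combines them with a weighted sum and ranks highest-first.
    def score(c):
        return sum(w * s(c) for s, w in zip(scorers, weights))
    return sorted(candidates, key=score, reverse=True)

# Toy scorers reading precomputed features from each candidate.
word_overlap = lambda c: c["overlap"]
thesaurus    = lambda c: c["syn"]
pagerank     = lambda c: c["links"]

candidates = [
    {"text": "the world is round", "overlap": 3, "syn": 1, "links": 9},
    {"text": "the world is flat",  "overlap": 3, "syn": 1, "links": 2},
]
ranked = ensemble_rank(candidates, [word_overlap, thesaurus, pagerank],
                       weights=[1.0, 0.5, 0.2])
print(ranked[0]["text"])  # -> the world is round
```

Dropping one scorer (setting its weight to zero) degrades the ranking gracefully rather than breaking it, which is the point of Matt's objection to "all parts must be in place".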
> >
> >> 4. COGNITIVE SYNERGY (switching to other thread) is baloney. If you add
> >> three progs together and get three bags of building blocks [or words]
> >> what you end up with is three and only three kinds of structures.
> >
> > My criticism of cognitive synergy is that it is contrary to the known
> > good performance of ensemble methods for machine learning. Ben claims
> > that all of the parts have to be in place to show intelligence. But
> > that isn't the case. If some parts are missing, you just get inferior
> > results. There is a law of diminishing returns. Initially you get good
> > progress as you add components, regardless of the order in which you
> > add them. Adding more components gives you marginally better results.
> >
> > This has misled a lot of AI research down dead-end paths. It is
> > interesting to compare SHRDLU with OpenCog. Both use virtual worlds
> > that are simplified versions of reality. Both understand simple,
> > grammatically correct sentences from a small subset of natural
> > language that are relevant to those worlds. Note the similarity with
> > SHRDLU's Blocks World. http://www.youtube.com/watch?v=ii-qdubNsx0
> >
> > The video was made in 2009, but as far as I can tell, there has not
> > been much progress since then. Ben can claim cognitive synergy as an
> > excuse. There has been a lot of work, but all of the progress has been
> > invisible so far. Once the software is finished, everything will come
> > together. I don't buy it, and apparently investors don't either. No
> > major software project has ever succeeded without showing progress
> > along the way.
> >
> > It's not that they aren't trying. Ben obviously knows a lot about the
> > problem and has made it his life's work to solve it. But he needs to
> > be realistic about the difficulty of the problem and set the bar
> > lower. He is far away from what he needs in computing hardware,
> > software, training data, and dollars. Stop relying on wishful
> > intuition and do the math.
> >
> >
> > -- Matt Mahoney, [email protected]
> >
> >
> > -------------------------------------------
> > AGI
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed: https://www.listbox.com/member/archive/rss/303/3870391-266c919a
> > Modify Your Subscription:
> > https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
> >
> 
> 
> 