Would it only be costs and benefits, or would we include a full-fledged 
opportunity "object" with whatever traits that may entail? And does this notion 
differ from serendipity? Erik T. Mueller had a serendipity component in his 
Daydreamer system, which recognized when goals were serendipitously 
attained.
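
For concreteness, here is a minimal sketch in Python of that kind of check 
(the names and state representation are my own guesses, not Mueller's actual 
code): after each world update, scan the open goals and flag any that the new 
state satisfies even though no plan was pursuing them.

from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch, not Daydreamer's actual code: a goal counts as
# serendipitously attained when the world state comes to satisfy it
# while no plan for it is being executed.

@dataclass
class Goal:
    name: str
    satisfied_by: Callable[[dict], bool]  # predicate over the world state
    being_pursued: bool = False           # is a plan currently executing?

def serendipitous(open_goals: list, state: dict) -> list:
    """Goals the current state satisfies even though nothing pursued them."""
    return [g for g in open_goals
            if g.satisfied_by(state) and not g.being_pursued]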

~PM

> Date: Tue, 12 Jun 2012 07:56:50 -0600
> From: senil...@ghvalley.net
> To: a...@listbox.com
> CC: piagetmode...@hotmail.com
> Subject: Re: [agi] Attention
> 
> PM
> 
> A few months ago I mentioned to Jim Bromer that I would build the 
> intelligence around the idea of opportunity.  This fits directly with 
> the idea of goals, in a somewhat obscure way.
> 
> If you look at your candidates list below, you see part of what makes up 
> an opportunity.  It leaves out important factors that go into choosing 
> which opportunity is "greatest" for the moment.
> 
> These additional factors get complicated.  For example -
> - How reliable are the claims of this opportunity?  If we pursue this 
> opportunity, can we be somewhat certain we will complete the steps 
> needed to consummate it and yield the benefit?
> 
> - What is this opportunity, and how does it compare to other 
> opportunities?  In other words, there needs to be a vocabulary of 
> benefit and cost that goes with consideration of any opportunity.  Here 
> is where the real world is complex.  We venture and discover many 
> benefits that we didn't expect, and we encounter costs that exceed our 
> expectations.  Will there be shortages we didn't anticipate...
> 
> - Where did this opportunity come from?  If we want to evaluate an 
> opportunity, we also need to consider its source.  Is this simply an 
> idea that we cooked up according to simple logic, or is it an idea that 
> comes highly recommended from a reliable source - a master chef?
> 
> I understand that the ideas listed above seem off-base from the 
> standpoint of goals, but real-world "intelligence" doesn't work by 
> figuring everything out.  It works from suggestion and from observing 
> what works and how it worked out.  The intelligence that we want to 
> embody will one day need to be able to choose between opportunities 
> rather than simply follow "sub-goals" that make up a recipe.
> 
> To get this intelligence going, build the database of opportunity.
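>
> A minimal sketch of one such database entry (the field names and scoring 
> rule are my own guesses, not a fixed design) might bundle the factors 
> above - expected benefit and cost in a common vocabulary, reliability of 
> the claims, and credibility of the source - into one record:
>
> from dataclasses import dataclass
>
> @dataclass
> class Opportunity:
>     description: str
>     expected_benefit: float    # stated in a common vocabulary of value
>     expected_cost: float       # same vocabulary, so the two are comparable
>     reliability: float         # 0..1: how trustworthy are the claims?
>     source_credibility: float  # 0..1: idea we cooked up vs. a master chef
>
>     def score(self) -> float:
>         """Crude priority: net benefit, discounted by our doubts."""
>         return ((self.expected_benefit - self.expected_cost)
>                 * self.reliability * self.source_credibility)
>
> # The "database of opportunity" is then a scored, sortable collection:
> def best(database: list) -> Opportunity:
>     return max(database, key=Opportunity.score)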
> 
> Stan
> 
> (Sorry if I missed a relevant prior post; I'm way behind in list reading 
> - busy time of year... yada yada.)
> 
> 
> 
> 
> On 06/11/2012 06:31 PM, Piaget Modeler wrote:
> > Abram, you've characterized it properly. In my vernacular, subgoals = goals.
> >
> >
> > I would say that the job of this particular attention module is to
> > reprioritize the open goal set,
> > given all available information.
> >
> > So the question for me is: what should "all available information" consist of?
> >
> > Some candidates are: (1) the current context, for sure; (2) alerts;
> > (3) expectation failures and mismatches; (4) past prioritizations;
> > (5) past episodes.
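> >
> > As a sketch (the record and the scoring rule below are invented purely
> > for illustration), those candidates could be bundled into a single
> > input to the reprioritizer:
> >
> > from dataclasses import dataclass
> >
> > @dataclass
> > class AttentionInput:           # every field name here is illustrative
> >     current_context: dict       # (1)
> >     alerts: list                # (2)
> >     expectation_failures: list  # (3) expected vs. observed mismatches
> >     past_prioritizations: list  # (4)
> >     past_episodes: list         # (5)
> >
> > def reprioritize(open_goals: list, info: AttentionInput) -> list:
> >     """Return the open goal set reordered by estimated urgency."""
> >     def urgency(goal):
> >         # Placeholder scoring: boost goals implicated in recent
> >         # expectation failures; a real scorer would use all five inputs.
> >         return sum(1 for f in info.expectation_failures
> >                    if getattr(f, "goal", None) is goal)
> >     return sorted(open_goals, key=urgency, reverse=True)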
> >
> > Anything else?
> >
> > Your thoughts?
> >
> >
> > Date: Mon, 11 Jun 2012 11:11:58 -0700
> > Subject: Re: [agi] Attention
> > From: abramdem...@gmail.com
> > To: a...@listbox.com
> >
> > PM,
> >
> > OK. So, in this case, the goal selector is clearly selecting subgoals to
> > prioritize.
> >
> > It's a difficult question that needs a quickly computable answer, so
> > the system must somehow gather information over time that tells it
> > which subgoals have been most useful in the past, and in what
> > situations. This process can use a wide variety of information;
> > essentially anything. However, to make an efficient choice, the
> > information considered at any particular time needs to be narrowed
> > down somehow. The space of possible subgoals is also potentially huge,
> > and needs to be narrowed down heuristically...
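> >
> > One cheap instantiation of this (purely illustrative, not a proposal
> > for any particular system) is a bandit-style lookup table: keep a
> > running average of how useful each subgoal proved in each class of
> > situation, and choose greedily from a heuristically narrowed candidate
> > list, with occasional exploration:
> >
> > import random
> > from collections import defaultdict
> >
> > # usefulness[(situation, subgoal)] = (running mean utility, sample count)
> > usefulness = defaultdict(lambda: (0.0, 0))
> >
> > def record(situation, subgoal, utility):
> >     mean, n = usefulness[(situation, subgoal)]
> >     usefulness[(situation, subgoal)] = (mean + (utility - mean) / (n + 1),
> >                                         n + 1)
> >
> > def choose(situation, candidates, epsilon=0.1):
> >     """Epsilon-greedy pick from a heuristically narrowed candidate list."""
> >     if random.random() < epsilon:  # occasionally explore
> >         return random.choice(candidates)
> >     return max(candidates, key=lambda sg: usefulness[(situation, sg)][0])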
> >
> > Perhaps the best that I can say at the moment is that this seems like
> > the sort of problem that requires empirical testing to see what works
> > and what doesn't!
> >
> > --Abram
> >
> > On Fri, Jun 8, 2012 at 5:49 PM, Piaget Modeler
> > <piagetmode...@hotmail.com> wrote:
> >
> >
> >     Ben,
> >
> >     Yours is a sufficient response. Thank you.
> >
> >     Abram,
> >
> >     Suppose we decompose a cognitive system into a few components:
> >
> >     1. A planner, which is fed a goal, a current state, and a set of
> >     possible actions (e.g., operators, methods, cases);
> >     2. An action selector, which is fed the current state, a prioritized
> >     set of goals, and a set of methods to choose from;
> >     3. A goal selector / Attention module, whose job is to prioritize or
> >     select goals for the cognitive system.
> >
> >     My question was: what would you feed the goal selector to ensure it
> >     does its job (prioritizing goals) properly?
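> >
> >     In code, the decomposition might look like the stubs below (these
> >     interfaces are invented for illustration; nothing here is from an
> >     existing system):
> >
> >     def plan(goal, state, actions):
> >         """1. Planner: goal + current state + possible actions -> a plan."""
> >         ...
> >
> >     def select_action(state, prioritized_goals, methods):
> >         """2. Action selector: pick the next method/action to execute."""
> >         ...
> >
> >     def select_goals(open_goals, available_information):
> >         """3. Goal selector / Attention module: reprioritize the goals.
> >         The open question is what available_information should contain."""
> >         ...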
> >
> >     In a paper I read recently, "A Case Study of Goal-Driven Autonomy
> >     in Domination Games" by Hector Munoz-Avila and David W. Aha, the
> >     authors decompose the cognitive system of their CB-gda system into
> >     two case-based components: (a) a planning component, and (b) a
> >     mismatch goal [selection] component. The purpose of the latter
> >     component is to correct for errors encountered by the planner. Its
> >     input is a mismatch (the difference between the state the planner
> >     expected and the state actually observed).
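> >
> >     A bare-bones sketch of that detect-and-select loop (the names and
> >     the dict-of-fluents state representation are mine, not the paper's):
> >
> >     def gda_step(expected_state, actual_state, select_goal):
> >         # Mismatch: every fluent whose observed value diverges from
> >         # what the planner expected after executing the last step.
> >         mismatch = {k: v for k, v in expected_state.items()
> >                     if actual_state.get(k) != v}
> >         if mismatch:                      # the plan went off-script:
> >             return select_goal(mismatch)  # pick a corrective goal
> >         return None                       # on track; keep executing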
> >
> >     Q: What else would be relevant input for a goal selector / Attention
> >     component?
> >
> >
> >
> >     Date: Fri, 8 Jun 2012 17:49:15 -0400
> >     Subject: Re: [agi] Attention
> >     From: b...@goertzel.org
> >     To: a...@listbox.com
> >
> >
> >
> >     In the OpenCog framework, we supply some hard-coded "top-level
> >     goals", and then the system learns how to achieve these, which may
> >     include learning subgoals...
> >
> >     The top-level goals are generally of the form "keep such-and-such a
> >     parameter within range [L,R]".
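> >
> >     (As a generic illustration of that form - not OpenCog's actual API -
> >     such a goal's urgency could be zero inside the range and grow with
> >     the deviation outside it:)
> >
> >     def urgency(value, lo, hi):
> >         """Homeostatic goal: keep value within [lo, hi]."""
> >         if value < lo:
> >             return lo - value
> >         if value > hi:
> >             return value - hi
> >         return 0.0
> >
> >     # e.g. an energy parameter kept in [0.3, 0.9]:
> >     urgency(0.15, 0.3, 0.9)   # -> 0.15; time to recharge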
> >
> >     Experience of novelty and discovery of new things are good general
> >     top-level goals. For a character in a virtual 3D environment, we
> >     add in stuff like getting energy (e.g. from batteries or food),
> >     staying safe, and partaking in social interaction....
> >
> >     In reference to this sort of framework, I'm unsure if you're talking
> >     about top-level goals or learned subgoals...
> >
> >     -- Ben G
> >
> > --
> > Abram Demski
> > http://lo-tho.blogspot.com/