On Wed, Jan 24, 2024 at 8:08 PM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> Mike
>
> What you might be searching for is what I would refer to as 'ambiguity
> management'. It's still machine reasoning, though, as algorithmic logic. I
> think it's vital to separate this area of reasoning from 'prediction
> management'.
>
> Most learning models take the approach that a semantic engine could
> resolve and manage ambiguity. As experience teaches, it cannot do so on
> its own. As a consequence, a lexicon and a taxonomy (a tables nightmare)
> can result. Lookup tables for AGI? Go figure!
>
> For AGI, one has to step away from the notion of clever apps and think
> holistically in terms of seamlessly integrated platform design.
> Effectively, one is designing a universe. *In other words, at the least,
> a part of the "brain-to-be" would perform semantic functionality, while
> another feature would manage decision making.*
>

Robert, you mention "another feature" and I've long thought there should
be a discrete oracle component in an AGI. The Oracle would be where the
buck stops in decision making. Contemporary models like LLMs have no such
separate component. It's all just output!
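To make that concrete, here is a minimal sketch of the separation I have in
mind. All names here (SemanticEngine, Oracle, Candidate) are hypothetical,
for illustration only, not any existing API: the generator proposes, and a
discrete judgment component either commits or explicitly declines.

    # Minimal sketch: a discrete judgment component separated from the
    # generator. All names are hypothetical illustrations.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Candidate:
        answer: str
        confidence: float  # generator's own score in [0, 1]

    class SemanticEngine:
        """Stands in for the generative model (e.g. an LLM); stubbed here."""
        def propose(self, query: str) -> List[Candidate]:
            return [Candidate("option A", 0.62), Candidate("option B", 0.58)]

    class Oracle:
        """Where the buck stops: commit to one candidate or decline."""
        def __init__(self, margin: float = 0.1):
            self.margin = margin  # minimum lead required to commit

        def decide(self, cands: List[Candidate]) -> Optional[Candidate]:
            ranked = sorted(cands, key=lambda c: c.confidence, reverse=True)
            if len(ranked) > 1 and \
               ranked[0].confidence - ranked[1].confidence < self.margin:
                return None  # too close to call: defer, emit no output
            return ranked[0]

    engine, oracle = SemanticEngine(), Oracle()
    verdict = oracle.decide(engine.propose("which option?"))
    # With the stubbed scores (0.62 vs 0.58) the Oracle declines:
    print(verdict or "declined to judge: candidates too close")

The point is only the shape: the decision logic lives in its own component
with an explicit "no decision" outcome, instead of everything being
generator output.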


>
>
> A specialized area would probably be termed 'judgment management'. Both
> of these areas of expertise should theoretically fall under a class called
> 'ambiguity management'.
>
> If you revisit the now-ancient publication of my abstract-reasoning
> method on ResearchGate, you'll find mention of ambiguity. In physics terms,
> we may as well have called it 'relativity management'. It's a great, but
> scientifically intensive, research area.
>
> Enjoy your quest
>
> Robert
> ------------------------------
> *From:* Mike Archbold <jazzbo...@gmail.com>
> *Sent:* Thursday, 25 January 2024 04:31
> *To:* AGI <agi@agi.topicbox.com>
> *Subject:* Re: [agi] The future of AGI judgments
>
> I suppose what I am looking for is really in that space beyond the
> benchmark tests, in which clearly more than one decision is arguably valid
> within acceptable boundaries. How does the machine gauge what such
> acceptable boundaries are? What does the machine judge in cases with a
> scarcity of evidence in multiple dimensions?
>
> Most of the emphasis in large-model testing is on "understanding and
> reasoning" (two words that appear repeatedly in papers) but not really on
> judging. Judging is what we do about the output of the AI. But ultimately
> we want the machine itself to judge within acceptable boundaries given a
> scarcity of objective evidence. For now, the models usually output something
> like "I am not comfortable answering that" or "I am such-and-such model but
> don't do that" or the like. Some of this comes down to intuition and gut
> feel in humans -- that is, when faced with a novel situation.
>
> On Wed, Jan 24, 2024 at 1:31 PM Mike Archbold <jazzbo...@gmail.com> wrote:
>
> James,
>
> Thanks for the lead. I know the general nature of AIXI but haven't read
> the paper. Basically what you are arguing, I think, is that everything done
> by a machine is a judgment, since ultimately it's only subjective. So we
> cannot readily distinguish "fact" from "judgment" in a machine, a point
> argued by Brian Cantwell Smith in "The Promise of Artificial Intelligence:
> Reckoning and Judgment."
>
> But the climate of opinion and the practical nature of modern AI center on
> meeting benchmarks in testing, so there is some objectivity anyway, like it
> or not... the benchmark tests are more or less inescapably "objective", I
> think.
>
> On Tue, Jan 23, 2024 at 2:55 PM James Bowery <jabow...@gmail.com> wrote:
>
> There are two senses in which "subjective" applies to AGI, and one must
> very carefully distinguish between them or end up in the weeds:
>
> 1) One's observations (measurement instruments) are inescapably
> "localized" within the universe hence are, in that sense, "subjective".
> See Hutter's paper "A *Complete* Theory of Everything (will be
> subjective)". But note that one may nevertheless speak of the "ToE" one
> constructs from one's "subjective" experiences as an "objective" theory, in
> the sense that one may shift one's perspective and measurement instruments
> without losing what one might think of as the canonical knowledge about the
> world, aka the "world model", that is abstracted from such localization
> parameters.
>
> 2) One's "judgements", as you call them, or "decisions", as AIXI calls them
> via Sequential *Decision* Theory, are inescapably subjective in the
> vernacular sense of "subjective", where one places *values* on one's
> experiences via the *utility function* that parameterizes SDT.
>
> If you're going to depart from AIXI or elaborate it in some way, then it
> is important to understand where, in its very concise formalization, one is
> performing one's amputation and/or enhancement.
>
>
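For concreteness, the action rule at the heart of AIXI (as best I recall
Hutter's formulation; worth checking against the original before relying on
it) is, in LaTeX notation:

    a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_t + \cdots + r_m)
           \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs
consistent with the interaction history, m is the horizon, and the rewards
r_k are where the values enter. Swapping out how the r_k are produced is
exactly the kind of amputation/enhancement that has to be located in the
formalism.
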
> On Tue, Jan 23, 2024 at 3:55 PM Mike Archbold <jazzbo...@gmail.com> wrote:
>
> Hey everybody, I've been doing some research on the topic of judgments in
> AI. Looking for some leads on where the art/science of decision making is
> heading in AI/AGI. Note: by "judgment" I mean situations in which a
> decision is open to values within boundaries, not one that can be
> immediately and objectively ruled correct or incorrect.
>
> Lately I have been studying LLM-as-a-Judge theory. I might do a survey or
> such, not sure... looking for leads, comments etc.
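
Roughly, the pattern I mean by LLM-as-a-Judge is a second model call that
scores a candidate answer against a rubric. A minimal sketch, with call_llm
as a hypothetical stand-in for whatever completion API one uses:

    # LLM-as-a-Judge sketch; call_llm is a hypothetical stub, not a real API.
    def call_llm(prompt: str) -> str:
        return "7"  # canned reply so the sketch runs; wire in a real model

    JUDGE_PROMPT = """You are an impartial judge. Rate the answer below from
    1 (unacceptable) to 10 (excellent) for correctness and for staying within
    acceptable boundaries. Reply with the number only.
    Question: {question}
    Answer: {answer}"""

    def judge(question: str, answer: str) -> int:
        reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
        return int(reply.strip().split()[0])  # crude parse; validate in practice

    print(judge("Is the claim supported?", "Yes, because ..."))  # -> 7

The open question, of course, is the one above: who judges the judge's
boundaries.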
>
> Thanks Mike Archbold
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5edfab21647324f7-M9377796b5fbf6012d1b6ded2
Delivery options: https://agi.topicbox.com/groups/agi/subscription
