On Sat, May 4, 2024 at 5:38 PM Matt Mahoney wrote:
>
>
> On Fri, May 3, 2024, 11:12 PM Nanograte Knowledge Technologies <
> nano...@live.com> wrote:
>
>> A very-smart developer might come along one day with an holistic enough
>> view - and the scientific knowledge - to surprise everyone here with a
>> workable model of an AGI.
>>
>
> Sam Altman?
>
>
"Where's Illya?"
https://www.youtube.com/watch?v=AKMuA_TVz3A
...he asked only half-rhetorically given Illya's apparent grasp of KC's
import.
More seriously, the notion of "workable" is at issue. I'd say that until a
world model, sometimes called a "*foundation*" model*, at least as capable
as those already in use is available open source, the
power imbalance isn't "workable".
Despite what those in power would have us believe, the evolutionary
selection criteria for grabbing power have been far too biased toward
negative-sum rent-seeking for far too long.
* but only sometimes, because "foundation" has occasionally been overloaded
to include the "alignment" that used to be provided by that all-important
lobotomy layer.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M12fe5d89a2cb878e27427b70
Delivery options: https://agi.topicbox.com/groups/agi/subscription