Right, please, no more acronyms...  I think you short-change the work
that has been done in AGI this century by saying "Given the lack of
effort and interest in HLAI over the past few decades"... Section 5.2
is interesting.... I think there is a place for GOFAI in AGI.  I mean,
it does work -- it just doesn't do everything, and it won't get you to
that "understand" state.

It seems like a good paper.  I apologize for not reading the entire work.

On 3/18/17, Logan Streondj <[email protected]> wrote:
> On 2017-03-18 02:08 AM, Ben Goertzel wrote:
>> Generally speaking I find most of your methodology and approach
>> agreeable...
>>
>> In your intro, you do sorta underestimate the diversity of the AGI
>> R&D community, and also of the (overlapping) symbolic systems and
>> cognitive modeling community... which gives the reader the
>> impression that your approach is more unique than it is.  But,
>> whatever....  As a record of your own approach and thinking, it's
>> fine...
>>
>> I would say the general approach you're advocating, at a high
>> level, is not that far from the one I've followed....  I had a
>> period of "thinking about how the AGI should work" in the 90s, then
>> a period of building stuff in the late 90s and early aughts, then a
>> period of slowly building stuff while mostly thinking about how it
>> should work and designing from 2002 thru 2008 .. and recently more
>> building again....   I agree that jumping too fast into
>> implementing ideas that aren't fully baked can lead to pathologies
>> -- though it can ALSO be a very valuable way of helping one get
>> one's ideas refined...
>
> Yeah, making time for it can be difficult.
>
> I have to have bots answering my business phone and reading to my
> children, so I can free up some minutes for programming.
>
>
>>
>> Chapter 3 of "Engineering General Intelligence, Vol. 1" describes a
>> thought experiment regarding how one would make the
>> OpenCog/CogPrime architecture carry out the task "Build me
>> something I haven't seen before."   In that chapter it's a sketch,
>> and then in subsequent chapters (largely in Vol. 2) more and more
>> particulars are filled in...
>>
>> Once you get past the stage of figuring out how your AGI
>> architecture *would* carry out human-like cognition if it were
>> implemented, then you face the fact that implementing such a thing
>> is a very large job -- too big for one person to do in a feasible
>> amount of time ... and then you face the fact that it's extremely
>> hard to get any person or organization who controls a lot of
>> resources to put sufficient funds behind an R&D project of this
>> nature....   So then what?  My choice has been to start
>> implementing anyway, but following a path so that the partial
>> implementations can still serve some practical value, thus
>> obtaining at least some funding from available sources (who are
>> almost invariably more interested to fund near term practical
>> projects than AGI).   The approach seems to be generally working
>> but it's sure slower and more complicated than it would be if we
>> had a big block of funding just for AGI...
>>
>> Anyway I certainly don't want to discourage you from going down
>> the intellectual path you've outlined, on your own.   Thinking
>> stuff through for yourself has great merits.   OTOH, also you
>> should be aware that others have trodden reasonably similar paths
>> before and are arguably way further toward the destination...
>>
>> -- Ben G
>>
>> On Sat, Mar 18, 2017 at 9:46 AM, Sean Markan
>> <[email protected]> wrote:
>>
>>> Hi AGI folks,
>>>
>>> I've written a paper about strategy and methodology for AGI.  I
>>> would be interested in your thoughts/criticism!
>>>
>>> http://www.basicai.org/pubs/h2hlai.pdf
>>>
>>> - Sean
>>>
>>> *AGI* | Archives
>>> <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/19237892-5029d625>
>>> | Modify <https://www.listbox.com/member/?&;> Your Subscription
>>> <http://www.listbox.com>
>>>
>>
>>
>>
>
> --
> Logan Streondj,
> A dream of Gaia's future.
> twitter: https://twitter.com/streondj
>
> You can use encrypted email with me,
> how to: https://emailselfdefense.fsf.org/en/
> key fingerprint:
> BD7E 6E2A E625 6D47 F7ED 30EC 86D8 FC7C FAD7 2729
>
>

