I think that the Wright Brothers' approach is appropriate for AI / Stronger
AI / AGI as well. However, there is also ample evidence that digital
programming has made numerous advances toward AGI, even though those
successes seem to lack many human-like methods of thought.

I have often wondered why the Wrights got so involved in control surfaces
before they had a successful powered flight. Was it just common sense to
realize that you needed to 'steer' the plane once it got off the ground? Was
it ego - since they 'knew' they would succeed, they designed for their
flights of imagination? Was it a common meme among aeronautical enthusiasts
at the time? Or did they realize, based on their experiments with gliders,
that mechanisms to control the plane's angle of attack in the air would let
them extend their flights, even though those mechanisms would make the plane
heavier? (They decided to use wing warping to control turns. NASA, by the
way, recently tested a jet capable of changing the shape of its wings.) This
last possibility interests me because it resembles the
design-experiment-modify-the-design-experiment-again method as it can be
applied to AI / Stronger AI research.

I want to find some evidence that my design principles would work to
produce Stronger AI. By including some control mechanisms in my designs, I
might be able to stretch the distance I can get with the designs I have in
mind. But if I design for some day in the distant future, my control
mechanisms could get so heavy that they become a hindrance to any feasible
program I might try now. By designing instead for a test I could run in the
near future, I might find some essential control features that are
lightweight and effective enough to stretch the capabilities of the
program.

But you have to have some feasible plan in mind to do that. If you want to
try to do something with AGI right now, your program (or device) has to be
simple but effective - in some way. Even though you might not be able to
convince other people with primitive experiments, you have to be able to
find some evidence that your ideas are going to do something different from
most contemporary AI programs.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now