Okay, let's consider the concept of the design space.

The design space for motorized vehicles consists of just about anything
that has enough wheels to be stable on the ground, has a structural
frame, has an energy source, and has an engine of some kind to apply
that energy source, in the form of torque, to one or more of the wheels.
We have seen many examples of motorized vehicles for more than two
centuries.  ( https://en.wikipedia.org/wiki/History_of_steam_road_vehicles )

These days just about any bozo can produce a basic vehicle using
off-the-shelf parts. The key point is that a massive amount of effort
was required to produce those off-the-shelf parts. Today, all of the
parts that are available were designed for purposes other than AGI,
meaning that it will require a massive amount of effort to adapt them to
purpose, compounded by the fact that we don't quite know what we are
building.

Nevertheless, we must suspect that there is a set of parameters that
defines the set of workable AGI designs, and that this space is fairly
large. We also assume that within a basic design we have the freedom to
swap one thing for another and still end up with a working design, just
as you can swap factory rims for pimp rims or race wheels, or opt for
some more rubber to give you a more LX ride. Similarly, you can swap a
steam engine for a gas engine or an electric motor without violating the
basic concept of the motor vehicle. All of them will move down the road.
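The swap-one-component idea can be sketched as an ordinary interface:
the vehicle depends only on the engine contract, not on any particular
engine. This is a minimal illustration; the class names and torque
numbers are invented, not taken from any real design.

```python
from abc import ABC, abstractmethod

class Engine(ABC):
    """Any energy-to-torque converter; the rest of the vehicle doesn't care which."""
    @abstractmethod
    def torque(self, throttle: float) -> float: ...

class SteamEngine(Engine):
    def torque(self, throttle: float) -> float:
        return 120.0 * throttle  # illustrative numbers only

class ElectricMotor(Engine):
    def torque(self, throttle: float) -> float:
        return 200.0 * throttle

class Vehicle:
    def __init__(self, engine: Engine):
        self.engine = engine
    def accelerate(self, throttle: float) -> float:
        return self.engine.torque(throttle)

# Either engine yields a working vehicle; only the performance differs.
assert Vehicle(SteamEngine()).accelerate(0.5) > 0
assert Vehicle(ElectricMotor()).accelerate(0.5) > 0
```

The same contract-over-implementation move is what a "design space"
buys you: any engine satisfying the interface produces a working
vehicle.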

The first thing that is required for an AGI is a system complex enough
to allow it to manifest its general intelligence. That is either a
robot of reasonable complexity, or a simulation of some kind with
parameters of sufficient complexity that it demands an agent capable of
generating concepts on the fly. Here is a game I've wasted far too much
of my life on; it's similar to Minecraft, but it lets you build
starships. Here's one I remodeled:

https://steamcommunity.com/sharedfiles/filedetails/?id=929823165

The critical point here is that the game only provides a few dozen block
types. It is your job to assign them a meaning or purpose and to create
concepts such as wings, compartments, hulls, etc. I'm not 100% certain
this is sufficient, but it is certainly a necessary level of complexity.
This is not an easy problem. Google is using Atari games these days.
Building a more satisfactory simulation is probably a
hundred-million-dollar development project. =~(

(I am flat broke w/ no income.)

The AI's avatar should be reasonably similar to human modalities, at
least not completely alien.


Secondly, AI is not a blank slate. It definitely isn't just a grey block
of empty computronium....

Now I'm not exactly sure what level of complexity is necessary, or
whether it is possible to start with a minimal kernel of some kind and
then bootstrap complexity from there, but there almost certainly does
need to be a framework sufficient for the AGI to start exhibiting basic
behaviors almost immediately. In many cases, the brain seems to be
cobbling together a solution for a high-level algorithm using neurons
(the only available hammer...); in those cases, you can get a 1,000x+
speedup immediately just by implementing the high-level code directly.

Normally, for a first-generation AGI, we would say that we just want to
get it done. In this case, however, a 1,000x speedup means reaching the
singularity ten years sooner, so carefully chosen early optimizations
are a big part of getting it done.
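To make the "implement the high-level code" point concrete, here is a
toy sketch: the same winner-take-all (argmax) operation computed with a
crude neuron-style mutual-inhibition loop, versus one direct call. The
inhibition model is invented purely for illustration and is not a claim
about how cortex actually does it.

```python
import random

def winner_take_all_neural(x, steps=2000, inhibition=0.001):
    """Neuron-style winner-take-all: each unit is suppressed by the total
    activity of the others until one clearly dominates. Thousands of
    update steps to compute what is, at the high level, a single argmax."""
    a = list(x)
    for _ in range(steps):
        total = sum(v for v in a if v > 0)
        a = [v - inhibition * (total - v) for v in a]
    return max(range(len(a)), key=lambda i: a[i])

def winner_take_all_direct(x):
    """The same high-level operation, implemented directly."""
    return max(range(len(x)), key=lambda i: x[i])

# Both agree on the winner; the direct version skips the simulation.
signal = [random.random() for _ in range(100)]
assert winner_take_all_neural(signal) == winner_take_all_direct(signal)
```

The direct version does in one pass what the simulated version does in
thousands of update steps, which is the flavor of speedup available when
a neuron-level mechanism is recognized as a known high-level algorithm.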


Okay, my original idea was to base everything on abstractions of a form
similar to   A -> { W, X, Y, Z }   and then construct a mind out of
several billion of those. The advantage of that approach is that it can
use perfectly conventional memory-management techniques. The other
approach is to allocate a matrix that's Big Enough (tm) for your deep
neural network and accept that it hits a learning limit at its maximum
entropy. At that point you have two options: really make it big enough
to achieve enough intelligence to design version 2.0, or design an
introspection algorithm that can optimize and re-allocate neural
matrices, and patch up some of the pathological conditions neural
systems can get into.
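A minimal sketch of the first approach, assuming an A -> { W, X, Y, Z }
abstraction is simply a concept mapped to the set of concepts it
decomposes into; all names here are hypothetical. The point is that this
store grows by ordinary allocation as concepts are added, rather than
running into a fixed matrix's entropy ceiling.

```python
from collections import defaultdict

class AbstractionStore:
    def __init__(self):
        # concept -> set of sub-concepts, i.e. A -> { W, X, Y, Z }
        self.links: dict[str, set[str]] = defaultdict(set)

    def define(self, concept: str, parts: set[str]) -> None:
        # Conventional heap allocation: storage grows only as concepts
        # are added, with no pre-sized weight matrix to exhaust.
        self.links[concept] |= parts

    def expand(self, concept: str) -> set[str]:
        """Transitively unfold a concept into its primitive parts."""
        parts = self.links.get(concept)
        if not parts:
            return {concept}  # a primitive: expands to itself
        out: set[str] = set()
        for p in parts:
            out |= self.expand(p)
        return out

m = AbstractionStore()
m.define("vehicle", {"frame", "engine", "wheels"})
m.define("engine", {"cylinder", "crankshaft"})
assert m.expand("vehicle") == {"frame", "cylinder", "crankshaft", "wheels"}
```

Scaling this to billions of abstractions is of course the hard part;
the sketch only shows why the memory-management story is conventional.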

Now the big problem that people are fretting about is how to "transfer
knowledge between tasks". There was even a [poorly constructed] contest
on this problem last year.   Want to know the secret? DON'T!  =P The
brain does not transfer knowledge between tasks; it simply reuses
entire neural sub-units in different combinations to produce different
behaviors, or to solve different problems. While a linear neural model
seems to be sufficient to reproduce the quirks seen in human
psychology, it is clear that a switching network also exists that
dynamically reconfigures the brain for each task it performs.
Replicating this functionality is probably the most important problem
internal to the AGI itself right now.
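A toy illustration of "reuse, don't transfer": a switching table wires
the same frozen sub-units into different pipelines per task. The
sub-units and routes below are invented stand-ins for learned modules
and a learned router.

```python
# Fixed sub-units: none of these is retrained or copied per task.
def edge_detect(x):   return [abs(b - a) for a, b in zip(x, x[1:])]
def threshold(x):     return [1 if v > 0.5 else 0 for v in x]
def count_active(x):  return sum(x)

SUBUNITS = {"edges": edge_detect, "thresh": threshold, "count": count_active}

# The "switching network": each task selects a composition of existing
# sub-units rather than transferring knowledge into a new model.
TASK_ROUTES = {
    "count_edges":  ["edges", "thresh", "count"],
    "count_bright": ["thresh", "count"],
}

def run(task, signal):
    out = signal
    for name in TASK_ROUTES[task]:
        out = SUBUNITS[name](out)
    return out

signal = [0.0, 0.9, 0.1, 0.8, 0.7]
assert run("count_bright", signal) == 3
assert run("count_edges", signal) == 3
```

Adding a task here means adding a route, not touching the sub-units,
which is the behavior the switching-network hypothesis predicts.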

Anyway, I'm getting off into the woods here, but the basic questions
about what functionality is actually needed remain.

-- 
Please report bounces from this address to a...@numentics.com

Powers are not rights.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T58c366e7715267bb-M9e8de2dc7b43fa646a8500f5