On Sun, Apr 7, 2013 at 6:31 PM, Piaget Modeler <[email protected]> wrote:

> Please expound.  Some more examples and explanation


As I have explained before, I keep a certain distance from domains
peripheral to AGI such as neuroscience, linguistics and psychology, because
a) they hardly ever bother with the engineering challenge of an
(infinitely?) complex world (rather, they still debate how to store an
apple and how to signify a bicycle), and b) they hardly ever take into
account the extended context of their work, which again will have to be
reverse-engineered and engineered: for example semiosis as something
involving at least two non-identical parties, and mind design that would
also have to find a compressed generative expression such as DNA, etc. We
have also seen a lot of AGI work being solipsistic; I will never tire of
suggesting that work involving societies of agents, societies of mind etc.
will be far more fruitful than working on a single input-process-output
loop, because the defining factor in most environments we care about is
agency: unpredictable, external and possibly peer, nuanced, non-identical
agency. In fact in my design the internal society and the external ones are
a single continuum; it is my design goal to make sure that "two minds are
better than one" and that "half a mind is not a crippled mind".

Specifically, and I have written this before, semiosis will work
differently depending on how you engineered your "reality engine", your mind:

mind <----> intention <----> sign <----> intention <----> mind

Let me rephrase the above diagram: 1) I think I know how the world is; 2) I
would rather counterparty B worked with me to improve it; 3) I hope these
words will do it; 4) counterparty B is luckily available to read the sign;
5) mind B will think it over and either sign back or actuate. You may be a
bit skeptical about the second "intention" above: surely I can sign "help!"
and get into your brain even if your intention was simply to sleep for 8
hours. Well, if we want to be pedantic we can always assign you an
intention after my screaming, perhaps the intention "ignore", "shut up" or
"I am the kind of person who helps people, so act". Or we could consider
rewiring my communication scheme; I do not consider it either fixed or final.
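The five steps above can be sketched as two toy minds with private, overlapping lexicons. This is a minimal illustration only; all class and method names here are hypothetical, not part of any existing system:

```python
# mind <-> intention <-> sign <-> intention <-> mind, as a toy loop.

class Mind:
    def __init__(self, name, lexicon):
        self.name = name
        self.lexicon = lexicon          # sign -> meaning, private to this mind

    def form_intention(self, goal):
        # steps 1-2: model the world, decide to recruit a counterparty
        return {"goal": goal, "needs_peer": True}

    def encode(self, intention):
        # step 3: pick a sign and hope it carries the intention
        for sign, meaning in self.lexicon.items():
            if meaning == intention["goal"]:
                return sign
        return "???"                    # no shared sign yet

    def receive(self, sign):
        # steps 4-5: the receiving mind forms its own intention about the sign
        meaning = self.lexicon.get(sign)
        if meaning is None:
            return ("puzzle", sign)     # unknown sign: elucidate later
        return ("act", meaning)

a = Mind("A", {"help!": "assist"})
b = Mind("B", {"help!": "assist"})      # overlapping but non-identical minds

sign = a.encode(a.form_intention("assist"))
print(b.receive(sign))                  # ('act', 'assist')
```

Note that the "second intention" lives entirely in the receiver: mind B decides on its own whether the sign triggers action, a reply, or a puzzle to shelve.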

Quite obviously most AGIers sweat the mind part, while PM decided to look
closer at signs, for whatever reason (troubled by grounding, perhaps?).
Now, the reality engine can be constructed in loads of different ways,
symbolic or connectionist, with a little, some or a lot of built-in biases,
etc. It goes without saying that one of the biases in a social agent is to
look for signs. Again, signing could be attempted in many different ways:
building a language from scratch, sharing a few building blocks or,
impossibly for human agents, sharing and replicating whole cognitive units
between agents ("here is my bicycle recognizer, here's my Chinese speech
synthesis module, now let's move on").

Interestingly, I have certain thoughts (mostly prejudices) about, let's say,
Romanians. Of course it is an extremely fluid category that objectively
will be hard to tie down to any particular individual, as the ones I do
know are all different, and certainly I am quite clueless about the
geographical borders of Romania; even if I were clued up, it would be
indefensible to say that someone from the Romanian side of a hill is an
instance of a concept different from the concepts available on the
Hungarian side of the hill. The reason I am bringing this up is both a) to
more or less reject Platonic eternal concepts (except in the trivial sense
that everything expressed or impossible to express is a Platonic concept)
and b) to suggest that there is no Romanian concept in my reality engine,
except to the degree that I have integrated the linguistic engine into the
reality engine (Wittgenstein), and I can think all I want about the concept
of a Romanian (human individual) without really bothering with a reality
check, just as I could be pontificating about Klingons and dodos,
especially blue-beaked dodos.

Hopefully you agree with me that the concept of blue-beaked dodos exists
even though you have no idea what they might look like, and frankly someone
will have to disprove their actual existence (Popper). You may also agree
that without fuzzy concepts like Romanians in my mind I would live a very
poor intellectual life, perhaps one consisting exclusively of objects of
immediate gratification and survival. It is hard to argue that a can of
beer is a weaker concept than a quantum particle, and the hypothesis that
beer is a collection of quantum particles adds very little to the beer.
While beer may help a lot when dealing with quantum particles, lol.

So, where does this all leave me? It depends on the entire cognitive
architecture, end-to-end.

a) You created a "mute" reality engine based on a statistical technique,
and it can do impressively many things in its environment. Now it observes
an apparent peer and would like to harness the synergy potential, if only
you could sign your intentions. Given that the other entity has had a
different "life" and has therefore created different statistical
distributions, decision trees etc., you probably shouldn't rush into some
heavy-handed manipulation like pushing its buttons the way you would push
yours. Inevitably you would start with pantomime/body language/audiovisual
aids, which could take a very long time until, incrementally, you would be
able to compress it to "me Tarzan, you Jane, me left, you right, we kill
antelope", and perhaps you would take it from there. In some ancient
postings I inquired about possible cognitive primitives, things a
human-like intellect should know a priori, and "kill" is one of them
(regardless of how long it takes for a child to learn about death and all
the "your mom went on a very long trip" stuff).
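That incremental "pantomime, then compression to a shared sign" process can be sketched as a toy repeated game, loosely in the style of a naming game between two agents. Everything here (agent structure, sign format, adoption rule) is an illustrative assumption, not a claim about how the bootstrapping must work:

```python
import random

random.seed(0)                         # reproducible sign invention

objects = ["antelope", "left", "right"]

class Agent:
    def __init__(self):
        self.vocab = {}                # object -> preferred sign

    def sign_for(self, obj):
        if obj not in self.vocab:
            # no sign yet: invent one (the "pantomime" stage, compressed)
            self.vocab[obj] = f"s{random.randint(0, 999)}"
        return self.vocab[obj]

def interact(speaker, hearer, obj):
    sign = speaker.sign_for(obj)
    if hearer.vocab.get(obj) == sign:
        return True                    # success: the sign is already shared
    hearer.vocab[obj] = sign           # failure: hearer adopts speaker's sign
    return False

tarzan, jane = Agent(), Agent()
for _ in range(10):                    # a few rounds of interaction
    for obj in objects:
        speaker, hearer = random.sample([tarzan, jane], 2)
        interact(speaker, hearer, obj)

# After a few rounds both vocabularies agree on every object.
print(tarzan.vocab == jane.vocab)      # True
```

The point of the sketch is the cost profile: agreement emerges only through repeated grounded interactions, which is exactly why the process "could take a very long time" for agents that cannot copy cognitive units wholesale.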

b) You created a reality engine with a heavy dose of symbols built in; it
knows thousands of handcrafted categories like "dangerous snake" and "ripe
fruit" and is even experimenting with the world, creating (inside the
reality engine) an avalanche of faux-concepts like the concept "Romanian"
for individuals who hold passports with that indication, and trying to
generalize their traits. Then any new Romanian passport holder becomes an
instance of the type Romanian, and you allocate some resources to this data
survey and analysis of yours, depending on the relevance of this type to
your survival/gratification. Then you meet another reality engine that
knows about snakes and fruits, but among its peers it is mainly running a
different experiment: right-handed vs left-handed individuals. You want to
share your interim findings and retrieve your counterparty's, but you have
no clue what each other's work entails; even "left-handed" is unknown to
you as a term/sign. How are you going to proceed? You probably have to
bring the mystery sign down all the way to your reality engine and allocate
some resources (depending on how much you love/trust/fear your peer) for
its elucidation.
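The "bring the mystery sign down to the reality engine" step can be made concrete as a queue with a trust-weighted budget. This is a sketch under my own assumptions (the class, the budget formula, the resource units are all made up for illustration):

```python
# Scenario (b): a peer's sign ("left-handed") has no entry in your reality
# engine, so you queue it for elucidation with a budget that scales with
# how much you love/trust/fear the peer.

class RealityEngine:
    def __init__(self, categories):
        self.categories = set(categories)   # handcrafted symbols built in
        self.pending = []                    # mystery signs awaiting grounding

    def receive_sign(self, sign, trust):
        if sign in self.categories:
            return f"known: {sign}"
        # unknown sign: swallow it whole, allocate elucidation resources
        budget = round(10 * trust)           # arbitrary resource units
        self.pending.append((sign, budget))
        return f"queued '{sign}' with budget {budget}"

engine = RealityEngine({"dangerous snake", "ripe fruit", "Romanian"})
print(engine.receive_sign("ripe fruit", trust=0.9))    # known: ripe fruit
print(engine.receive_sign("left-handed", trust=0.6))   # queued with budget 6
```

The design point is that unknown signs are not rejected and not immediately grounded either; they sit in a pending state whose priority is a social judgment about the sender, not a property of the sign itself.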

I know that the subtleties of these descriptions and examples will go
unnoticed by most; it is not a matter of education or IQ, rather a matter
of tuning in. If you have tuned in, you may agree that semiosis is largely
a non-problem. How do you do your basic semiosis? You are hardwired (and if
you are a machine you should be hardwired) to quickly acquire/agree upon an
everyday vocabulary. How do you acquire new signs, hopefully more advanced
ones? You swallow them whole, try to make sense of them or at least to
get value out of them, and ideally in good time you spit them out.

Of course I am not an expert in Peircean semiotics or most of the other
branches of science cited in the bibliography you shared, but reading that
they shied away from investigating language development and instead looked
into how to share or ground a few words, and that in 20 or 60 years of
research, is semiosis enough for me! And as much as I applaud PM's attempt
to categorize different AGI approaches, I think that the world needs my
taxonomy, which is more focused on how agent societies could develop
language and how the workload inside a brain or inside a society is
distributed elastically and robustly (you don't want slow learners, liars
or plain "idiots" slowing down your society or brain to a breaking point,
do you?).

I guess the world is not ready for my taxonomy yet, so I haven't bothered
producing it ;) But it is in the works, and I urge interested parties to
explore all pathways and develop all taxa and phyla, even the ones they do
not like.

PS A bit about grounding: I don't think I have all the answers, but I do
have two: a) an embodied, autonomously surviving agent is de facto
grounded; b) all agents are grounded in "their own way", grounding
resembling "life" and thus forever too subtle, controversial and possibly
impossible to define.

AT



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424