[agi] Probability Processor

2010-08-17 Thread Jan Klauck
--- quotes

The US Defense Advanced Research Projects Agency financed the basic
research necessary to create a processor that thinks in terms of
probabilities instead of the certainties of ones and zeros.
(...)
So we have been rebuilding probability computing from the gate level
all the way up to the processor.
(...)
The probability processing that Lyric has invented doesn't do the
on/off processing of a normal logic circuit, but rather makes
transistors function more like tiny dimmer switches, letting electron
flow rates represent the probability of something happening.
(...)
Reynolds says that a data center filled with servers that are
calculating probabilities for, say, a financial model, will be able
to consolidate from thousands of servers down to a single GP5 appliance
to calculate probabilities.
(...)
Digital logic that takes 500 transistors to do a probability multiply
operation, for instance, can be done with just a few transistors on
the Lyric chips. With an expected factor of 1,000 improvement over
general purpose CPUs running probability algorithms, the energy
savings of using GP5s instead of, say, x64 chips will be immense.
(...)
programming language, which is called Probability Synthesis to
Bayesian Logic, or PSBL for short.
---

Hm. Wow?

(DARPA funds Mr Spock on a Chip)
http://www.theregister.co.uk/2010/08/17/lyric_probability_processor/
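For scale: the "probability multiply" they mention is nothing exotic, it's the operation at the heart of every Bayesian update. A minimal sketch in Python of what such a chip would compute natively (illustrative only, nothing to do with Lyric's actual PSBL):

# Minimal sketch: the "probability multiply" a Bayesian update performs.
# Illustrative only -- not Lyric's PSBL, just the operation such a chip
# would compute natively instead of emulating it with ~500-transistor
# digital multipliers.

def bayes_update(prior, likelihood_true, likelihood_false):
    """Return P(H | E) given P(H), P(E | H) and P(E | not H)."""
    joint_true = prior * likelihood_true            # the probability multiply
    joint_false = (1.0 - prior) * likelihood_false
    return joint_true / (joint_true + joint_false)

# Example: a weak prior combined with fairly strong evidence.
print(bayes_update(prior=0.1, likelihood_true=0.8, likelihood_false=0.2))
# ~0.308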




Re: [agi] AGI Alife

2010-08-01 Thread Jan Klauck
Ian Parker wrote

 I would like your
 opinion on *proofs* which involve an unproven hypothesis,

I've no elaborated opinion on that.




Re: [agi] AGI Int'l Relations

2010-08-01 Thread Jan Klauck
Ian Parker wrote

 McNamara's dictum seems on
 the face of it to contradict the validity of Psychology as a
 science.

I don't think so. That people switch to improvisation in unforeseen
events isn't surprising. Even an AGI, confronted with a novel
situation and lacking data, models, and rules for it, has to
switch to ad-hoc heuristics.

 Psychology, if it is a valid science, can be used for modelling.

True. And it's used for that purpose. In fact, some psychological
models are good enough that simulation results are consistent
with what is found empirically in the real world.

 Some of what McNamara has to say seems to me to be a little bit
 contradictory. On the one hand he espouses *gut feeling*. On the other
 he says you should be prepared to change your mind.

I don't see the contradiction. Changing one's mind refers to one's
assumptions and conceptual framings. You always operate under uncertainty
and should be open to re-evaluating what you believe.

And the lower the probability of an event, the less prepared you are
for it, and the more you fall back on gut feelings, since you lack
empirical experience. Likely one's gut feelings still operate within
one's frame of mind.

So these are two different levels.

 John Prescott at the Chilcot Iraq inquiry said that the test of
 politicians was not hindsight, but courage and leadership. What the 
 does he mean.

The rule of thumb is that it's better to do something than to do nothing.
You act, others have to react. As long as you lead the game, you can
correct your own errors. But when you hesitate, the other parties will
move first and you eat what they hand out to you.

And don't forget that people still prefer alpha males who lead,
not those who think deeply. It's more important to unite the tribe
with screams and jumps against the enemy than to reason about budgets
or rule of law--gawd how boring... :)

 It seems that *getting things right* is not a priority
 for politicians.

Keeping things running is the priority.

--- Now to the next posting ---

 This is an interesting article.

Indeed.

 Google is certain to uncover the *real motivators.*

Sex and power.





Re: [agi] AGI Int'l Relations

2010-08-01 Thread Jan Klauck
Steve Richfield wrote

 Have you ever taken a dispute, completely deconstructed it to determine
 its structure, engineered a prospective solution, and attempted to
 implement it?

No.

 How can you, the participants
 on this forum, hope to ever bring stability

That depends on your definition of stability.

Progress is often triggered by instability and leads to new forms
of instability. There shouldn't be too much instability in the same
sense that too much stability is also bad.

 Similarly, I suspect that demonstrated skill in IR
 is a prerequisite to creating any sort of effective IR program.

There actually were and are successful IR people. It's not all war
and disaster out there. And BTW, IR is more than just conflicts.
Successful trade agreements, migration policies, and scientific and
technological cooperation are also in the domain of IR.

And I'm not looking for an autonomous IR program, but asking whether
support systems are used and, if so, of what sort.

 For example, the apparently obvious cure for global warming

This now competes for first place on my list of your world-improvement
approaches with your idea of housing old men with young women in
abandoned mines to breed a long-lived human species. ;)

 YES. Some say that my proposal for bulldozing the upwind strips of the
 continents is irrational, not because it won't work, but because it hasn't
 been experimentally proven. Once past computer simulations, the only way
 to prove it is to try it.

My proposal is to nuke the US and China since they are the two top
polluters on Earth. Some say that my proposal is irrational, not because
it won't work, but because it hasn't been experimentally proven. The
only way to prove it is to try it.

 Judge for yourself which side of this argument is
 irrational.

Well... :)




Re: [agi] AGI Int'l Relations

2010-08-01 Thread Jan Klauck
Steve Richfield wrote

 I suspect that this tool could work better than any AGI in the absence of
 such a tool.

I see an AGI more as a support tool that collects and assesses data,
creates and evaluates hypotheses, develops goals and plans how to reach
them and assists people with advice. The logic stuff would already be
built into all that.

 My simple (and completely unacceptable) cure for this is to tax savings,
 to force the money back into the economy.

You have either consumption or savings. The savings are put back into
the economy in the form of credit to those who invest the money.
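In textbook accounting terms, the standard identity behind this (just to make the mechanism explicit, nothing beyond Econ 101):

  Y = C + S        (income is either consumed or saved)
  S = I            (the savings are lent on and become investment)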






[agi] AGI Int'l Relations

2010-07-30 Thread Jan Klauck
(If you don't have time to read all this, scroll down to the
questions.)

I'm writing an article on the role of intelligent systems in the
field of International Relations (IR). Why IR? Because in today's
(and more so in tomorrow's) world the majority of national policies
are influenced by foreign affairs--trade, migration, technology,
global issues, etc. (And because I got invited to write such an
article for the IR community.)

Link for a quick overview:
http://en.wikipedia.org/wiki/International_relations

The problem of foreign and domestic policy-making is to have
appropriate data sources, models of the world, and useful goals.
Ideally both sides of the equation are brought into balance, which
is difficult, of course.
Modern societies become more pluralistic, the world becomes more
polycentric, technologies and social dynamics change faster, and
the overall scene becomes more complex. That's the trend.
To make sense of all that, policy/decision-makers have to handle
this rising complexity.

I know of several (academic) approaches to model IR, conflicts,
and macroeconomic and social processes. Only a few are useful. And
fewer are actually used (e.g., in tax policy, economic policy).
It's possible that some use even narrow AI for specific tasks.
But I'm not aware of intelligent systems used by the IR community.
From what I see, they rely more on studies done by analysts and
on news/intelligence reports.

So my questions:

(1) Do you know of intelligent systems for situational awareness,
decision support, policy implementation and control that are used
by the IR community (in whatever country)?

(2) Or that are proposed to be used?

(3) Do you know of any trends in this direction? Like extended
C4ISR or ERP systems?

(4) Do you know of intelligent systems used in the business world
for strategic planning and operational control that could be used
in IR?

(5) Historical examples? Like
http://en.wikipedia.org/wiki/Project_Cybersyn
for the real-time control of the planned economy

(6) Do you think the following statement is useful?
Policy-making is a feedback loop which consists of awareness-
decision-planning-action, where every part requires experience,
trained cognitive abilities, high speed, and precision of perception
and assessment.
(Background: an ideal field for a supporting AGI to work in.)

(7) Further comments?

Thanks,
Jan




Re: [agi] AGI Alife

2010-07-30 Thread Jan Klauck
Ian Parker wrote

 Then define your political objectives. No holes, no ambiguity, no
 forgotten cases. Or does the AGI ask for our feedback during mission?
 If yes, down to what detail?

 With Matt's ideas it does exactly that.

How does it know when to ask? You give it rules, but those rules can
be imperfect in some way. How are its actions monitored and sanctioned?
And hopefully it's clear that we are now far from mathematical proof.

 No we simply add to the axiom pool.

Adding is simple, proving is not. Especially when the rules, goals,
and constraints are not arithmetic but ontological and normative
statements. Whether by NL or a formal system, it's error-prone to
specify our knowledge of the world (much of it is implicit) and
teach it to the AGI. It's similar to law, which is similar to math,
with referenced axioms and definitions and a substitution process.
You often find flaws--most are harmless, some are not.

Proofs give us islands of certainty in an explored sea within the
ocean of the possible. We end up with heuristics. That's what this
discussion is about, if I remember right. :)

cu Jan




Re: [agi] AGI Int'l Relations

2010-07-30 Thread Jan Klauck
Ian Parker wrote

 games theory

It produced many studies and many strategies, but they weren't used that
much in day-to-day business. It's used more as a general guide.
And in times of crisis, decision-makers preferred to rely on gut feelings.
E.g., see
http://en.wikipedia.org/wiki/The_Fog_of_War

 How do you cut
 Jerusalem? Israel cuts and the Arabs then decide on the piece they want.
 That is the simplest model.

For every complex problem there is an answer that is clear, simple,
and wrong. (H. L. Mencken)

SCNR. :)

 This brings me to where I came in. How do you deal with irrational
 decision
 making. I was hoping that social simulation would be seeking to provide
 answers. This does not seem to be the case.

Models of limited rationality (like bounded rationality) are already
used, e.g., in resource management and land use studies, peace and
conflict studies, and some more.
The problem with those models is to say _how_much_ irrationality there
is. We can assume (and model) perfect rationality and then measure the
gap. Empirically, most actors aren't fully irrational and don't behave
randomly, so they approach the rational assumptions. What's more often
missing is that actors lack information or the means to utilize it.
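A toy sketch of what "measure the gap" can mean in practice (my own illustration in Python, not a model from those studies): compare the payoff an actor actually achieves against the payoff a perfectly rational, fully informed actor would achieve.

# Toy sketch (my own illustration): measure how far an actor's choices
# fall short of the perfectly rational benchmark.
import random

options = {"a": 5.0, "b": 3.0, "c": 1.0}   # true payoffs

def rational_choice(payoffs):
    # Full information: always pick the best option.
    return max(payoffs, key=payoffs.get)

def bounded_choice(payoffs, known_fraction=0.5):
    # The actor only knows a random subset of the options.
    known = dict(random.sample(sorted(payoffs.items()),
                               max(1, int(len(payoffs) * known_fraction))))
    return max(known, key=known.get)

runs = 10000
gap = sum(options[rational_choice(options)] - options[bounded_choice(options)]
          for _ in range(runs)) / runs
print("average rationality gap:", gap)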




Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 There are the military costs,

Do you realize that you often narrow a discussion down to military
issues of the Iraq/Afghanistan theater?

Freeloading in social simulation isn't about guys using a plane for
free. When you analyse or design a system, you look for holes in the
system that allow people to exploit it. In complex systems that happens
often. Most freeloading isn't much of a problem, just friction, but
some forms can damage the system too much. You have that in
the health system, social welfare, subsidies and funding, the usual
moral hazard issues in administration, services, and so on.

To come back to AGI: when you hope to design, say, a network of
heterogeneous neurons (taking Linas' example), you should be interested
in excluding mechanisms that allow certain neurons to consume resources
without delivering something in return, because of the way resource
allocation is organized. These freeloading neurons could go undetected
for a while, but when you scale the network up or confront it with novel
inputs they could make it run slowly or even break it.
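A minimal sketch in Python of the bookkeeping I mean (names and threshold are made up): track what each unit consumes versus what it delivers back, and flag the freeloaders.

# Minimal sketch (names and threshold are made up): flag units that
# consume shared resources without contributing anything in return.
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    consumed: float = 0.0      # resources drawn from the pool
    contributed: float = 0.0   # useful output delivered back

def freeloaders(units, min_ratio=0.1):
    """Return units whose contribution/consumption ratio is suspiciously low."""
    flagged = []
    for u in units:
        if u.consumed > 0 and u.contributed / u.consumed < min_ratio:
            flagged.append(u.name)
    return flagged

units = [Unit("n1", consumed=10, contributed=8),
         Unit("n2", consumed=12, contributed=0.5),   # freeloader
         Unit("n3", consumed=0,  contributed=0)]
print(freeloaders(units))   # ['n2']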

 If someone were to come
 along in the guise of social simulation and offer a reduction in
 these costs the research would pay for itself many times over.

SocSim research into peace and conflict studies isn't new. And
some people in the community work on the Iraq/Afghanistan issue (for
the US).

 That is the way things should be done. I agree absolutely. We could in
 fact
 take steepest descent (Calculus) and GAs and combine them together in a
 single composite program. This would in fact be quite a useful exercise.

Just a note: Social simulation is not so much about GAs. You use
agent systems and equation systems. Often you mix both, in that you
define the agents' behavior and the environment via equations, let
the sim run, and then describe the results in statistical terms or
with curve fitting, in equations again.
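Roughly like this (a stripped-down Python sketch, not any particular model): agents follow a simple behavioral equation, the sim runs, and the aggregate outcome is summarized statistically.

# Stripped-down sketch of the agent/equation mix, not any particular model:
# agents follow a simple behavioral rule, the sim runs, and the results
# are summarized statistically.
import random
import statistics

def step(wealth, tax_rate=0.1):
    # Behavioral "equation": earn a random income, pay a flat tax,
    # and the tax pot is redistributed equally.
    incomes = [random.uniform(0, 2) for _ in wealth]
    pot = sum(i * tax_rate for i in incomes)
    share = pot / len(wealth)
    return [w + i * (1 - tax_rate) + share for w, i in zip(wealth, incomes)]

wealth = [1.0] * 100
for _ in range(200):
    wealth = step(wealth)

# Describe the outcome in statistical terms.
print("mean:", statistics.mean(wealth))
print("stdev:", statistics.stdev(wealth))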

 One last point. You say freeloading can cause a society to disintegrate.
 One
 society that has come pretty damn close to disintegration is Iraq.
 The deaths in Iraq were very much due to sectarian blood letting.
 Unselfishness if you like.

Unselfishness gone wrong is a symptom, not a cause. The causes for
failed states are different.





Re: [agi] Tweaking a few parameters

2010-07-28 Thread Jan Klauck
A. T. Murray wrote

 Robot:  I AM ANDRU
 Robot:  I AM ANDRU
 Robot:  ANDRU HELPS KIDS
 Robot:  KIDS MAKE ROBOTS
 Robot:  ROBOTS NEED ME
 Robot:  I IS I
 Robot:  I AM ANDRU
 Robot:  ANDRU HELPS KIDS
 Robot:  KIDS MAKE ROBOTS

 For the first time in our dozen-plus years of
 developing MindForth, the AI acts like an
 intelligence struggling to express itself,

An artificial retard?

 We seem to be dealing
 with a true artificial intelligence here.

Definitely.

 Now we
 upload the AI Mind to the World Wide Awakening Web.

Next stop Singularity Station.

:)




Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 What we would want
 in a *friendly* system would be a set of utilitarian axioms.

If we program a machine for winning a war, we must think well what
we mean by winning.

(Norbert Wiener, Cybernetics, 1948)

 It is also important that AGI is fully axiomatic
 and proves that 1+1=2 by set theory, as Russell did.

Quoting the two important statements from

http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms

Gödel's first incompleteness theorem showed that Principia could not
be both consistent and complete.

and

Gödel's second incompleteness theorem shows that no formal system
extending basic arithmetic can be used to prove its own consistency.

So in effect your AGI is either crippled but safe, or powerful but
potentially behaving differently from your axiomatic intentions.

 We will need morality to be axiomatically defined.

As constraints, possibly. But we can only check the AGI at runtime for
certain behaviors (i.e., while it's active); we can't prove in
advance whether it will break the constraints or not.

Don't get me wrong: We can do a lot with such formal specifications, and we
should do them where necessary or appropriate, but we have to understand
that our set of guaranteed behaviors is a proper subset of the set of
all possible behaviors the AGI can execute. It's heuristics in the end.
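What "check at runtime" amounts to, in a deliberately simple Python sketch (the constraints and actions are invented): a monitor can veto actions as they are proposed, but it gives no proof that every possible plan respects the constraints.

# Deliberately simple sketch (constraints and actions are invented):
# a runtime monitor can veto actions as they are proposed, but it offers
# no proof that every possible plan respects the constraints.

CONSTRAINTS = [
    lambda action: action.get("harm", 0) == 0,       # "do no harm"
    lambda action: action.get("cost", 0) <= 100,     # budget limit
]

def monitor(action):
    # Check one concrete action against all constraints.
    return all(check(action) for check in CONSTRAINTS)

proposed = [{"name": "negotiate", "harm": 0, "cost": 10},
            {"name": "escalate",  "harm": 3, "cost": 500}]

for action in proposed:
    verdict = "allowed" if monitor(action) else "vetoed"
    print(action["name"], "->", verdict)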

 Unselfishness going wrong is in fact a frightening thought. It would in
 AGI be a symptom of incompatible axioms.

Which can happen in a complex system.

 Suppose system A is monitoring system B. If system B's
 resources are being used up A can shut down processes in A. I talked
 about computer gobbledegook. I also have the feeling that with AGI we
 should be able to get intelligible advice (in NL) about what was going
 wrong. For this reason it would not be possible to overload AGI.

This isn't going to guarantee that systems A, B, etc. behave in all
ways as intended, unless they are all special-purpose systems (here:
narrow AI). If A, B, etc. are AGIs, then this checking is just a
heuristic, no guarantee or proof.

 In a resource limited society freeloading is the biggest issue.

All societies are and will be constrained by limited resources.

 The fundamental fact about Western crime is that very little of it is
 to do with personal gain or greed.

I'm not so sure whether this statement is correct. It feels wrong from
what I know about human behavior.

 Unselfishness gone wrong is a symptom, not a cause. The causes for
 failed states are different.

 Axiomatic contradiction. Cannot occur in a mathematical system.

See above...





Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

 If we program a machine for winning a war, we must think well what
 we mean by winning.

 I wasn't thinking about winning a war, I was much more thinking about
 sexual morality and men kissing.

If we program a machine for doing X, we must think well what we mean
by X.

Now clearer?

 Winning a war is achieving your political objectives in the war. Simple
 definition.

Then define your political objectives. No holes, no ambiguity, no
forgotten cases. Or does the AGI ask for our feedback during mission?
If yes, down to what detail?

 The axioms which we cannot prove
 should be listed. You can't prove them. Let's list them and all the
 assumptions.

And then what? Cripple the AGI by applying just those theorems we can
prove? That excludes, of course, all those we're uncertain about. And
it's not so much a single theorem that's problematic, but a system of
axioms and inference rules that changes its properties when you
modify it, or that is incomplete from the beginning.

Example (very plain just to make it clearer what I'm talking about):

The natural numbers N are closed under addition. But N is not
closed under subtraction, since n - m < 0 where m > n.

You can prove the theorem that subtracting a positive number from
another number decreases it:

http://us2.metamath.org:88/mpegif/ltsubpos.html

but you can still have a formal system that runs into problems.
In the case of N it's the missing closure, i.e., an undefined area.
Now transfer this simple example to formal systems in general.
You have to prove every formal system as it is, not just a single
theorem. The behavior of an AGI isn't a single theorem but a system.
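The same point in executable form (a Python illustration of the missing closure, nothing more): every single step can be verified, yet the system as a whole walks out of its defined domain.

# Illustration of missing closure: every single subtraction step is
# "provably" decreasing, yet the system as a whole leaves the natural
# numbers -- the defined domain -- after a few steps.

def nat_sub(n, m):
    """Subtraction as a partial operation on the naturals."""
    if m > n:
        raise ValueError("undefined: result would leave N")
    return n - m

x = 5
try:
    for m in (2, 2, 2):     # each step is fine in isolation...
        x = nat_sub(x, m)
        print(x)
except ValueError as e:     # ...but the system as a whole breaks
    print("system failure:", e)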

 The heuristics could be tested in an off line system.

Exactly. But by definition heuristics are incomplete; their solution
space is smaller than the set of all solutions. No guarantee of the
optimal solution, just probabilities < 1, elaborated hints.

 Unselfishness going wrong is in fact a frightening thought. It would
 in
 AGI be a symptom of incompatible axioms.

 Which can happen in a complex system.

 Only if the definitions are vague.

I bet against this.

 Better to have a system based on *democracy* in some form or other.

The rules you mention are goals and constraints. But they are heuristics
that you check at runtime.





Re: [agi] AGI Alife

2010-07-27 Thread Jan Klauck
Linas Vepstas wrote

First my answers to Antonio:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

The nearest I can come up with is Goertzel's virtual pre-school idea,
where the environment is given and the proto-AGI learns within it.
It's certainly possible to place such a proto-AGI into an evolving
environment. I'm not sure how helpful this is, since now we also need
to make sense of the evolving environment in order to assess what the
agent does.

But that's far from the synthetic life approach, where environment and
agents are usually not that much pre-defined. And of the synth.
approaches I know about, they're mostly concerned with replicating
natural evolution, adaptation, self-organization, and so on. Some look into
the emergence and evolution of cooperation, but that's often very low
level and more interested in general properties; far from AGI.

2) Is it possible that some aspects of AGI could self-emerge from the
 digital evolution of intelligent autonomous agents?

I guess it's possible. But I suspect one won't come up with a mechanism
that works in an AGI system, but rather with interesting properties of an
AGI system. Most intelligent agents are faked, not really cognitive.
In a simulation you see how agents develop/select strategies and
what works in an (evolutionary) environment. Like (wild idea now) the
ability to assign parts of its cognitive capacity to memory or processing
depending on the environmental context (more memory in unchanging and
more processing in changing environments). Those properties could be
integrated later as a detail of a bigger framework.
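As a toy sketch of that wild idea (entirely made-up numbers, Python): shift a fixed cognitive budget from memory toward processing as the recent environment gets more volatile.

# Toy sketch of the wild idea above (entirely made-up numbers):
# split a fixed cognitive budget between memory and processing
# depending on how volatile the recent environment has been.
import statistics

def allocate(observations, budget=100):
    volatility = statistics.pstdev(observations)
    processing = budget * volatility / (volatility + 1.0)
    memory = budget - processing
    return {"memory": memory, "processing": processing}

print(allocate([1.0, 1.0, 1.0, 1.0]))    # stable world: mostly memory
print(allocate([0.0, 5.0, -3.0, 8.0]))   # changing world: more processing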

3) Is there any research group trying to converge both approaches?

My best ad-hoc idea is to scan through last year's alife conference
program, look for papers that are promising, contact the authors and
ask whether they are into AGI or know people who are.

http://www.ecal2009.org/documents/ECAL2009_program.pdf

One of the topics was artificial consciousness and I saw several
papers going into this direction, often indirectly. Like the Swarm
Cognition and Artificial Life paper on p.34 or the first poster on
p.47.

Now to Linas' part:

 Seems like there could be many many interesting questions.

Many of these are specialized issues that are researched in alife, but
more so in social simulation. The Journal of Artificial Societies and
Social Simulation

http://jasss.soc.surrey.ac.uk/JASSS.html

is a good starting point if anyone is interested.

cu Jan




Re: [agi] How do we hear music

2010-07-22 Thread Jan Klauck
Mike Tintner trolled

 And maths will handle the examples given :

 same tunes - different scales, different instruments
 same face -  cartoon, photo
 same logo  - different parts [buildings/ fruits/ human figures]

Unfortunately I forgot. The answer is somewhere down there:

http://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
http://en.wikipedia.org/wiki/Pattern_recognition
http://en.wikipedia.org/wiki/Curve_fitting
http://en.wikipedia.org/wiki/System_identification
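And since the "same tune, different scales" case has a one-line answer, a small Python illustration (mine, not from those links): a melody transposed to another key keeps the same interval sequence, and that sequence is the invariant a recognizer can match on.

# Small illustration: the same tune played in two different keys has the
# same interval sequence, which is the invariant a recognizer matches on.

def intervals(midi_notes):
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

tune_in_c = [60, 62, 64, 65, 67]       # C D E F G
tune_in_g = [67, 69, 71, 72, 74]       # same tune transposed up a fifth

print(intervals(tune_in_c) == intervals(tune_in_g))   # True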

 revealing them to be the same  -   how exactly?

Why should anybody explain that mystery to you? You are not an
accepted member of the Grand Lodge of AGI Masons or its affiliates.

 Or you could take two arseholes -  same kind of object, but radically
 different configurations - maths will show them to belong to the same
 category, how?

How will you do it? By licking them?






Re: [agi] The Collective Brain

2010-07-21 Thread Jan Klauck
Mike Tintner wrote

 You partly illustrate my point - you talk of artificial brains as if
 they actually exist

That's the magic of thinking in scenarios. For you it may appear as if
we couldn't differentiate between reality and a thought experiment.

 By implicitly pretending that artificial brains exist - in the form of
 computer programs -  you (and most AGI-ers), deflect attention away from
 all
 the unsolved dimensions of what is required for an independent
 brain-cum-living system, natural or artificial.

Then bring this topic up. But please in an educated way, and not with
the same half-understanding of AGI and math you demonstrate here.
But to be honest, I expect you to talk about this with your usual
misunderstandings and then wonder why nobody reacts (positively) to
it--and then you'll again run around and whine that we don't get it.

(And what's an artificial brain-cum-living system?)

 Yes you may know these things some times as you say, but most of the
 time
 they're forgotten.

There are other topics that often require more focus at this time.
People are working on details you usually don't understand and don't
care to understand.




Re: [agi] Seeking Is-a Functionality

2010-07-20 Thread Jan Klauck
Steve Richfield wrote

 maybe with percentages
 attached, so that people could announce that, say, I am 31% of the
 way to having an AGI.

Not useful. AGI is still a hypothetical state and its true composition
remains unknown. At best you can measure how much of an AGI plan is
completed, but that's not necessarily equal to actually having an AGI.

Of course, you could use a human brain as an upper bound, but that's
still questionable, because--as I see it--most AGI designs aren't
intended to be isomorphic, and I don't know whether the brain is
understood well enough today to use it as an invariant measure.

cu Jan




Re: [agi] The Collective Brain

2010-07-20 Thread Jan Klauck
Mike Tintner wrote

 No, the collective brain is actually a somewhat distinctive idea.

Just a way of looking at social support networks. Even social
philosophers centuries ago had similar ideas--they were lacking our
technical understanding and used analogies from biology (organicism)
instead.

 more like interdependently functioning with society

As I said, it's long known to economists and sociologists. There's even
an African proverb pointing at this: It takes a village to raise a
child.
Systems researchers have investigated those interdependencies for decades.

 Did you watch the talk?

No Flash here. I just respond to what you're writing.

 The evidence of the idea's newness is precisely the discussions of
 superAGI's and AGI futures by the groups here

We talked about the social dimensions at times. It's not the most
important topic around here, but that doesn't mean we're all ignorant.

In case you haven't noticed, I'm not building an AGI; I'm interested
in the stuff around it, e.g., tests, implementation strategies, etc.,
by means of social simulation.

 Your last question is also an example of cocooned-AGI thinking? Which
 brains?  The only real AGI brains are those of living systems

A for Artificial. Living systems don't qualify for A.

My question was about certain attributes of brains (whether natural or
artificial). Societies are constrained by their members' capacities.
A higher individual capacity can lead to different dependencies and
new ways in which groups and societies work.





[agi] Mathematical models of autonomous life

2008-11-03 Thread Jan Klauck
Researchers from the German Max Planck Society claim to
have developed mathematical methods that allow (virtual and
robotic) embodied entities to evolve on their own.
They begin in a child-like state and develop by exploring
both their environment and their personal capabilities.

Well, not very new for this list. But it involves virtual
dogs... ;)

Their press release in German:

http://preview.tinyurl.com/62tgom

http://www.mpg.de/bilderBerichteDokumente/dokumentation/pressemitteilungen/2008/pressemitteilung20081031/index.html

The translation via babelfish:

http://preview.tinyurl.com/55k53t

Note the video gallery:

http://robot.informatik.uni-leipzig.de/research/videos/#SECTION20

cu Jan




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Jan Klauck
Matt,

 People who haven't studied
 logic or its notation can certainly learn to do this type of reasoning.

Formal logic doesn't scale up very well in humans. That's why this
kind of reasoning is so unpopular. Our capacities are small, and
we connect to other humans for a kind of distributed problem
solving. Logic is just a tool for us to communicate and reason
systematically about problems we would otherwise mess up.

 So perhaps someone can explain why we need formal knowledge
 representations to reason in AI.

Using a formal knowledge representation supports us in checking what
an AI does when it solves complex problems. So it should be convenient
for _us_ and not necessarily for the AI. As I said, it's just a tool.
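A minimal sketch of what I mean by "convenient for us" (made-up rules, Python): a formal rule base plus forward chaining lets a human read off exactly why the system concluded what it did.

# Minimal sketch (made-up rules): a formal rule base plus forward chaining,
# so a human can read off exactly why the system concluded what it did.

rules = [
    ({"rain"}, "wet_streets"),
    ({"wet_streets", "freezing"}, "ice"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(premises)} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = forward_chain({"rain", "freezing"}, rules)
print(facts)
for step in trace:       # the human-readable justification
    print(step)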

Just my thoughts...

cu Jan




Re: [agi] any advice

2008-09-09 Thread Jan Klauck
 Dalle Molle Institute of Artificial Intelligence
 University of Verona (Artificial Intelligence dept)

If they were corporations, from which one would you buy shares?

I would go for IDSIA. I mean, hey, you have Schmidhuber around. :)

Jan

