Re: [agi] AGI & Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

>> "If we program a machine for winning a war, we must think well what
>> we mean by winning."
>
> I wasn't thinking about winning a war; I was thinking much more about
> sexual morality and men kissing.

If we program a machine for doing X, we must think well what we mean
by X.

Now clearer?

> "Winning" a war is achieving your political objectives in the war. Simple
> definition.

Then define your political objectives. No holes, no ambiguity, no
forgotten cases. Or does the AGI ask for our feedback during the mission?
If so, down to what level of detail?

> The axioms which we cannot prove
> should be listed. You can't prove them. Let's list them and all the
> assumptions.

And then what? Cripple the AGI by applying just those theorems we can
prove? That of course excludes all those we're uncertain about. And
it's not so much a single theorem that's problematic as a system of
axioms and inference rules that changes its properties when you
modify it, or that is incomplete from the beginning.

Example (very plain just to make it clearer what I'm talking about):

The natural numbers N are closed under addition. But N is not
closed under subtraction, since n - m < 0 whenever m > n, and
negative numbers are not in N.

You can prove the theorem that subtracting a positive number from
another number decreases it:

http://us2.metamath.org:88/mpegif/ltsubpos.html

but you can still have a formal system that runs into problems.
In the case of N it is the missing closure, i.e., an undefined region.
Now transfer this simple example to formal systems in general.
You have to prove properties of the formal system as a whole, not just
a single theorem. The behavior of an AGI isn't a single theorem but a system.
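
A toy check of the same point in code (my own sketch, not part of the
metamath page above): sampling pairs of naturals shows addition stays
inside N while subtraction does not.

# Illustration only: N is closed under addition but not under subtraction.
from itertools import product

def counterexamples(op, in_domain, sample):
    """Return pairs (a, b) for which op(a, b) falls outside the domain."""
    return [(a, b) for a, b in product(sample, repeat=2)
            if not in_domain(op(a, b))]

def is_natural(x):
    return isinstance(x, int) and x >= 0

sample = range(5)
print(counterexamples(lambda a, b: a + b, is_natural, sample))  # [] -> closed
print(counterexamples(lambda a, b: a - b, is_natural, sample))  # (0, 1) etc. -> not closed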

> The heuristics could be tested in an offline system.

Exactly. But by definition heuristics are incomplete: the solution
space they cover is smaller than the set of all solutions. There is no
guarantee of the optimal solution, just probabilities < 1, educated hints.

>>> Unselfishness going wrong is in fact a frightening thought. It would
>>> in AGI be a symptom of incompatible axioms.
>>
>> Which can happen in a complex system.
>
> Only if the definitions are vague.

I bet against this.

> Better to have a system based on "*democracy*" in some form or other.

The rules you mention are goals and constraints. But they are heuristics
that you check at runtime.





Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
On 28 July 2010 19:56, Jan Klauck  wrote:

> Ian Parker wrote
>
> > What we would want
> > in a "*friendly"* system would be a set of utilitarian axioms.
>
> "If we program a machine for winning a war, we must think well what
> we mean by winning."
>

I wasn't thinking about winning a war; I was thinking much more about sexual
morality and men kissing.

"Winning" a war is achieving your political objectives in the war. Simple
definition.

>
> (Norbert Wiener, Cybernetics, 1948)
>
> > It is also important that AGI is fully axiomatic
> > and proves that 1+1=2 by set theory, as Russell did.
>
> Quoting the two important statements from
>
>
> http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms
>
> "Gödel's first incompleteness theorem showed that Principia could not
> be both consistent and complete."
>
> and
>
> "Gödel's second incompleteness theorem shows that no formal system
> extending basic arithmetic can be used to prove its own consistency."
>
> So in effect your AGI is either crippled but safe, or powerful but
> potentially behaving differently from your axiomatic intentions.
>

You have to state what your axioms are. Gödel's theorem does indeed imply
that you have to make some statements which are unprovable.
What I was in fact thinking of was something like Mizar.
Mathematics starts off with simple ideas. The axioms which we cannot prove
should be listed. You can't prove them, so let's list them along with all
the assumptions.

If we have a Mizar proof we assume things, and argue the case for a theorem
on what we have assumed. What you should be able to do is get from the ideas
of Russell and Bourbaki to something really meaty like Fermat's Last
Theorem, or the Riemann hypothesis.

The organization of Mizar (and Alcor, which is a front end to it) is very
much a part of AGI. Alcor in fact has to do a job similar to Google's: a
search for theorems. Mizar, though, is different from Google in that we have
lemmas. You prove something by linking the lemmas up.

Suppose I were to search for "Riemann Hypothesis". Alcor should give me all
the theorems that depend on it. It should tell me about the density of
primes. It should tell me about the Goldbach conjecture, whose weak form
Hardy and Littlewood showed to follow from the generalized Riemann
hypothesis.

Google is a step towards AGI. An Alcor which could produce chains
of argument and find lemmas would be a big step to AGI.
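
As a rough illustration of the kind of lemma search I mean (the dependency
data and function below are invented for the example; this is not Alcor's
actual interface or real Mizar content):

# Hypothetical lemma-dependency search in the spirit of Alcor over Mizar.
from collections import deque

depends_on = {
    "density of primes estimate": ["Riemann Hypothesis"],
    "conditional weak Goldbach theorem": ["Riemann Hypothesis"],
    "Riemann Hypothesis": [],
}

def consequences(assumption):
    """Breadth-first search for every theorem reachable from an assumption."""
    used_by = {}
    for theorem, deps in depends_on.items():
        for d in deps:
            used_by.setdefault(d, []).append(theorem)
    found, queue = [], deque([assumption])
    while queue:
        current = queue.popleft()
        for theorem in used_by.get(current, []):
            if theorem not in found:
                found.append(theorem)
                queue.append(theorem)
    return found

print(consequences("Riemann Hypothesis"))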

Could Mizar contain knowledge which was non-mathematical? In a sense it
already can. Mizar will contain Riemannian differential geometry. This is
simply a piece of pure maths. I am allowed to make a conjecture, an axiom if
you like, that Riemannian differential geometry, in the shape of general
relativity, is the way in which the Universe works. I have stated this as
an unproven assertion, one that has been constantly verified experimentally
but remains unproven in the mathematical universe.

>
> > We will need morality to be axiomatically defined.
>
> As constraints, possibly. But we can only check the AGI in runtime for
> certain behaviors (i.e., while it's active), but we can't prove in
> advance whether it will break the constraints or not.
>
> Get me right: We can do a lot with such formal specifications and we
> should do them where necessary or appropriate, but we have to understand
> that our set of guaranteed behavior is a proper subset of the set of
> all possible behaviors the AGI can execute. It's heuristics in the end.
>

The heuristics could be tested in an offline system.

>
> > Unselfishness going wrong is in fact a frightening thought. It would in
> > AGI be a symptom of incompatible axioms.
>
> Which can happen in a complex system.
>

Only if the definitions are vague. The definition of happiness is vague.
Better to have a system based on "*democracy*" in some form or other. The
beauty of Matt's system is that we would remain ultimately in charge of the
system. We make rules such as no imprisonment without trial, a minimum of
laws restricting personal freedom (men kissing), separation of powers between
the judiciary and the executive, and the resolution of disputes without
violence. These are, I repeat, *not* fundamental philosophical principles but
rules which our civilization has devised and which have been found to work.

I have mentioned before that we could have more than one AGI system. All the
"*derived*" principles would be tested offline on another AGI system.

>
> > Suppose system A is monitoring system B. If system B's
> > resources are being used up, A can shut down processes in B. I talked
> > about computer gobbledygook. I also have the feeling that with AGI we
> > should be able to get intelligible advice (in NL) about what was going
> > wrong. For this reason it would not be possible to overload AGI.
>
> This isn't going to guarantee that system A, B, etc. behave in all
> ways as intended, unless they are all special-purpose systems (here:
> narrow AI). If A, B, etc. are AGIs, then this checking is just a
> heuristic, no guarantee or proof.
>

Re: [agi] AGI & Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

> What we would want
> in a "*friendly"* system would be a set of utilitarian axioms.

"If we program a machine for winning a war, we must think well what
we mean by winning."

(Norbert Wiener, Cybernetics, 1948)

> It is also important that AGI is fully axiomatic
> and proves that 1+1=2 by set theory, as Russell did.

Quoting the two important statements from

http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms

"Gödel's first incompleteness theorem showed that Principia could not
be both consistent and complete."

and

"Gödel's second incompleteness theorem shows that no formal system
extending basic arithmetic can be used to prove its own consistency."

So in effect your AGI is either crippled but safe, or powerful but
potentially behaving differently from your axiomatic intentions.

> We will need morality to be axiomatically defined.

As constraints, possibly. But we can only check the AGI in runtime for
certain behaviors (i.e., while it's active), but we can't prove in
advance whether it will break the constraints or not.

Get me right: We can do a lot with such formal specifications and we
should do them where necessary or appropriate, but we have to understand
that our set of guaranteed behavior is a proper subset of the set of
all possible behaviors the AGI can execute. It's heuristics in the end.
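
A minimal sketch of what such runtime checking amounts to (my own
illustration; the constraint predicates are invented): anything not
expressible as a check like this falls outside the guaranteed subset.

# Runtime-constraint sketch, illustration only.
from typing import Callable, Dict, List

Constraint = Callable[[Dict], bool]

constraints: List[Constraint] = [
    lambda action: action.get("resource_use", 0) <= 100,    # example budget
    lambda action: not action.get("irreversible", False),   # example safety rule
]

def permitted(action: Dict) -> bool:
    """Allow an action only if every known constraint holds for it."""
    return all(check(action) for check in constraints)

print(permitted({"resource_use": 42}))                        # True
print(permitted({"resource_use": 42, "irreversible": True}))  # False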

> Unselfishness going wrong is in fact a frightening thought. It would in
> AGI be a symptom of incompatible axioms.

Which can happen in a complex system.

> Suppose system A is monitoring system B. If system B's
> resources are being used up, A can shut down processes in B. I talked
> about computer gobbledygook. I also have the feeling that with AGI we
> should be able to get intelligible advice (in NL) about what was going
> wrong. For this reason it would not be possible to overload AGI.

This isn't going to guarantee that system A, B, etc. behave in all
ways as intended, unless they are all special-purpose systems (here:
narrow AI). If A, B, etc. are AGIs, then this checking is just a
heuristic, no guarantee or proof.

> In a resource limited society freeloading is the biggest issue.

All societies are and will be constrained by limited resources.

> The fundamental fact about Western crime is that very little of it is
> to do with personal gain or greed.

I'm not so sure whether this statement is correct. It feels wrong given
what I know about human behavior.

>> Unselfishness gone wrong is a symptom, not a cause. The causes for
>> failed states are different.
>
> Axiomatic contradiction. Cannot occur in a mathematical system.

See above...





Re: [agi] Huge Progress on the Core of AGI

2010-07-28 Thread David Jones
LOL. I didn't even realize that this was not his main website until today. I
must say that it seems very well put. Sorry Arthur :S

On Sun, Jul 25, 2010 at 12:44 PM, Chris Petersen  wrote:

> Don't fret; your main site's got good uptime.
>
> http://www.nothingisreal.com/mentifex_faq.html
>
> -Chris
>
>
>
> On Sun, Jul 25, 2010 at 9:42 AM, A. T. Murray  wrote:
>
>> David Jones wrote:
>> >
>> >Arthur,
>> >
>> >Thanks. I appreciate that. I would be happy to aggregate some of those
>> >things. I am sometimes not good at maintaining the website because I get
>> >bored of maintaining or updating it very quickly :)
>> >
>> >Dave
>> >
>> >On Sat, Jul 24, 2010 at 10:02 AM, A. T. Murray  wrote:
>> >
>> >> The Web site of David Jones at
>> >>
>> >> http://practicalai.org
>> >>
>> >> is quite impressive to me
>> >> as a kindred spirit building AGI.
>> >> (Just today I have been coding MindForth AGI :-)
>> >>
>> >> For his "Practical AI Challenge" or similar
>> >> ventures, I would hope that David Jones is
>> >> open to the idea of aggregating or archiving
>> >> "representative AI samples" from such sources as
>> >> - TexAI;
>> >> - OpenCog;
>> >> - Mentifex AI;
>> >> - etc.;
>> >> so that visitors to PracticalAI may gain an
>> >> overview of what is happening in our field.
>> >>
>> >> Arthur
>> >> --
>> >> http://www.scn.org/~mentifex/AiMind.html
>> >> http://www.scn.org/~mentifex/mindforth.txt
>>
>> Just today, a few minutes ago, I updated the
>> mindforth.txt AI source code listed above.
>>
>> In the PracticalAi aggregates, you might consider
>> listing Mentifex AI with copies of the above two
>> AI source code pages, and with links to the
>> original scn.org URLs, where visitors to
>> PracticalAi could look for any more recent
>> updates that you had not gotten around to
>> transferring from scn.org to PracticalAi.
>> In that way, these releases of Mentifex
>> free AI source code would have a more robust
>> Web presence (SCN often goes down) and I
>> could link to PracticalAi for the aggregates
>> and other features of PracticalAI.
>>
>> Thanks.
>>
>> Arthur T. Murray
>>
>>
>>
>>
>





Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
Unselfishness gone wrong is a symptom. I think that this and all the other
examples should be cautionary for anyone who follows the biological model.
Do we want a system that thinks the way we do? Hell no! What we would want
in a "*friendly*" system would be a set of utilitarian axioms. That would
immediately make it think differently from us.

We certainly would not want a system which would arrest men for kissing on a
park bench. In other words, we would not want a system which was
axiomatically righteous. It is also important that AGI is fully axiomatic
and proves that 1+1=2 by set theory, as Russell did. This immediately takes
it out of the biological sphere.

We will need morality to be axiomatically defined.

Unselfishness going wrong is in fact a frightening thought. It would in AGI
be a symptom of incompatible axioms. In humans it is a real problem and it
should tell us that AGI cannot and should not be biologically based.

On 28 July 2010 15:59, Jan Klauck  wrote:

> Ian Parker wrote
>
> > There are the military costs,
>
> Do you realize that you often narrow a discussion down to military
> issues of the Iraq/Afghanistan theater?
>
> Freeloading in social simulation isn't about guys using a plane for
> free. When you analyse or design a system you look for holes in the
> system that allow people to exploit it. In complex systems that happens
> often. Most freeloading isn't much of a problem, just friction, but
> some have the power to damage the system too much. You have that in
> the health system, social welfare, subsidies and funding, the usual
> moral hazard issues in administration, services a.s.o.


> To come back to AGI: when you hope to design, say, a network of
> heterogenous neurons (taking Linas' example) you should be interested
> in excluding mechanisms that allow certain neurons to consume resources
> without delivering something in return because of the way resource
> allocation is organized. These freeloading neurons could go undetected
> for a while but when you scale the network up or confront it with novel
> inputs they could make it run slow or even break it.
>

In point of fact we can look at this another way. Let's dig a little bit
deeper. If we have one AGI system we can have two (or even three; automatic
landing in fog is a triplex system). Suppose system A is monitoring system B.
If system B's resources are being used up, A can shut down processes in B.
I talked about computer gobbledygook. I also have the feeling that with AGI
we should be able to get intelligible advice (in NL) about what was going
wrong. For this reason it would not be possible to overload AGI.
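
A toy sketch of the A-monitors-B arrangement (my own illustration; the
SystemB class and its methods are invented for the example): A polls B's
resource usage and shuts down B's largest process when a budget is exceeded.

# Watchdog sketch, illustration only.
import time

class SystemB:
    def __init__(self):
        self.processes = {"planner": 40, "search": 70, "logger": 5}  # MB used

    def memory_by_process(self):
        return dict(self.processes)

    def kill(self, name):
        print("A shut down B process:", name)
        self.processes.pop(name, None)

def monitor(b, budget_mb=100, interval=0.5, cycles=3):
    """System A's loop: if B exceeds its memory budget, kill B's biggest process."""
    for _ in range(cycles):
        usage = b.memory_by_process()
        if usage and sum(usage.values()) > budget_mb:
            b.kill(max(usage, key=usage.get))
        time.sleep(interval)

monitor(SystemB())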

I have the feeling that perhaps one aim in AGI should be user-friendly
systems. One product is in fact a form filler.

As far as society is concerned, I think this all depends on how
resource-limited we are. In a resource-limited society freeloading is the
biggest issue. In our society violence in all its forms is the big issue.
One need not go to Iraq or Afghanistan for examples. There are plenty in
ordinary crime: "happy" slapping, domestic violence, violence against
children.

If the people who wrote computer viruses stole a large sum of money, what
they did would, to me at any rate, be more forgivable. People take a
delight in wrecking things for other people, while not stealing very much
themselves. Iraq, Afghanistan and suicide murder are really just an extreme
example of this. Why I come back to it is that the people feel they are
doing Allah's will. Happy slappers usually say they have nothing better to
do.

The fundamental fact about Western crime is that very little of it is to do
with personal gain or greed.

>
> > If someone were to come
> > along in the guise of social simulation and offer a reduction in
> > these costs the research would pay for itself many times over.
>
> SocSim research into "peace and conflict studies" isn't new. And
> some people in the community work on the Iraq/Afghanistan issue (for
> the US).
>
> > That is the way things should be done. I agree absolutely. We could in
> > fact
> > take steepest descent (Calculus) and GAs and combine them together in a
> > single composite program. This would in fact be quite a useful exercise.
>
> Just a note: Social simulation is not so much about GAs. You use
> agent systems and equation systems. Often you mix both in that you
> define the agent's behavior and the environment via equations, let
> the sim run and then describe the results in statistical terms or
> with curve fitting in equations again.
>
> > One last point. You say freeloading can cause a society to disintegrate.
> > One society that has come pretty damn close to disintegration is Iraq.
> > The deaths in Iraq were very much due to sectarian bloodletting.
> > Unselfishness if you like.
>
> Unselfishness gone wrong is a symptom, not a cause. The causes for
> failed states are different.
>

Axiomatic contradiction.

Re: [agi] Tweaking a few parameters

2010-07-28 Thread Jan Klauck
A. T. Murray wrote

> Robot:  I AM ANDRU
> Robot:  I AM ANDRU
> Robot:  ANDRU HELPS KIDS
> Robot:  KIDS MAKE ROBOTS
> Robot:  ROBOTS NEED ME
> Robot:  I IS I
> Robot:  I AM ANDRU
> Robot:  ANDRU HELPS KIDS
> Robot:  KIDS MAKE ROBOTS

> For the first time in our dozen-plus years of
> developing MindForth, the AI acts like an
> intelligence struggling to express itself,

An artificial retard?

> We seem to be dealing
> with a true artificial intelligence here.

Definitely.

> Now we
> upload the AI Mind to the World Wide Awakening Web.

Next stop Singularity Station.

:)




Re: [agi] AGI & Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote

> There are the military costs,

Do you realize that you often narrow a discussion down to military
issues of the Iraq/Afghanistan theater?

Freeloading in social simulation isn't about guys using a plane for
free. When you analyse or design a system you look for holes in the
system that allow people to exploit it. In complex systems that happens
often. Most freeloading isn't much of a problem, just friction, but
some have the power to damage the system too much. You have that in
the health system, social welfare, subsidies and funding, the usual
moral hazard issues in administration, services a.s.o.

To come back to AGI: when you hope to design, say, a network of
heterogeneous neurons (taking Linas' example) you should be interested
in excluding mechanisms that allow certain neurons to consume resources
without delivering something in return because of the way resource
allocation is organized. These freeloading neurons could go undetected
for a while but when you scale the network up or confront it with novel
inputs they could make it run slow or even break it.
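
A rough sketch of what detecting such freeloaders could look like (my own
illustration; the numbers and the efficiency threshold are invented): flag
units whose resource consumption is high relative to their measured
contribution to the network's output.

# Freeloader detection sketch, illustration only.
neurons = {
    # name: (resource units consumed, contribution score)
    "n1": (10.0, 8.0),
    "n2": (12.0, 0.1),   # consumes a lot, contributes almost nothing
    "n3": (3.0, 2.5),
}

def freeloaders(stats, min_efficiency=0.2):
    """Return neurons whose contribution per unit of resource is below threshold."""
    return [name for name, (cost, value) in stats.items()
            if cost > 0 and value / cost < min_efficiency]

print(freeloaders(neurons))  # ['n2']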

> If someone were to come
> along in the guise of social simulation and offer a reduction in
> these costs the research would pay for itself many times over.

SocSim research into "peace and conflict studies" isn't new. And
some people in the community work on the Iraq/Afghanistan issue (for
the US).

> That is the way things should be done. I agree absolutely. We could in
> fact
> take steepest descent (Calculus) and GAs and combine them together in a
> single composite program. This would in fact be quite a useful exercise.

Just a note: Social simulation is not so much about GAs. You use
agent systems and equation systems. Often you mix both in that you
define the agent's behavior and the environment via equations, let
the sim run and then describe the results in statistical terms or
with curve fitting in equations again.
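
A minimal sketch of that mixed approach (my own illustration; the growth and
consumption equations are invented): equation-driven agents and environment,
run the sim, then summarize the outcome statistically.

# Mixed agent/equation simulation sketch, illustration only.
import random
import statistics

def run_sim(n_agents=100, steps=50, seed=0):
    random.seed(seed)
    wealth = [1.0] * n_agents
    growth = 0.02                 # environment equation: 2% growth per step
    for _ in range(steps):
        for i in range(n_agents):
            # agent equation: growth plus noise minus a small consumption term
            wealth[i] = max(0.0, wealth[i] * (1 + growth)
                            + random.gauss(0, 0.05) - 0.01)
    return wealth

result = run_sim()
print("mean:", round(statistics.mean(result), 3),
      "stdev:", round(statistics.stdev(result), 3))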

> One last point. You say freeloading can cause a society to disintegrate.
> One society that has come pretty damn close to disintegration is Iraq.
> The deaths in Iraq were very much due to sectarian bloodletting.
> Unselfishness if you like.

Unselfishness gone wrong is a symptom, not a cause. The causes for
failed states are different.





Re: [agi] AGI & Alife

2010-07-28 Thread Matt Mahoney
Ian Parker wrote:
> Matt Mahoney has costed his view of AGI. I say that costs must be recoverable 
>as we go along. Matt, don't frighten people with a high estimate of cost. 
>Frighten people instead with the bill they are paying now for dumb systems.

It is not my intent to scare people out of building AGI, but rather to be 
realistic about its costs. Building machines that do what we want is a much 
harder problem than building intelligent machines. Machines surpassed human 
intelligence 50 years ago. But getting them to do useful work is still a $60 
trillion per year problem. It's going to happen, but not as quickly as one 
might 
hope.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Ian Parker 
To: agi 
Sent: Wed, July 28, 2010 6:54:05 AM
Subject: Re: [agi] AGI & Alife




On 27 July 2010 21:06, Jan Klauck  wrote:

>
>> Second observation about societal punishment eliminating free loaders. The
>> fact of the matter is that "*freeloading*" is less of a problem in
>> advanced societies than misplaced unselfishness.
>
>Fact of the matter, hm? Freeloading is an inherent problem in many
>social configurations. 9/11 brought down two towers, freeloading can
>bring down an entire country.
>
>
>
There are very considerable knock-on costs. There is the mushrooming cost of
security. This manifests itself in many ways. There is the cost of disruption
to air travel. If someone rides on a plane without a ticket no one's life is
put at risk. There are the military costs; it costs $1m per year to keep a
soldier in Afghanistan. I don't know how much a Taliban fighter costs, but it
must be a lot less.

Clearly any reduction in these costs would be welcomed. If someone were to
come along in the guise of social simulation and offer a reduction in these
costs the research would pay for itself many times over. That is what you are
interested in.

This may be a somewhat unpopular thing to say, but money is important. Matt 
Mahoney has costed his view of AGI. I say that costs must be recoverable as we 
go along. Matt, don't frighten people with a high estimate of cost. Frighten 
people instead with the bill they are paying now for dumb systems.
 
> simulations seem :-
>>
>> 1) To be better done by Calculus.
>
>You usually use both, equations and heuristics. It depends on the
>problem, your resources, your questions, the people working with it
>a.s.o.
>

That is the way things should be done. I agree absolutely. We could in fact 
take 
steepest descent (Calculus) and GAs and combine them together in a single 
composite program. This would in fact be quite a useful exercise. We would also 
eliminate genes that simply dealt with Calculus and steepest descent.

I don't know whether it is useful to think in topological terms.


  - Ian Parker
 





Re: [agi] Clues to the Mind: Learning Ability

2010-07-28 Thread David Jones
:) Intelligence isn't limited to "higher cognitive functions". One could say
a virus is intelligent or alive because it can replicate itself.

Intelligence is not just one function or ability, it can be many different
things. But mostly, for us, it comes down to what the system can accomplish
for us.

As for the Turing test, it is basically worthless in my opinion.

PS: you probably should post these video posts to a single thread...

Dave

On Wed, Jul 28, 2010 at 12:39 AM, deepakjnath  wrote:

> http://www.facebook.com/video/video.php?v=287151911466
>
> See how the parrot can learn so much! Does that mean that the parrot has
> intelligence? Will this parrot pass the Turing test?
>
> There must be a learning center in the brain which is much lower than the
> higher cognitive functions like imagination and thoughts.
>
>
> cheers,
> Deepak





Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
One last point. You say freeloading can cause a society to disintegrate. One
society that has come pretty damn close to disintegration is Iraq.
The deaths in Iraq were very much due to sectarian bloodletting.
Unselfishness if you like.

Would that the Iraqis (and Afghans) were more selfish.


  - Ian Parker





Re: [agi] AGI & Alife

2010-07-28 Thread Ian Parker
On 27 July 2010 21:06, Jan Klauck  wrote:
>
>
> > Second observation about societal punishment eliminating free loaders.
> The
> > fact of the matter is that "*freeloading*" is less of a problem in
> > advanced societies than misplaced unselfishness.
>
> Fact of the matter, hm? Freeloading is an inherent problem in many
> social configurations. 9/11 brought down two towers, freeloading can
> bring down an entire country.
>
There are very considerable knock-on costs. There is the mushrooming cost
of security. This manifests itself in many ways. There is the cost of
disruption to air travel. If someone rides on a plane without a ticket no
one's life is put at risk. There are the military costs; it costs $1m per
year to keep a soldier in Afghanistan. I don't know how much a Taliban
fighter costs, but it must be a lot less.

Clearly any reduction in these costs would be welcomed. If someone were to
come along in the guise of social simulation and offer a reduction in these
costs the research would pay for itself many times over. That is what *you*
are interested in.

This may be a somewhat unpopular thing to say, but money *is* important.
Matt Mahoney has costed his view of AGI. I say that costs must be
recoverable as we go along. Matt, don't frighten people with a high estimate
of cost. Frighten people instead with the bill they are paying now for dumb
systems.


> > simulations seem :-
> >
> > 1) To be better done by Calculus.
>
> You usually use both, equations and heuristics. It depends on the
> problem, your resources, your questions, the people working with it
> a.s.o.
>

That is the way things should be done. I agree absolutely. We could in fact
take steepest descent (Calculus) and GAs and combine them together in a
single composite program. This would in fact be quite a useful exercise. We
would also eliminate genes that simply dealt with Calculus and steepest
descent.
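
A sketch of what such a composite program could look like (my own
illustration, minimizing a simple quadratic; nothing here comes from an
existing library): each generation, every candidate is refined by a few
steepest-descent steps before selection and mutation.

# Composite GA + steepest-descent sketch, illustration only.
import random

def f(x):                 # objective to minimize
    return (x - 3.0) ** 2

def grad(x):              # its derivative
    return 2.0 * (x - 3.0)

def refine(x, lr=0.1, steps=5):
    """Local steepest-descent polish applied to one candidate."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def hybrid_ga(pop_size=20, generations=10, seed=0):
    random.seed(seed)
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population = [refine(x) for x in population]   # calculus step
        population.sort(key=f)                         # selection
        parents = population[: pop_size // 2]
        children = [random.choice(parents) + random.gauss(0, 0.5)   # mutation
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=f)

print(hybrid_ga())  # converges close to 3.0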

I don't know whether it is useful to think in topological terms.


  - Ian Parker







[agi] Tweaking a few parameters

2010-07-28 Thread A. T. Murray
Tues.27.JUL.2010 -- Default "IS" in BeVerb

Yesterday our work was drawn out and delayed 
when we discovered that the AI could not 
properly recognize the word "YOURSELF." 
The AI kept incrementing the concept number 
for each instance of "YOURSELF". Since we 
were more interested in coding who-queries 
than in troubleshooting AudRecog, we 
substituted the sentence "YOU ARE MAGIC" 
in place of "YOU ARE YOURSELF". 

Even then the AI did not function perfectly 
well. The chain of thought got trapped in 
repetitions of "ANDRU AM ANDRU", until 
KbTraversal "rescued" the situation. However, 
we know why the AI got stuck in a rut. It was 
able to answer the query "who are you" with 
"I AM ANDRU", but it did not know anything 
further to say about ANDRU, so it repeated 
"ANDRU AM ANDRU". Immediately it made us want 
to improve upon the BeVerb module, so that 
the AI will endlessly repeat "ANDRU IS ANDRU" 
instead of "ANDRU AM ANDRU". Therefore let us
go into the source code and make "IS" the 
default verb-form of the BeVerb module. 

midway @  t @  DO  \ search backwards in time; 27jul2010
  I   0 en{ @  66 = IF  \ most recent instance; 27jul2010
66 motjuste ! ( default verb-form 66=IS; 27jul2010 )
I 7 en{ @  aud !  \ get the recall-vector; 27jul2010
LEAVE  \ after finding most recent "IS"; 27jul2010
  THEN \ end of test for 66=IS; 27jul2010
-1 +LOOP \ end of retrieval loop for default "IS"; 27jul2010

The upshot was that the AI started repeating 
"ANDRU IS ANDRU" instead of "ANDRU AM ANDRU". 
Unfortunately, however, the AI also started 
repeating "I IS I". 

Tues.27.JUL.2010 -- Tweaking a Few Parameters

Next we spent quite some time searching for 
some sort of quasi-werwolf mechanism that would 
re-activate the last concept in a thought as the 
first concept in a succeeding thought. We searched 
our code in vain for a variable that would keep 
track of any resuscitand concept. We looked at 
our ActRules page and we re-studied our Moving Wave 
Algorithm. Then it dawned on us. The Moving Wave 
automatically leaves the cresting, final concept 
active. No Werwolf mechanism is necessary. If we 
want to avoid the repetition-chanting of "ANDRU 
IS ANDRU", we have to make the "qutop" query-topic 
have a resuscitand activation of, say, forty (40), 
so that it will be higher in activation than the 
level of thirty-six (36) typically left on the final, 
cresting concept in a sentence of normal thought. 
So we comment out the old setting of "16" for 
"qutop" in the WhoPlex code as shown below, and we 
use a value of "40". 


:  WhoPlex  \ initial stub; may be moved; 26jul2010
  1   t @  1 +  DO   \ loop in search of qutop; 26jul2010
I  0 psi{ @  qutop @ = IF  \ seek qutop; 26jul2010
\ 16 I 1 psi{ !  \ activate qutop; 26jul2010
  40 I 1 psi{ !  \ activate qutop; 27jul2010
THEN  \ end of test for query-topic; 26jul2010
  -1 +LOOP   \ end of loop seeking qutop; 26jul2010
;  ( End of WhoPlex; return to EnCog )

Now we run the AI and we ask it, "Who are you?" 
Transcript of AI Mind interview at 
0 24 7 o'clock on 28 July 2010.

Human: who are you
Robot:  I AM ANDRU

Human:
Robot:  I AM ANDRU

Human:
Robot:  ANDRU HELPS KIDS

Human:
Robot:  KIDS MAKE ROBOTS

Human:
Robot:  ROBOTS NEED ME

Human:
Robot:  I IS I

Human:
Robot:  I AM ANDRU

Human:
Robot:  ANDRU HELPS KIDS

Human:
Robot:  KIDS MAKE ROBOTS

User Command:  halt

For the first time in our dozen-plus years of 
developing MindForth, the AI acts like an 
intelligence struggling to express itself, 
and it succeeds admirably and fascinatingly. 
We run the robot AI through its cognitive paces. 
We tell it things, and then we ask it questions 
about its knowledge base. We seem to be dealing 
with a true artificial intelligence here. Now we 
upload the AI Mind to the World Wide Awakening Web. 

Mentifex
-- 
http://www.scn.org/~mentifex/mindforth.txt

