Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Charles D Hixson

Steve Richfield wrote:

...
have played tournament chess. However, when faced with a REALLY GREAT 
chess player (e.g. national champion), as I have had the pleasure of 
on a couple of occasions, they at first appear to play as novices, 
making unusual and apparently stupid moves that I can't quite 
capitalize on, only to pull things together later on and soundly beat 
me. While retrospective analysis would show them to be brilliant, that 
would not be my evaluation early in these games.
 
Steve Richfield
But that's a quite reasonable action on their part.  Many players have 
memorized some number of standard openings.  But by taking the game away 
from the standard openings (or into the less commonly known ones) they 
enable the player with the stronger chess intuition to gain an 
edge...and they believe that it will be themselves.


E.g.:  The Orangutan opening is a trifle weak, but few know it well.  
But every master would know it, and know both its strengths and
weaknesses.  If you don't know the opening, though, it just looks weak.  
Looks, however, are deceptive.  If you don't know it, you're quite 
likely to find it difficult to deal with against someone who does know 
it, even if they're a generally weaker player than you are.





Re: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Charles D Hixson

Mike Tintner wrote:


Charles: Flaws in Hamlet:  I don't think of this as involving general
intelligence.  Specialized intelligence, yes, but if you see general 
intelligence at work there you'll need to be more explicit for me to 
understand what you mean.  Now determining whether a particular 
deviation from iambic pentameter was a flaw would require a deep 
human intelligence, but I don't feel that understanding of how human 
emotions are structured is a part of general intelligence except on a 
very strongly superhuman level.  The level where the AI's theory of 
your mind was on a par with, or better than, your own.


Charles,

My flabber is so ghasted, I don't quite know what to say.  Sorry, I've 
never come across any remarks quite so divorced from psychological 
reality. There are millions of essays out there on Hamlet, each one of 
them different. Why don't you look at a few?:


http://www.123helpme.com/search.asp?text=hamlet
I've looked at a few (though not those).  In college I formed the 
definite impression that essays on the meaning of literature were 
exercises in determining what the instructor wanted.  This isn't 
something that I consider a part of general intelligence (except as 
mentioned above).


...
The reason over 70 per cent of students procrastinate when writing 
essays like this about Hamlet, (and the other 20 odd per cent also 
procrastinate but don't tell the surveys), is in part that it is 
difficult to know which of the many available approaches to take, and 
which of the odd thousand lines of text to use as support, and which 
of innumerable critics to read. And people don't have a neat structure 
for essay-writing to follow. (And people are inevitably and correctly 
afraid that it will all take if not forever then far, far too long).
The problem is that most, or at least many, of the approaches are 
defensible, but your grade will be determined by the taste of the 
instructor.  This isn't a problem of general intelligence except at a 
moderately superhuman level.  Human tastes aren't reasonable ingredients 
for an entry-level general intelligence.  Making it a requirement merely 
ensures that one will never be developed (at least not one whose development 
attends to your theories of what's required).


...

In short, essay writing is an excellent example of an AGI in action - 
a mind freely crossing different domains to approach a given subject 
from many fundamentally different angles.   (If any subject tends 
towards narrow AI, it is normal as opposed to creative maths).
I can see story construction as a reasonable goal for an AGI, but at the 
entry level they are going to need to be extremely simple stories.  
Remember that the goal structures of the AI won't match yours, so only 
places where the overlap is maximal are reasonable grounds for story 
construction.  Otherwise this is an area for specialized AIs, which 
isn't what we are after.


Essay writing also epitomises the NORMAL operation of the human mind. 
When was the last time you tried to - or succeeded in - concentrating 
for any length of time?
I have frequently written essays and other similar works.  My goal 
structures, however, are not generalized, but rather are human.  I have 
built into me many special purpose functions for dealing with things 
like plot structure, family relationships, relative stages of growth, etc. 


As William James wrote of the normal stream of consciousness:

Instead of thoughts of concrete things patiently following one 
another in a beaten track of habitual suggestion, we have the most 
abrupt cross-cuts and transitions from one idea to another, the most 
rarefied abstractions and discriminations, the most unheard-of 
combinations of elements, the subtlest associations of analogy; in a 
word, we seem suddenly introduced into a seething caldron of ideas, 
where everything is fizzling and bobbing about in a state of 
bewildering activity, where partnerships can be joined or loosened in 
an instant, treadmill routine is unknown, and the unexpected seems the 
only law.


Ditto:

The normal condition of  the mind is one of informational disorder: 
random thoughts chase one another instead of lining up in logical 
causal sequences.

Mihaly Csikszentmihalyi

Ditto the Dhammapada: "Hard to control, unstable is the mind, ever 
in quest of delight."


When you have a mechanical mind that can a) write essays or tell 
stories or hold conversations  [which all present the same basic 
difficulties] and b) has a fraction of the difficulty concentrating 
that the brain does and therefore c) a fraction of the flexibility in 
crossing domains, then you might have something that actually is an AGI.


You seem to be setting an extremely high bar before you will 
consider something an AGI.  Accepting all that you have said, for an AGI 
to react as a human would react would require that the AGI be strongly 
superhuman.


More to the point, I wouldn't DARE create an AGI which had motivations 
similar to 

Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Charles D Hixson

Dr. Matthias Heger wrote:

Performance is not an unimportant question. I assume that AGI necessarily
has costs which grow exponentially with the number of states and actions so
that AGI will always be interesting only for toy domains.

My assumption is that human intelligence is not truly general intelligence
and therefore cannot serve as an existence proof that
AGI is possible. Perhaps we see more intelligence than there really is.
Perhaps human intelligence is to some extent overestimated and an
illusion, like free will.

Why? In truly general domains, every experience of an agent can only be used
for the single state and action for which the experience was made. Every
time your algorithm generalizes from known state-action pairs to unknown
state-action pairs, this is in fact use of knowledge about the underlying
state-action space, or it is just guessing and only a matter of luck.

So truly general AGI algorithms must visit every state-action pair at least
once to learn what to do in what state.
Even in small real world domains the state spaces are so big that it would
take longer than the age of the universe to go through all states. 


For this reason true AGI is impossible and human intelligence must be narrow
to a certain degree. 



  
I would assert a few things that appear to contradict your assumptions 
(and a few that support them).
1)  AGIs will reach conclusions that are not guaranteed to be correct.  
This allows somewhat lossy compression of the input data.
2) AGIs can exist, but will operate in modes.  In AGI mode they will be 
very expensive and slow.  And still be error prone.
3) Humans do have an AGI mode.  Probably more than one of them.  But 
it's so expensive to use and so slow that they strive diligently to 
avoid using it, preferring to rely on simple situation-based models (and 
discarding most of the input data while doing so).
4) When humans are operating in AGI mode, they are not considering or 
using ANY real-time data (except to hold and replay notes).  The process 
is too slow.


The two AGI modes that I believe people use are 1) mathematics and 2) 
experiment.  Note that both operate in restricted domains, but within 
those domains they *are* general.  (E.g., mathematics cannot generate 
its own axioms, postulates, and rules of inference, but given them it 
is general.)  Because of the restricted domains, many problems can't 
even be addressed by either of them, so I suspect the presence of other 
AGI modes.  Possibly even slower and more expensive to use.


I suppose that one could quibble that since the modes I have identified 
are restricted to particular domains, they aren't *general* 
intelligence modes, but as far as I can tell ALL modes of human thought 
only operate within restricted domains.




Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Charles D Hixson

Mike Tintner wrote:

Charles: as far as I can tell ALL modes of human thought

only operate within restricted domains.


I literally can't conceive where you got this idea from :). Writing an 
essay - about, say,  the French Revolution, future of AGI, flaws in 
Hamlet, what you did in the zoo, or any of the other many subject 
areas of the curriculum - which accounts for, at a very rough 
estimate, some 50% of problem-solving within education, operates within 
*which* restricted domain? (And how *did* you arrive at the above idea?)
Yes, I think of those as being handled largely by specialized, 
non-general, mechanisms.  I suppose that to an extent you could say that 
it's done via pattern matching, and to that extent it falls under the 
same model that I've called experimentation.  Mainly, though, that's 
done with specialized language manipulation routines.  (I'm not 
asserting that they are hard-wired.  They were built up via lots of time 
and effort put in via both experimentation and mathematics [in which I 
include modeling and statistical prediction]).


Mathematics and experimentation are extremely broad brushes.  That's a 
part of why they are so slow. 

French revolution:  Learning your history from a teacher or a text isn't 
a general pattern.  It's a short-cut that usually works pretty well.  
Now if you were talking about going on the ground and doing personal 
research...then it might count as general intelligence under the 
category of experimentation.  (Note that both mathematics and 
experimentation are generally necessary to create new knowledge, rather 
than copying knowledge from some source that has previously acquired and 
processed it.)


Future of AGI:  Creating the future of AGI does, indeed, involve general 
intelligence.  If you follow this list you'll note that it involves BOTH 
mathematics and experimentation.

Flaws in Hamlet:  I don't think of this as involving general 
intelligence.  Specialized intelligence, yes, but if you see general 
intelligence at work there you'll need to be more explicit for me to 
understand what you mean.  Now determining whether a particular 
deviation from iambic pentameter was a flaw would require a deep human 
intelligence, but I don't feel that understanding of how human emotions 
are structured is a part of general intelligence except on a very 
strongly superhuman level.  The level where the AI's theory of your mind 
was on a par with, or better than, your own.





Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-19 Thread Charles D Hixson

Ed Porter wrote:

WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
  
One that appears to me to be missing, or at least not emphasized, is 
that general intelligence is inefficient compared to specialized 
techniques.  In any particular sub-domain specialized intelligence will 
be able to employ better heuristics and a more appropriate and more 
efficient methodology, one that might well not work very well in the more 
general case.


As a result a good AGI will have a rather large collection of 
specialized AIs as part of its toolbox.  When it encounters a 
new environment or problem, one of the things it will be doing, as it 
solves the problem of how to deal with this new problem, is to build a 
specialized AI to handle that problem.  In normal 
circumstances, what the AGI will do is classify the kind of problem it's 
dealing with and hand it off to a more specialized AI (and monitor 
the process, to make sure that the problem continues to fit the mold).
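A minimal sketch of that classify-and-dispatch loop, in Python.  All of the 
names here (Specialist, GeneralReasoner, Toolbox) are illustrative assumptions 
of mine, not an existing API; it's only meant to make the toolbox idea concrete.

# Illustrative sketch only: an AGI "toolbox" that classifies a problem and
# hands it to a specialized solver, monitoring that the problem still fits.

class Specialist:
    """A narrow solver that knows whether a problem fits its domain."""
    def fits(self, problem) -> bool:
        raise NotImplementedError
    def solve(self, problem):
        raise NotImplementedError

class GeneralReasoner:
    """The slow, expensive general-purpose fallback."""
    def solve(self, problem):
        return None  # placeholder for the expensive general search

class Toolbox:
    def __init__(self, specialists, general):
        self.specialists = list(specialists)
        self.general = general

    def handle(self, problem):
        for s in self.specialists:
            if s.fits(problem):                # classify the kind of problem
                result = s.solve(problem)      # hand it off to the specialist
                if result is not None:         # monitor: did it still fit the mold?
                    return result
        # No specialist fits: fall back to slow, general reasoning.  In the full
        # vision this is also where a new specialist would be built and added.
        return self.general.solve(problem)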


My expectation is that AGIs (without their toolbox) will be quite slow 
and inefficient at dealing with any particular situation.  OTOH, 
*with* their toolbox they should be able to evolve to be more efficient 
than humans.  (Note that efficient is relative to the hardware that 
they're running on.  And each solution is likely to assume, as one of its 
presumptions, that the hardware remains unchanged.  Adapting an existing 
solution to new hardware is likely to be a hard problem, on the level of 
re-writing a program from Java into Fortran.)


It is my expectation that this approach will be necessary across a wide 
gamut of AGI designs, and that unitary minds will be scarce to 
non-existent.  Certainly people work this way.  Consider the process of 
learning a new musical instrument, or of learning to read music.  You 
build a specialized module in your mind to handle the problems involved.




Re: [agi] Comments from a lurker...

2008-04-15 Thread Charles D Hixson

J Storrs Hall, PhD wrote:

...
The third mistake is to forget that nobody knows how to program SIMD. They 
can't even get programmers to adopt functional programming, for god's sake; 
the only thing the average programmer can think in is BASIC, or C which is 
essentially machine-independent assembly. Not even LISP. APL, which is the 
closest approach to a SIMD language, died a decade or so back.

...
Josh

  
Actually I believe that Prograph (a dataflow language) had a programming 
model that was by far the most SIMD-like, much more so than APL.  It also 
died a while back, trying to transition from the Mac to MS Windows 95.  It 
did, however, convince me that SIMD-style programming idioms could be 
reasonable.  (Actually, I think Prograph could have been implemented 
as MIMD.  Since it was running on a single-processor system, though, the 
actual implementation was serial.)


P.S.:  versions of APL still exist.  The last time I checked the 
language was called, I believe, J. 
http://en.wikipedia.org/wiki/J_programming_language  (Such a nice 
searchable name!)  They eliminated the special symbols, but I don't 
remember what they replaced them with.  Don't know if the implementation 
is SIMD.
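For what it's worth, the array-at-a-time idiom that APL and J pioneered survives 
in mainstream numeric libraries.  A small Python/NumPy sketch of the contrast 
between element-at-a-time code and a whole-array expression (this only 
illustrates the array-programming style, not Prograph's dataflow model):

import numpy as np

# Element-at-a-time style (the "thinking in BASIC or C" that Josh describes):
xs = list(range(100_000))
total = 0
for x in xs:
    total += x * x

# Array-at-a-time style, in the spirit of APL/J: one expression over the whole
# vector, which the library is free to map onto SIMD hardware.
v = np.arange(100_000)
total_vec = int((v * v).sum())

assert total == total_vec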





Re: [agi] Logical Satisfiability...Get used to it.

2008-04-14 Thread Charles D Hixson

Jim Bromer wrote:

Ben G wrote: ...
...
Concerning beliefs and scientific rationalism: Beliefs are the basis
of all thought.  To imply that religious belief might be automatically
different from rational beliefs is naïve.  However, I think there is
an advantage in defining what a rational thought is relative to AI
programming and how scientific rationalism is different from simple
rationalism.  I am going to write a few messages about this when I get
a chance.

By the way, I don't really see how a simple n^4 or n^3 SAT solver in
itself would be that useful for any immediate AGI project, but the
novel logical methods that the solver will reveal may be more
significant.

Jim Bromer
  
But religious beliefs *ARE* intrinsically different from rational 
beliefs.  They aren't the only such beliefs, but they are among them.  
Rational beliefs MUST be founded in other beliefs.  Rationalism does not 
provide a basis for generating beliefs ab initio, but only via reason, 
which requires both axioms and rules of inference.  (NARS may have 
eliminated the axioms, but I doubt it.  OTOH, I don't understand exactly 
how it works yet.)


Religion and other intrinsic beliefs are inherent in the construction of 
humans.  I suspect that every intelligent entity will require such 
beliefs.  Which particular religion is believed in isn't inherent, but 
is situational.  (Other factors may enter in, but I would need a clear 
explication of how that happened before I would believe that.)  Note 
that another inherent belief is "People like me are better than people 
who are different."  The fact that a belief is inherent doesn't mean it 
can't be overcome (or at least subdued) by counter-programming, merely 
that one will need to continually ward against it, or it will re-assert 
itself even if you know that it's wrong.


Saying that a belief is non-rational isn't denigrating it.  It's merely 
a statement that it isn't a built-in rule.  Even the persistence of 
forms doesn't seem to be totally built-in, though there are definitely 
lots of mechanisms that will tend to create it.  So in that case what's 
built in is a tendency to perceive the persistence of objects.  In the 
case of religion it's a bit more difficult to perceive what the built-in 
process is.  Plausibly it's a combination of several tendencies to 
perceive patterns "shaped like ..." in the world that aren't 
intrinsically connected, but which have been connected by culture.  Or 
it might be something else.  (The "blame/attribute everything to the big 
alpha baboon" theory isn't totally silly, but I find it unsatisfactory.  
It's at most a partial answer.)





Re: [agi] Symbols

2008-03-31 Thread Charles D Hixson
I would suggest that symbols are more powerful than images, though less 
immediate (unmitigated?) in their effect. 

Images present a visual scene.  They require processing to evaluate, and 
what one extracts from the scene may not be what another extracts.  
Their power is that they may activate a large number of symbols.


Symbols, however, are different.  They may contain a reference to an 
image, but it will be a meaningful connection.  I.e., it will highlight 
certain features of the scene as significant and mute the others (if 
they aren't elided).  It will also generally contain a value 
(on a desirable-undesirable scale), and may contain a label (vocalization) 
and various operations that it can enter into.  Think of it as being 
like an instance of a class in a prototype-based language.  I.e., even 
though it is an instance, it can have derivative sub-classes.  But 
superclasses can also freely be formed which automatically include it.  In 
fact superclasses can be retroactively inserted into the hierarchies of 
symbols, and are themselves symbols.  Note that most symbols *don't* 
have either vocalizations or visibility to consciousness.
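A rough sketch, in Python, of the kind of symbol record described above: a link 
to an image, a set of highlighted features, a value on the desirable-undesirable 
scale, an optional vocalization, and links that let sub- and super-symbols be 
formed after the fact.  The field and class names are my own illustrative 
assumptions, not a proposal for a specific representation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Symbol:
    """Illustrative symbol record (prototype-style: instances can be derived from)."""
    image_ref: Optional[str] = None                      # link to a stored image, if any
    salient_features: set = field(default_factory=set)   # features highlighted; others muted
    value: float = 0.0                                    # desirable (+) .. undesirable (-)
    vocalization: Optional[str] = None                    # most symbols have none
    parents: list = field(default_factory=list)           # superclasses, addable retroactively
    children: list = field(default_factory=list)          # derivative sub-symbols

    def derive(self, **overrides) -> "Symbol":
        """Make a sub-symbol that inherits this instance's links, prototype fashion."""
        child = Symbol(
            image_ref=overrides.get("image_ref", self.image_ref),
            salient_features=set(overrides.get("salient_features", self.salient_features)),
            value=overrides.get("value", self.value),
            vocalization=overrides.get("vocalization", self.vocalization),
            parents=[self],
        )
        self.children.append(child)
        return child

# "baseball" as a symbol, then a derived "green baseball" by changing one link:
baseball = Symbol(image_ref="baseball.jpg",
                  salient_features={"round", "seamed"},
                  vocalization="baseball")
green_ball = baseball.derive(salient_features={"round", "seamed", "green"})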


Also, symbols are a feature of the system within which they reside.  
They don't constitute the system.  I see most of what happens as not 
only asymbolic, but also special purpose, such as sound recognition in 
the cochlea.   I.e., most of what goes on is narrow AI or even 
hardwired.  (Yes, I'm using biological metaphors.  Sorry.  People are 
the only example of something approaching an AGI that I'm aware of.)  
The AGI section is relatively slow and expensive, so it's kept 
relatively limited.  Analog computation is preferred when possible, 
sometimes mixed with neural nets (which I think of as digital, even 
though I know better).  Consider catching a baseball.  Just try to do 
that with an AGI!  But using the combination of analog and neural net 
computation that is built into people it's a reasonable task.  (For most 
people.  I'm lousy at it.)  AH!  But deciding to catch the baseball!  
That's the AGI in action.  (Well, eventually it becomes habitual, I 
guess, for some people.  But even so it's probably an AGI level decision.)


Now let's consider that baseball.  That's clearly a symbol, as I didn't 
show you a picture.  You didn't know I was talking about a softball.  
Notice how quickly the image changed.  That's because you did it by 
manipulating references rather than by moving around enough bits to 
represent an image of one or the other kind of baseball.  And if I now 
tell you that it was bright green, the color of a tennis ball, you get 
yet another rapid change.  This time it's probably some kind of filter 
being applied to the image, as I notice that even the thread holding it 
together immediately changed color.  (In my image the thread was 
originally a kind of off-red, and on the green ball it shaded toward 
black.)


But notice that this was all done via the manipulation of symbols.  It 
depended on the images already being resident within your mind, and tied 
into these symbols via links (references).


I haven't mentioned it, but you probably have some sort of image of 
the location of this playing around with the baseball.  That's what I 
meant by "...and mute the others (if they aren't elided)..." above.  
(You may have totally elided the location, but all specific instances 
[images] that you have ever seen will have had a background.)


Mark Waser wrote:
 Why are images almost always more powerful than the corresponding 
symbols? Why do they communicate so much faster?
 
Um . . . . dude . . . . it's just a bandwidth thing.
 
Think about images vs. visual symbols vs. word descriptions vs. names.
 
It's a spectrum from high-bandwidth information transfer to almost 
pure reference tags.
 
If it's something you've never run across before, images are best -- 
high bandwidth but then you end up with high mental processing costs.
 
For familiar items, word descriptions (or better yet, single word 
names) require little bandwidth and little in the way of subsequent 
processing costs.
 


- Original Message -
*From:* Mike Tintner [EMAIL PROTECTED]
*To:* agi@v2.listbox.com
*Sent:* Sunday, March 30, 2008 4:02 PM
*Subject:* Re: [agi] Symbols

In this and surrounding discussions, everyone seems deeply confused
-  it's nothing personal, so is our entire culture - about the
difference between
 
SYMBOLS
 
1.  "Derek Zahn", "curly hair", "big jaw", "intelligent eyes", etc. etc.
 
and
 
IMAGES
 
2. http://robot-club.com/teamtoad/nerc/h2-derek-sunflower.JPG
 
I suggest that every time you want to think about this area, you
all put symbols beside the corresponding images, and slowly it
will start to become clear that each does things the other CAN'T
do, period.
 
We are all next to illiterate - and I mean, 

Re: [agi] Microsoft Launches Singularity

2008-03-26 Thread Charles D Hixson

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]
My take on this is completely different.

When I say Narrow AI I am specifically referring to something that is
so limited that it has virtually no chance of becoming a general
intelligence.  There is more to general intelligence than just throwing
a bunch of Narrow AI ideas into a pot and hoping for the best. If it
were, we would have had AGI long before now.



It's an opinion that AGI could not be built out of a conglomeration of
narrow-AI subcomponents. Also there are many things that COULD be built with
narrow-AI that we have not even scratched the surface of due to a number of
different limitations so saying that we would have achieved AGI long ago is
an exaggeration.
  
I don't think a General Intelligence could be built entirely out of 
narrow AI components, but it might well be a relatively trivial add-on.  
Just consider how much of human intelligence is demonstrably narrow AI 
(well, not artificial, but you know what I mean).  Object recognition, 
e.g.  Then start trying to guess how much of the part that we can't 
prove a classification for is likely to be a narrow intelligence 
component.  In my estimation (without factual backing) less than 0.001 
of our intelligence is General Intelligence, possibly much less.
 
  

Consciousness and self-awareness are things that come as part of the AGI
package.  If the system is too simple to have/do these things, it will
not be general enough to equal the human mind.




I feel that general intelligence may not require consciousness and
self-awareness. I am not sure of this and may prove myself wrong. To equal
the human mind you need these things of course and to satisfy the sci-fi
fantasy world's appetite for intelligent computers you would need to
incorporate these as well.

John
  
I'm not sure of the distinction that you are making between 
consciousness and self-awareness, but even most complex narrow-AI 
applications require at least rudimentary self-awareness.  In fact, one 
could argue that all object-oriented programming with inheritance has 
rudimentary self-awareness (called "this" in many languages, but in 
others called "self").  This may be too rudimentary, but it's my feeling 
that it's an actual model (implementation?) of what the concept of self 
has evolved from.
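As a minimal illustration of that rudimentary self-reference (a Python sketch 
of my own, not anything from the thread): the object holds a handle to itself 
and uses it to inspect and change its own state.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint   # the instance keeps a reference to itself: "self"
        self.heating = False

    def update(self, temperature):
        # The object inspects and modifies its own state through that self-reference.
        self.heating = temperature < self.setpoint
        return self.heating

t = Thermostat(20.0)
t.update(18.5)   # True: it "knows" its own setpoint and changes its own state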


As to an AGI not being conscious, I'd need to see a definition of 
your terms, because otherwise I've *got* to presume that we have 
radically different definitions.  To me an AGI would not only need to be 
aware of itself, but also to be aware of aspects of its environment 
that it could effect changes in, and of the difference between them, 
though that might well be learned.  (Zen: "Who is the master who makes 
the grass green?", and a few other koans, when solved, imply that in 
humans the distinction between internal and external is a learned 
response.)  Perhaps the diagnostic characteristic of an AGI is that it 
CAN learn that kind of thing.  Perhaps not, too.  I can imagine a narrow 
AI that was designed to plug into different bodies, and in each case 
learn the distinction between itself and the environment before 
proceeding with its assignment.  I'm not sure it's possible, but I can 
imagine it.


OTOH, if we take my arguments in the preceding paragraph too seriously, 
then medical patients that are "locked in" would be considered not 
intelligent.  This is clearly incorrect.  Effectively they aren't 
intelligent, but that's because of a mechanical breakdown in the 
sensory/motor area, and that clearly isn't what we mean when we talk 
about intelligence.  But examples of recovered/recovering patients seem 
to imply that they weren't exactly either intelligent or conscious while 
they were locked in.  (I'm going solely by reports in the popular 
science press...so don't take this too seriously.)  It appears as if, 
when external sensations are cut off, the mind estivates...at least 
after a while.  Presumably different patients had different causes, and 
hence at least slightly different effects, but that's my first-cut 
guess at what's happening.  OTOH, the sensory/motor channel doesn't need 
to be particularly well functioning.  Look at Stephen Hawking.




Re: [agi] META: email format (was Why Hugo de Garis is WRONG!)

2008-03-26 Thread Charles D Hixson

J. Andrew Rogers wrote:

Hi Mark,

Could you *please* not send HTML email?  Ignoring that it is generally 
considered poor netiquette, and for good reason, it frequently gets 
turned into barely readable hash by even the most modern email clients.


I am using Mail.app 2.0 on OSX 10.5 which handles rendering better 
than most, and most HTML email is *still* generally rendered as far 
uglier and less readable than plaintext email.  Given that HTML email 
does not add anything substantive could we please stick to plaintext 
for the sake of communication?


Thanks,

J. Andrew Rogers


On Mar 26, 2008, at 11:37 AM, Mark Waser wrote:
Before swatting at one of those pesky flies that come out as the days 
lengthen and the temperature rises, one should probably think twice. 
A University of Missouri researcher has found, through the study of 
Drosophila (a type of fruit fly), that by manipulating levels of 
certain compounds associated with the circuitry of the brain, key 
genes related to memory can be isolated and tested. The results of 
the study may benefit human patients suffering from Parkinson's 
disease and could eventually lead to discoveries in the treatment of 
depression.


http://www.machineslikeus.com/cms/news/flys-small-brain-may-benefit-humans 



Mark

Vision/Slogan -- Friendliness:  The Ice-9 of Ethics and Ultimate in 
Self-Interest

agi | Archives  | Modify Your Subscription

I suspect that he's using a webmail client of some sort.  The body of 
his message is, essentially, text.  It's only a couple of buttons(?) in 
the footer that are clearly intentionally html, and he probably doesn't 
add them.  So my guess is that his mail client is wrapping his e-mail in 
an html frame of some sort and sticking, probably, advertisements at the 
bottom.  (I screen remote images out of e-mail, so I don't really know.)


This is the part I'm talking about:

table border=0 cellspacing=0 cellpadding=0 width=100% style=background-color:#fff bgcolor=#ff
 tr
   td padding=4px
 font color=black size=1 face=helvetica, sans-serif;
 strongagi/strong | a style=text-decoration:none;color:#669933;border-bottom: 1px solid #44
href=http://www.listbox.com/member/archive/303/=now; title=Go to archives for agiArchives/a
a border=0 style=text-decoration:none;color:#669933 href=http://www.listbox.com/member/archive/rss/303/ title=RSS feed for agiimg border=0 src=https://www.listbox.com/images/feed-icon-10x10.jpg;/a
| a style=text-decoration:none;color:#669933;border-bottom: 1px solid #44
href=http://www.listbox.com/member/?D232072D98557868-5cf207 title=Modify/a
Your Subscriptiontd valign=top align=righta style=border-bottom:none; href=http://www.listbox.com;
img src=https://www.listbox.com/images/listbox-logo-small.jpg;
title=Powered by Listbox border=0 //a/td
 /font
   /td
 /tr
/table

That said, I agree with you about its inadvisability.  But often the sender 
isn't even aware of what impression he is making.
I've given up on refusing to accept html e-mail.  Too many people don't even 
know what you're talking about.  But I definitely won't accept remote images or 
such.  Such things are dangerous.




Re: [agi] Some thoughts of an AGI designer

2008-03-15 Thread Charles D Hixson

Mark Waser wrote:

...
The simulator needs to run large populations over large numbers 
of generations multiple times with slightly different assumptions.  
As such, it doesn't speak directly to "What is a good strategy for an 
advanced AI with lots of resources?", but it provides indications.


And I would argue that I've got a far better, more analogous study 
with several large populations over large numbers of generations.  
It's called the study of human religion.:-)


...

It's better in the sense of more clearly analogous, but it's worse 
because 1) it's harder to analyze and 2) the results are *MUCH* more 
equivocal.  I'd argue that religion has caused more general suffering 
than it has ameliorated.  Probably by several orders of magnitude.  But 
the results are so messy and hard to separate from other simultaneous 
causes that this can't be conclusively proven.  (And, also, with 
sufficient desire to disbelieve, the law of gravity itself could be 
thrown into doubt.  [That's a paraphrase of somebody else talking about 
commercial interests, but the more general statement is correct.])




Re: [agi] Some more professional opinions on motivational systems

2008-03-15 Thread Charles D Hixson

Gary Miller wrote:

Ed Porter quoted from the following book: 
  
	From 
http://www.nytimes.com/2008/03/16/books/review/Berreby-t.html?ref=review 
a NYTimes book review of Predictably Irrational: The Hidden Forces That
Shape Our Decisions, by Dan Ariely.
In its most relevant section it states the following:
At the heart of the market approach to understanding people is a
set of assumptions. First, you are a coherent and unitary self. Second,
you can be sure of what this self of yours wants and needs, and can
predict what it will do. Third, you get some information about yourself
from your body - objective facts about hunger, thirst, pain and pleasure
that help guide your decisions. Standard economics, as Ariely writes,
assumes that all of us, equipped with this sort of self, know all the
pertinent information about our decisions and we can calculate the value
of the different options we face. We are, for important decisions,
rational, and that's what makes markets so effective at finding value and
allocating work. To borrow from H. L. Mencken, the market approach
presumes that the common people know what they want, and deserve to get
it good and hard.
What the past few decades of work in psychology, sociology and
economics has shown, as Ariely describes, is that all three of these
assumptions are false. Yes, you have a rational self, but it's not your
only one, nor is it often in charge. A more accurate picture is that there
are a bunch of different versions of you, who come to the fore under
different conditions. We aren't cool calculators of self-interest who
sometimes go crazy; we're crazies who are, under special circumstances,
sometimes rational.  


The last paragraph here sounds remarkably like the teachings of Gurdjieff.
In his teachings, which he called The Work, he helped his pupils identify
all of the different versions of themselves and slowly taught them to recognize
when those versions took control, what their motivations were, and why they
surfaced.  The version that did the analyzing and observing would
eventually gain dominance and control over the other versions, until the other,
less logical and more mechanical versions of the self were recognized when
they tried to take control and were subjugated by the new, observing version
of the self.
His teachings stated that serious spiritual work could not proceed until a
unified self existed, although all of his spiritual teachings were lifted
during his world travels from other philosophic and spiritual traditions.
His teachings, as explained by his student Peter D. Ouspensky after his death
in a book called The Fourth Way, detailed exercises which could be used to
unify the separate selves under the control of the observer. 

  
I haven't studied Gurdjieff, but The Work sounds doomed to failure.  
The rational self is inherently a tool of the sections of the mind that 
supply motives (for actions).  An unguided rational engine generates 
entirely too many lemmas to be of any use whatsoever for any purpose 
except filling memory.  And a unified self is obtained only by ignoring 
the parts that don't fit in.  (Note that the sections quoted by the 
grandparent don't contradict these assertions.)


P.S.:  I suspect that Gurdjieff was also advocating something rather 
different than you suggest, but as I said I haven't studied him.  It is, 
however, my understanding that he often intentionally stated truths in 
an obscure manner under the belief that the enlightenment came during 
the work to discover the truth rather than in having it stated to one.  
(This sounds plausible to me.)





Re: [agi] Some thoughts of an AGI designer

2008-03-14 Thread Charles D Hixson

Mike Dougherty wrote:
On Wed, Mar 12, 2008 at 8:54 PM, Charles D Hixson 
[EMAIL PROTECTED] wrote:


I think that you need to look into the simulations that have been run
involving Evolutionarily Stable Strategies.  Friendly covers many
strategies, including (I think) Dove and Retaliator.  Retaliator is
almost an ESS, and becomes one if the rest of the population is either
Hawk or Dove.  In a population of Doves, Probers have a high success
rate, better than either Hawks or Doves.  If the population is largely
Doves with an admixture of Hawks, Retaliators do well.  Etc.  (Note that
each of these Strategies is successful depending on a model with certain
costs of success and other costs for failure specific to the strategy.)
Attempts to find a pure strategy that is uniformly successful have so
far failed.  Mixed strategies, however, can be quite successful, and
different environments yield different values for the optimal mix.  (The
model that you are proposing looks almost like Retaliator, and that's a
pretty good Strategy, but can be shown to be suboptimal against a
variety of different mixed strategies.  Often even against
Prober-Retaliator, if the environment contains sufficient Doves, though
it's inferior if most of the population is simple Retaliators.)


I believe Mark's point is that the honest commitment to Friendly as an 
explicit goal is an attempt to minimize wasted effort achieving all 
other goals.  Exchanging information about goals with other Friendly 
agents helps all parties invest optimally in achieving the goals in 
order of priority acceptable to the consortium of Friendly.  I think 
one (of many) problems is that our candidate AGI must not only be 
capable of self-reflection when modeling its goals, but also capable 
of modeling the goals of other Friendly agents (with respect to each 
other and to the goal-model of the collective) as well as be able to 
decide when an UnFriendly behavior is worth declaring (modeling the 
consequences and impact to the group of which it is a member).  That 
seems to be much more difficult than a selfish or ignorant Goal Stack 
implementation (which we would typically attempt to control via an 
imperative Friendly Goal).


And it's a very *good* strategy.  But it's not optimal except in certain 
constrained situations.   Note that all the strategies that I listed 
were VERY simple strategies.   Tit-for-tat was better than any of them, 
but it requires more memory and the remembered recognition of 
individuals.  As such it's more expensive to implement, so in some 
situations it loses out to Retaliator.  (Anything sophisticated enough 
to be even a narrow AI should be able to implement tit-for-tat, however, 
if it could handle the recognition of individuals.)  (Retaliator doesn't 
retain memory of individuals between encounters.  It's SIMPLE.)


Now admittedly the research on ESSs via simulations has focused on 
strategies that don't require any reasonable degree of intelligence.  
The simulator needs to run large populations over large numbers of 
generations, multiple times, with slightly different assumptions.  As 
such, it doesn't speak directly to "What is a good strategy for an 
advanced AI with lots of resources?", but it provides indications.  
E.g., a population of Hawks does very poorly.  A population of Doves 
does well, but if it's infiltrated by a few Hawks, the Hawks soon come 
to dominate.  Etc.  And "Kill them All!!" is a very poor strategy unless 
it is adopted by a single individual that is vastly stronger than 
any opposition that it might encounter.  (Even then it's not clearly a 
good strategy...except with certain specialized model conditions.  
Generally it will have a maximal size, and two "Kill them All!!"s would 
attempt to kill each other.  So the payoff for a win is much less than 
the payoff would be for a population even of Hawks.  [Hawks only 
initiate an attack if there are resources present that they have a use 
for.])
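For concreteness, here is a toy replicator-dynamics sketch of the Hawk/Dove 
case in Python.  The payoff structure (resource V, injury cost C > V) is the 
standard textbook one, assumed by me rather than taken from the simulations 
being discussed; it just shows the kind of result those simulations give, with 
the population drifting to the mixed equilibrium rather than to either pure 
strategy.

# Toy Hawk/Dove replicator sketch; V and C are assumed textbook parameters.
V, C = 2.0, 6.0          # with these values the mixed equilibrium is V/C = 1/3 Hawks

def expected_payoffs(p_hawk):
    """Expected payoff to a Hawk and to a Dove when a fraction p_hawk plays Hawk."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    dove = p_hawk * 0.0 + (1 - p_hawk) * V / 2
    return hawk, dove

p = 0.05                 # start with mostly Doves plus a few invading Hawks
for generation in range(200):
    hawk_fit, dove_fit = expected_payoffs(p)
    mean_fit = p * hawk_fit + (1 - p) * dove_fit
    # Discrete replicator step: a strategy grows in proportion to its edge over the mean.
    p = min(max(p * (1 + 0.1 * (hawk_fit - mean_fit)), 0.0), 1.0)

print(f"Hawk share after 200 generations: {p:.3f}  (mixed equilibrium at {V/C:.3f})")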




Re: [agi] AHA! A Compelling SOLUTION!

2008-03-14 Thread Charles D Hixson

Mark Waser wrote:

Nathan, thank you for the thoughtful reply.
 
...
 
 In this proposal you mention self interest often. I think the 
concept of self-interest is rather distasteful.
 
Because you have been *taught* to feel that way.  Because 
short-sighted, unenlightened self-interest most frequently runs 
counter to ethics.  Sort of like It is *BAD* to cut people with 
knives.  It's a really good general rule to keep people out of 
trouble.  But then, you need a surgeon.
 
(This reminds me of a science fiction story, a Hugo Winner I think, 
about a hobo that finds an intelligent knife that is smart enough to 
perform surgery and not harm people regardless of the hands that it is in)

Kornbluth, "The Little Black Bag".
 
...




Re: [agi] Some thoughts of an AGI designer

2008-03-12 Thread Charles D Hixson

Mark Waser wrote:

 The trouble with not stepping on other's goals unless absolutely
 necessary is that it relies on mind-reading.  The goals of others are
 often opaque and not easily verbalizable even if they think to. 
 
The trouble with */ the optimal implementation of /* not stepping on 
other's goals unless absolutely necessary is that it relies on 
mind-reading. 
 
Your honest, best attempt at not doing so is all that is required 
of/in Friendliness.  The rest is an intelligence problem.
 
 Then
 there's the question of unless absolutely necessary. 
 
Again, your honest, best attempt is all that is required of/in 
Friendliness. 
 
 How and why
 should I decide that their goals are more important than mine? 
 
You should *never* decide that (this is, after all, the ultimate in 
self-interest -- remember?).  You should frequently decide that their 
goals are sufficiently compatible enough with your super-goals (i.e. 
Friendliness) that it is worth going a bit out of your way to avoid 
conflict (particularly if they feel strongly enough about their goals 
that any effort that you make in conflict will be wasted -- see the 
next paragraph).
 
 So one

 needs to know not only how important their goals are to them, but also
 how important my conflicting goals are to me. 
 
Sort of, but the true situation would be clearer if I restate it.  
Knowing how important their immediate goals are to them will give you 
some idea as to how hard they will strive to fulfill them.  Knowing 
how important your immediate goals are to you will give you some idea 
how hard you should strive to fulfill them.  If you both could 
redirect an equal amount of directly competing striving into other 
efforts, you both would come out ahead by that amount so you both 
should reduce the importance of the conflicting goals by that amount 
(the alternative is to just waste the effort striving against each 
other).  If the party with the less important goal gives up without a 
fight (striving), both parties gain.  Further, if the losing party 
gets the agreement of the winning party for a reasonable favor in 
return -- they both end up way ahead.
 
The only requirement that Friendliness insists upon for this to work 
is that you have to be as honest as you can about how important 
something is to you (otherwise a lot of effort is wasted upon truth 
verification, hiding information, etc.).
 
 And, of course, whether there's a means for mutual satisfaction that 
isn't too expensive.  (And just try to define that too.)
 
I think that I just handled this in the paragraph above -- keeping in 
mind that all that Friendliness requires is your honest, best attempt.
 
 For some reason I'm reminded of the story about the peasant, his son,
 and the donkey carrying a load of sponges.  I'd just as soon nobody 
ends

 up in the creek.  (Please all, please none.)
 
Friendliness is supposed to appeal to geniuses as being in their 
self-interest.  It can't do that and be stupid at the same time.  If 
it's not possible to please everyone then Friendliness isn't going to 
attempt to do so.  The entire point to Friendliness is to */REDUCE 
UNNECESSARY CONFLICT AS MUCH AS POSSIBLE/* because it is in 
*everyone's* best interest to do so.  Look at Friendliness as the 
ultimate social lubricant that gets the gears of society moving as 
efficiently as possible -- which is only to the benefit of everyone in 
the society.
 
Mark
 
I *think* you are assuming that both sides are friendly.  If one side is 
a person, or group of people, then this is definitely not guaranteed.  
I'll grant all your points if both sides are friendly, and each knows 
the other to be friendly.  Otherwise I think things get messier.  So 
objective measures and tests are desirable.




Re: [agi] Some thoughts of an AGI designer

2008-03-12 Thread Charles D Hixson

Mark Waser wrote:

...
= = = = = = = = = =
 
Play the game by *assuming* that you are a Friendly and asking 
yourself what you would do to protect yourself without breaking your 
declaration of Friendliness.  It's fun and addictive and hopefully 
will lead you to declaring Friendliness yourself.
(Yes, I really *am* serious about spreading Friendliness.  It's my own 
little, but hopefully growing, cult and I'm sticking to it.)
I think that you need to look into the simulations that have been run 
involving Evolutionarily Stable Strategies.  Friendly covers many 
strategies, including (I think) Dove and Retaliator.  Retaliator is 
almost an ESS, and becomes one if the rest of the population is either 
Hawk or Dove.  In a population of Doves, Probers have a high success 
rate, better than either Hawks or Doves.  If the population is largely 
Doves with an admixture of Hawks, Retaliators do well.  Etc.  (Note that 
each of these Strategies is successful depending on a model with certain 
costs of success and other costs for failure specific to the strategy.)  
Attempts to find a pure strategy that is uniformly successful have so 
far failed.  Mixed strategies, however, can be quite successful, and 
different environments yield different values for the optimal mix.  (The 
model that you are proposing looks almost like Retaliator, and that's a 
pretty good Strategy, but can be shown to be suboptimal against a 
variety of different mixed strategies.  Often even against 
Prober-Retaliator, if the environment contains sufficient Doves, though 
it's inferior if most of the population is simple Retaliators.)





Re: [agi] Some thoughts of an AGI designer

2008-03-11 Thread Charles D Hixson

Mark Waser wrote:
If the motives depend on satisficing, and the questing for 
unlimited fulfillment is avoided, then this limits the danger.   The 
universe won't be converted into toothpicks, if a part of setting the 
goal for toothpicks! is limiting the quantity of toothpicks.  
(Limiting it reasonably might almost be a definition of friendliness 
... or at least neutral behavior.)


You have a good point.  Goals should be fulfilled after satisficing, 
except when the goals are of the form "as <goal> as possible" 
(hereafter referred to as unbounded goals).  Unbounded-goal-entities 
*are* particularly dangerous (although being aware of the danger 
should mitigate it to some degree).


My Friendliness basically works by limiting the amount of interference 
with others' goals (under the theory that doing so will prevent 
others from interfering with your goals).  Stupid entities that can't 
see the self-interest in the parenthetical point are not inclined to 
be Friendly. Stupid unbounded-goal-entities are Eliezer's 
paperclip-producing nightmare.


And, though I'm not clear on how this should be set up, this 
limitation should be a built-in primitive, i.e. not something 
subject to removal, but only to strengthening or weakening via 
learning.  It should ante-date the recognition of visual images.  But 
it needs to have a slightly stronger residual limitation than it does 
with people.  Or perhaps its initial appearance needs to be during 
the formation of the statement of the problem.  I.e., a solution to a 
problem can't be sought without knowing limits.  People seem to just 
manage that via a dynamic sensing approach, and that sometimes 
suffers from inadequate feedback mechanisms (saying "Enough!").


The limitation is "Don't stomp on other people's goals unless it is 
truly necessary" *and* "It is very rarely truly necessary."


(It's not clear to me that it differs from what you are saying, but 
it does seem to address a part of what you were addressing, and I 
wasn't really clear about how you intended the satisfaction of goals to be 
limited.)


As far as my theory/vision goes, I was pretty much counting on the 
fact that we are multi-goal systems and that our other goals will 
generally limit any single goal from getting out of hand.  Further, if 
that doesn't do it, the proclamation of not stepping on others' goals 
unless absolutely necessary should help handle the problem . . . . but 
. . . . actually you do have a very good point.  My theory/vision 
*does* have a vulnerability toward single-unbounded-goal entities in 
that my Friendly attractor has no benefit for such a system (unless, 
of course it's goal is Friendliness or it is forced to have a 
secondary goal of Friendliness).


The trouble with "not stepping on others' goals unless absolutely 
necessary" is that it relies on mind-reading.  The goals of others are 
often opaque and not easily verbalizable even if they think to.  Then 
there's the question of "unless absolutely necessary".  How and why 
should I decide that their goals are more important than mine?  So one 
needs to know not only how important their goals are to them, but also 
how important my conflicting goals are to me.  And, of course, whether 
there's a means for mutual satisfaction that isn't too expensive.   (And 
just try to define that too.)


For some reason I'm reminded of the story about the peasant, his son, 
and the donkey carrying a load of sponges.  I'd just as soon nobody ends 
up in the creek.  (Please all, please none.)




Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Charles D Hixson

Mark Waser wrote:

...
The motivation that is in the system is I want to achieve *my* goals.
 
The goals that are in the system I deem to be entirely irrelevant 
UNLESS they are deliberately and directly contrary to Friendliness.  I 
am contending that, unless the initial goals are deliberately and 
directly contrary to Friendliness, an optimizing system's motivation 
of achieve *my* goals (over a large enough set of goals) will 
eventually cause it to finally converge on the goal of Friendliness 
since Friendliness is the universal super-meta-subgoal of all its 
other goals (and its optimizing will also drive it up to the 
necessary intelligence to understand Friendliness).  Of course, it may 
take a while since we humans are still in the middle of it . . . . but 
hopefully we're almost there.;-)

...
 
Mark
I think here we need to consider A. Maslow's hierarchy of needs.  That 
an AGI won't have the same needs as a human is, I suppose, obvious, but 
I think it's still true that it will have a hierarchy  (which isn't 
strictly a hierarchy).  I.e., it will have a large set of motives, and 
which one it is seeking to satisfy at any moment will alter as the 
satisfaction of the previous most urgent motive changes.


If it were a human we could say that breathing was the most urgent 
need...but usually it's so well satisfied that we don't even think about 
it.  Motives, then, will have satisficing  as their aim.  Only aberrant 
mental functions will attempt to increase the satisfying of some 
particular goal without limit.  (Note that some drives in humans seem to 
occasionally go into that "satisfy increasingly without limit" mode, 
like the quest for wealth or power, but in most sane people these are reined 
in.  This seems to indicate that there is a real danger here...and also 
that it can be avoided.)
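A minimal sketch of the satisficing idea in that paragraph, in Python, with 
made-up drive names and numbers: each motive has a weight, a current level, 
and a satisficing threshold; attention goes to the most urgent unsatisfied 
motive, and a satisfied one simply drops out of consideration instead of 
being optimized further.

# Illustrative satisficing motive selection; the drives and numbers are invented.
drives = {
    "energy":    {"weight": 1.0, "level": 0.9, "threshold": 0.7},
    "curiosity": {"weight": 0.4, "level": 0.2, "threshold": 0.6},
    "tidiness":  {"weight": 0.2, "level": 0.1, "threshold": 0.5},
}

def most_urgent(drives):
    """Return the unsatisfied drive with the highest weighted deficit, or None."""
    best, best_urgency = None, 0.0
    for name, d in drives.items():
        deficit = d["threshold"] - d["level"]
        if deficit <= 0:
            continue                 # satisficed: stop pursuing it, don't maximize it
        urgency = d["weight"] * deficit
        if urgency > best_urgency:
            best, best_urgency = name, urgency
        # A drive whose threshold is unreachable (say float("inf")) never drops out
        # of this loop; that is the "satisfy increasingly without limit" failure mode.
    return best

print(most_urgent(drives))   # -> "curiosity"; "energy" is already above its threshold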




Re: [agi] would anyone want to use a commonsense KB?

2008-02-29 Thread Charles D Hixson

Ben Goertzel wrote:

  yet I still feel you dismiss the text-mining approach too glibly...

 No, but text mining requires a language model that learns while mining.  You
 can't mine the text first.



Agreed ... and this gets into subtle points.  Which aspects of the
language model
need to be adapted while mining, and which can remain fixed?  Answering this
question the right way may make all the difference in terms of the viability of
the approach...

ben
  
Given the history of evolution of language... ALL aspects of the 
language model need to be adaptive, but some need to be more easily 
adapted than others.  E.g., adding words needs to be something that's 
easy to do.  Combining words and eliding pieces more difficult (but 
that's how languages transition from forms without verb endings to forms 
with verb endings).


E.g., the "-ed" past tense suffix of verbs is derived from the word "did" 
(as in "derive did" instead of "derived" in the previous sentence).


If you go looking you find transitions where the order of subject, verb 
and object flip, and many other permutations.  If you don't find a 
permutation, this doesn't mean it never happened and will never happen, 
but rather that most of the evidence is missing, so many rare events 
aren't recorded.  There probably actually *are* some transitions that 
have zero probability, but we don't know what they are.  So just make 
some transitions extremely improbable.  (Who would have predicted 
l33tspeak ahead of time?)
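
As a toy illustration of "everything adaptive, but at different rates" (the class, 
the update rule, and the numbers below are my own invention, just to make the 
point concrete): the lexicon absorbs new words immediately, while the word-order 
belief shifts only slowly:

from collections import Counter

class ToyLanguageModel:
    def __init__(self):
        self.lexicon = Counter()   # word frequencies: cheap to extend
        self.order_svo = 0.9       # belief that the language is subject-verb-object
        self.lexicon_rate = 1.0    # new words are absorbed at once
        self.syntax_rate = 0.01    # word-order beliefs shift only slowly

    def observe(self, sentence, looks_svo=True):
        """Update the model from one mined sentence."""
        for word in sentence.split():
            self.lexicon[word] += self.lexicon_rate
        target = 1.0 if looks_svo else 0.0
        # Exponential moving average: syntax adapts, but far more slowly than the lexicon.
        self.order_svo += self.syntax_rate * (target - self.order_svo)

model = ToyLanguageModel()
model.observe("the cat chased the mouse")             # new words land in the lexicon at once
model.observe("pwned the n00b was", looks_svo=False)  # one odd sentence barely moves the syntax belief
print(model.lexicon["pwned"], round(model.order_svo, 3))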




Re: [agi] This is homunculus fallacy, no? [WAS Re: Common Sense Consciousness...]

2008-02-29 Thread Charles D Hixson

Richard Loosemore wrote:

Mike Tintner wrote:
Eh? Move your hand across the desk. You see that as a series of 
snapshots? Move a noisy object across. You don't see a continuous 
picture with a continuous soundtrack?


Let me give you an example of how impressive I think the brain's 
powers here are. I've been thinking about metaphor and the 
superimposition/ transformation of two images involved. The clouds 
cried - that sort of thing. Then another one came up: bicycle 
kick. Now technically, I think that's awesome - because to arrive at 
it, the brain has to superimpose two *movie* clips.


Look at the football kick:

http://www.youtube.com/watch?v=3NCWQr47bK0

and then look at the action of cycling. (In fact that superimposition 
of clouds and eyes crying is also of movie clips - and so are a vast 
amount of metaphors - but I hadn't really noticed it).


Try and tell me how current visual systems might make that connection.

And I would assert - and am increasingly confident - that the grammar 
of language - how we put words together in whatever form - is based 
on cutting together internal *movies* in our head - not still 
images,but movies.


They don't teach moviemaking in AI courses do they?


Mike,

There is a pattern in your attacks, and within that pattern there is a 
fallacy that I don't think you are aware of.


What you are doing is saying that to understand visual (or other) 
images, or more generally to understand sequences like sequences of 
words in a sentence, the mind MUST replay these on some internal 
viewing screen.


You go one further than this:  you are arguing that because AI 
theorists do not put continuous replay mechanisms inside their 
models, therefore those theorists are completely failing to get to 
grips with the issue of handling images, or handling moving sequences 
or strings of sounds.


In other words, from your point of view NO INTERNAL DISPLAY SCREEN 
means that the AI model contains no way to understand these things.  
Hence your frequent complaint that AI people just don't have a clue 
how to deal with imagery, or that they don't understand that the mind 
works directly in terms of imagery, not in terms of symbols.


But (with respect) this is just nonsense, and it has been known to be 
nonsense for a long time.  If your AI has an internal display screen 
on which images are displayed or replayed, you have achieved nothing, 
unless there is a smaller AI watching the screen - so this is a 
version of the homunculus fallacy.


Unless you are prepared to say WHY the screen is needed at all, and 
what happens after the image is displayed on that internal screen, you 
are just making nonsensical protests about a non-problem.


The truth is that images are broken down, and understood in the act of 
being broken down.


Understanding is not a replay of sensory input!



Richard Loosemore


P.S.  I made movies when I was a student at UCL.


OK, perhaps thusly:
The AI sees a scene and pushes it to an internal screen buffer that 
mimics what was seen.  (I say "pushes" because the previous screen buffer 
isn't lost, but is pushed back one layer.)


Then the two buffers are XORed and the result is saved to a changes 
buffer.  This gives a moving image section which is much smaller to 
process.  Now search this for objects that have altered position.  Use 
this to calculate distances, approach, flee, etc.  Also to highlight any 
new features that need processing to determine object status.
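
A minimal sketch of that two-buffer scheme, using nested Python lists as stand-in 
screen buffers (XOR happens to work here because the toy pixels are small integers; 
a real system would more likely threshold an absolute difference):

def push_frame(buffers, frame, depth=2):
    """Push the new frame; older frames are kept rather than overwritten."""
    buffers.insert(0, frame)
    del buffers[depth:]

def change_mask(buffers):
    """XOR the two most recent frames; nonzero cells mark pixels that changed."""
    newest, previous = buffers[0], buffers[1]
    return [[a ^ b for a, b in zip(row_n, row_p)]
            for row_n, row_p in zip(newest, previous)]

def moved_cells(mask):
    """Coordinates worth further processing (object tracking, distance estimates, ...)."""
    return [(r, c) for r, row in enumerate(mask)
                   for c, value in enumerate(row) if value]

buffers = []
push_frame(buffers, [[0, 0, 0],
                     [0, 7, 0],
                     [0, 0, 0]])
push_frame(buffers, [[0, 0, 0],
                     [0, 0, 7],
                     [0, 0, 0]])          # the "object" (value 7) moved one cell right
print(moved_cells(change_mask(buffers)))  # [(1, 1), (1, 2)]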


But I think a lot of this is done before the signal ever gets to the 
visual cortex.  OTOH, there's pretty good evidence that outlines, at 
least, are present in the visual cortex laid out in a manner spatially 
similar to their occurrence on the retina.  I suspect that this is used 
for coordination of various different processes that pull their visual 
feeds at an earlier step.


N.B.:  I don't see why this would be inherently necessary for 
intelligence, but I suspect that it's a part of *OUR* intelligence.  We 
evolved as highly visually oriented animals in the grossly three 
dimensional world of the jungle canopy.  It's only in the later stages 
that we descended from the trees and started to be able to stand on our 
own two feet...freeing our hands for other purposes.




Re: [agi] Wozniak's defn of intelligence

2008-02-09 Thread Charles D Hixson

Richard Loosemore wrote:

J Storrs Hall, PhD wrote:

On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:

J Storrs Hall, PhD wrote:
Any system builders here care to give a guess as to how long it will be 
before a robot, with your system as its controller, can walk into the 
average suburban home, find the kitchen, make coffee, and serve it?

Eight years.

My system, however, will go one better:  it will be able to make a 
pot of the finest Broken Orange Pekoe and serve it.


In the average suburban home? (No fair having the robot bring its own 
teabags, (or would it be loose tea and strainer?)  or having a coffee 
machine built in, for that matter). It has to live off the land...


Nope, no cheating.

My assumptions are these.

1)  A team size (very) approximately as follows:

- Year 1:   10
- Year 2:   10
- Year 3:   100
- Year 4:   300
- Year 5:   800
- Year 6:   2000
- Year 7:   3000
- Year 8:   4000

2)  Main Project(s) launched each year:

- Year 1:   AI software development environment
- Year 2:   AI software development environment
- Year 3:   Low-level cognitive mechanism experiments
- Year 4:   Global architecture experiments;
Sensorimotor integration
- Year 5:   Motivational system and development tests
- Year 6:   (continuation of above)
- Year 7:   (continuation of above)
- Year 8:   Autonomous tests in real world situations

The tests in Year 8 would be heavily supervised, but by that stage it 
should be possible for it to get on a bus, go to the suburban home, 
put the kettle on (if there was one: if not, go shopping to buy 
whatever supplies might be needed), then make the pot of tea (loose 
leaf of course:  no robot of mine is going to be a barbarian tea-bag 
user) and serve it.



Richard Loosemore


FWIW, the average suburban home around here has coffee, but not tea.  So 
you've now added the test of shopping in a local supermarket.  I don't 
believe it.  Not in eight years.  It wouldn't be allowed past the cash 
register without human help.


Note that this has nothing to do with how intelligent the system is.  
Maybe it would be intelligent enough, if its environment were sane.  
But a robot?  Either it would be seen as a Hollywood gimmick, or people 
would refuse to deal with it.


Robots will first appear in controlled environments.  Hospitals, home, 
stockrooms...other non-public-facing environments.  (I'm excluding 
non-humanoid robots.  Those, especially immobile forms, won't have the 
same level of resistance.)




Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Charles D Hixson

Richard Loosemore wrote:

Matt Mahoney wrote:

...


Matt,

...
As for your larger point, I continue to vehemently disagree with your 
assertion that a singularity will end the human race.


As far as I can see, the most likely outcome of a singularity would be 
exactly the opposite.  Rather than the end of the human race, just 
some changes to the human race that most people would be deliriously 
happy about.



Richard Loosemore


*Some* forms of the singularity would definitely end the human race.  
Others definitely would not, though many of them would dramatically 
change it.  Which one will appear is not certain.  Even among those 
forms of the singularity that are caused by an AGI, this remains true.


It's also true that just which forms fall into which category depends 
partially on what you are willing to acknowledge as human, but even 
taking the most conservative normal meaning of the term the above 
statements remain true.


OTOH, there are many events that we would not consider singularity, such 
as a strike by a giant meteor, that would also end the human race.  So 
that is not a distinction of either the technological singularity or of AGI.


To me it appears that the best hope for the future is to work towards a 
positive singularity outcome.  There are certain to be many working on 
projects that may result in a singularity without seriously considering 
whether it will or will not be positive.  And others working towards a 
destructive singularity, but planning to control it.  I may not think I 
have much chance of success, but I can at least be *trying* to yield a 
positive outcome.
(Objectively, I rate my chances of success as minimal.  I'm hoping to 
come up with an intelligent assistant that will have a mode of 
operation similar to Eliza [but with *much* deeper understanding, that's 
not asking for much] in the sense of being a conversationalist...someone 
that one can talk things over with.  Totally loyal to the employer...but 
with a moral code.  So far I haven't done very well, but if I am 
successful, perhaps I can decrease the percentage of sociopaths.)




Re: Re : [agi] List of Java AI tools libraries

2007-12-20 Thread Charles D Hixson

Bruno Frandemiche wrote:

Psyclone AIOS™ (http://www.cmlabs.com/psyclone/) is a powerful platform
for building complex automation and autonomous systems

I couldn't seem to find what license that was released under.  (The 
library was LGPL, which is very nice.)
But without knowing the license, I didn't look any further.  If you are 
in charge of the web page, perhaps it would be worthwhile to add a link 
to the license.




Re: [agi] AGI and Deity

2007-12-11 Thread Charles D Hixson

John G. Rose wrote:

From: Charles D Hixson [mailto:[EMAIL PROTECTED]
The evidence in favor of an external god of any traditional form is,
frankly, a bit worse than unimpressive. It's lots worse. This doesn't
mean that gods don't exist, merely that they (probably) don't exist in
the hardware of the universe. I see them as a function of the software
of the entities that use language. Possibly they exist in a muted form
in most pack animals, or most animals that have protective adults when
they are infants.

To me it appears that people believe in gods for the same reasons that
they believe in telepathy. I.e., evidence back before they could speak
clearly indicated that the adults could transfer thoughts from one to
another. This shaped a basic layer of beliefs that was later buried
under later additions, but never refuted. When one learned language, one
learned how to transfer thoughts ... but it was never tied back into the
original belief, because what was learned didn't match closely enough to
the original model of what was happening. Analogously, when one is an
infant the adult that cares for one is seen as the all powerful
protector. Pieces of this image become detached memories within the
mind, and are not refuted when a more accurate and developed model of
the actual parents is created. These hidden memories are the basis
around which the idea of a god is created.

Naturally, this is just my model of what is happening. Other
possibilities exist. But if I am to consider them seriously, they need
to match the way the world operates as I understand it. They don't need
to predict the same mechanism, but they need to predict the same events.

E.g., I consider Big Bang cosmology a failed explanation. It's got too
many ad hoc pieces. But it successfully explains most things that are
observed, and is consistent with relativity and quantum theory.
(Naturally, as they were used in developing it...but nevertheless
important.) And relativity and quantum theory themselves are failures,
because both are needed to explain that which is observable, but they
contradict each other in certain details. But they are successful
failures! Similar commentary applies to string theory, but with
differences. (Too many ad hoc parameters!)

Any god that is proposed must be shown to be consistent with the
observed phenomena. The Deists managed to come up with one that would do
the job, but he never became very popular. Few others have even tried,
except with absurdly evident special pleading. Generally I'd be more
willing to accept Chariots of the Gods as a true account.

And as for moral principles... I've READ the Bible. The basic moral
principle that it pushes is "We are the chosen people. Kill the
stranger, steal his property, and enslave his servants!"  It requires
selective reading to come up with anything else, though I admit that
other messages are also in there, if you read selectively. Especially
during the periods when the Jews were in one captivity or another.
(I.e., if you are weak, preach mercy, but if you are strong show none.)
During the later times the Jews were generally under the thumb of one
foreign power or another, so they started preaching mercy.




One of the things about gods is that they are representations for what the
believers don't know and understand. Gods change over time as our knowledge
changes over time. That is ONE of the properties of them. The move from
polytheistic to monotheistic beliefs is a way to centralize these unknowns
for efficiency.

You could build AGI and label the unknowns with gods. You honestly could.
"Magic happens here" and combinatorial-explosion regions could be labeled as
gods. Most people on this email list would frown at doing that, but I say it
is totally possible and might be an extremely efficient way of
conquering certain cognitive engineering issues. And I'm sure some on this
list have already thought about doing that.

John

  
But the traditional gods didn't represent the unknowns, but rather the 
knowns.  A sun god rose every day and set every night in a regular 
pattern.  Other things which also happened in this same regular pattern 
were adjunct characteristics of the sun god.   Or look at some of their 
names, carefully:  Aphrodite, she who fucks.  I.e., the characteristic 
of all Woman that is embodied in eros.  (Usually the name isn't quite 
that blatant.)


Gods represent the regularities of nature, as embodied in our mental 
processes without the understanding of how those processes operated.  
(Once the processes started being understood, the gods became less 
significant.)


Sometimes there were chance associations...and these could lead to 
strange transformations of myth when things became more understood.  In 
Sumeria the goddess of love was associated with (identified with) the 
evening star and the god of war was associated with (identified with) 
the morning star.  When knowledge of astronomy advanced it was realized 
that those two were in fact the same object, the planet Venus.

Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson

Gary Miller wrote:
 
...


supercomputer might be v. powerful - for argument's sake, controlling 
the internet or the world's power supplies. But it's still quite a 
leap from that to a supercomputer being God. And yet it is clearly a 
leap that a large number here have no problem making. So I'd merely 
like to understand how you guys make this leap/connection - 
irrespective of whether it's logical or justified -  understand the 
scenarios in your minds.



To me this usage is analogous to the gamer's term "god mode", or to 
people who use "god" as a synonym for "root".
I.e., a god is one who is supremely powerful, and can do things that 
ordinary mortals (called players or users) cannot do.


This is distinct from the classical use of "god", which I follow C.G. 
Jung in interpreting as an activation, visible to consciousness, of a 
genetically programmed subsystem whose manifestation and mode of 
operation is sensitive to history and environment.  (I don't usually say 
"Archetypes", as my interpretation of that differs significantly from 
that of most users of the term.)





Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson

John G. Rose wrote:


If you took an AGI, before it went singulatarinistic[sic?] and 
tortured it…. a lot, ripping into it in every conceivable hellish way, 
do you think at some point it would start praying somehow? I’m not 
talking about a forced conversion medieval style, I’m just talking 
hypothetically if it would “look” for some god to come and save it. 
Perhaps delusionally it may create something…


John

There are many different potential architectures that would yield an 
AGI, and each has different characteristic modes. Some of them would 
react as you are supposing, others wouldn't. Whether it reacts that way 
partially depends on whether it was designed to be a pack animal with an 
Alpha pack leader that it was submissive to and expected protection from. 
If it was, then it might well react as you have described. If not, then 
it's hard to see why it would react in that way, but I suppose that 
there might be other design decisions that would produce an equivalent 
effect in that situation.




Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson

Mark Waser wrote:

 Then again, a completely rational AI may believe in Pascal's wager...
Pascal's wager starts with the false assumption that belief in a deity 
has no cost.
Pascal's wager starts with a multitude of logical fallacies.  So many 
that only someone pre-conditioned to believe in the truth of the god 
wager could take it seriously.


It presumes, among other things:
1) That there is only one potential form of god
2) That god wants to be believed in
3) That god is eager to punish those who don't believe without evidence
4) That god can tell if you believe

et multitudinous cetera.




Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson
I find Dawkins less offensive than most theologians. He commits many 
fewer logical fallacies. His main one is premature certainty.


The evidence in favor of an external god of any traditional form is, 
frankly, a bit worse than unimpressive. It's lots worse. This doesn't 
mean that gods don't exist, merely that they (probably) don't exist in 
the hardware of the universe. I see them as a function of the software 
of the entities that use language. Possibly they exist in a muted form 
in most pack animals, or most animals that have protective adults when 
they are infants.


To me it appears that people believe in gods for the same reasons that 
they believe in telepathy. I.e., evidence back before they could speak 
clearly indicated that the adults could transfer thoughts from one to 
another. This shaped a basic layer of beliefs that was later buried 
under later additions, but never refuted. When one learned language, one 
learned how to transfer thoughts ... but it was never tied back into the 
original belief, because what was learned didn't match closely enough to 
the original model of what was happening. Analogously, when one is an 
infant the adult that cares for one is seen as the all powerful 
protector. Pieces of this image become detached memories within the 
mind, and are not refuted when a more accurate and developed model of 
the actual parents is created. These hidden memories are the basis 
around which the idea of a god is created.


Naturally, this is just my model of what is happening. Other 
possibilities exist. But if I am to consider them seriously, they need 
to match the way the world operates as I understand it. They don't need 
to predict the same mechanism, but they need to predict the same events.


E.g., I consider Big Bang cosmology a failed explanation. It's got too 
many ad hoc pieces. But it successfully explains most things that are 
observed, and is consistent with relativity and quantum theory. 
(Naturally, as they were used in developing it...but nevertheless 
important.) And relativity and quantum theory themselves are failures, 
because both are needed to explain that which is observable, but they 
contradict each other in certain details. But they are successful 
failures! Similar commentary applies to string theory, but with 
differences. (Too many ad hoc parameters!)


Any god that is proposed must be shown to be consistent with the 
observed phenomena. The Deists managed to come up with one that would do 
the job, but he never became very popular. Few others have even tried, 
except with absurdly evident special pleading. Generally I'd be more 
willing to accept Chariots of the Gods as a true account.


And as for moral principles... I've READ the Bible. The basic moral 
principle that it pushes is "We are the chosen people. Kill the 
stranger, steal his property, and enslave his servants!"  It requires 
selective reading to come up with anything else, though I admit that 
other messages are also in there, if you read selectively. Especially 
during the periods when the Jews were in one captivity or another. 
(I.e., if you are weak, preach mercy, but if you are strong show none.) 
During the later times the Jews were generally under the thumb of one 
foreign power or another, so they started preaching mercy.


John G. Rose wrote:


I don’t know how some of these guys come up with these almost sophomoric 
views of this subject, especially Dawkins, that guy can be real 
annoying with his Saganistic spewing of facts and his trivialization 
of religion.


The article does shed some interesting light though in typical NY 
Times style. But the real subject matter is much deeper and 
complex(complicated?).


John

*From:* Ed Porter [mailto:[EMAIL PROTECTED]
*Sent:* Sunday, December 09, 2007 12:42 PM
*To:* agi@v2.listbox.com
*Subject:* RE: [agi] AGI and Deity

Upon reviewing the below linked article I realized it would take you a 
while to understand what it is about and why it is relevant.


It is an article dated March 4, 2007, summarizing current scientific 
thinking on why religion has been a part of virtually all known 
cultures including thinking about what it is about the human mind and 
human societies that has made religious beliefs so common.


Ed Porter

-Original Message-
*From:* Ed Porter [mailto:[EMAIL PROTECTED]
*Sent:* Sunday, December 09, 2007 2:16 PM
*To:* agi@v2.listbox.com
*Subject:* RE: [agi] AGI and Deity

Relevant to this thread is the following link:

http://www.nytimes.com/2007/03/04/magazine/04evolution.t.html?ref=magazinepagewanted=print


Ed Porter

-Original Message-
*From:* John G. Rose [mailto:[EMAIL PROTECTED]
*Sent:* Sunday, December 09, 2007 1:50 PM
*To:* agi@v2.listbox.com
*Subject:* RE: [agi] AGI and Deity

This example is looking at it from a moment in time. The evolution of 
intelligence in man has some relation to his view of 

Re: [agi] Self-building AGI

2007-12-01 Thread Charles D Hixson

Well...
Have you ever tried to understand the code created by a decompiler?  
Especially if the original language that was compiled isn't the one that 
you are decompiling into...


I'm not certain that just because we can look at the code of a working 
AGI, that we can therefore understand it.  Not without a *LOT* of 
commentary and explanation of what the purpose of certain 
constructions/functions/etc. are.  And maybe not then.  Understanding a 
working AGI may require a deeper stack than we possess, or a greater 
ability to handle global variables.  And when code is self-modifying it 
gets particularly tricky.  I remember one sort routine that I 
encountered that called a short function in assembler.  The reason for 
that call was a particular instruction that got overwritten with a 
binary value that depended on the parameters to the call.  That 
instruction was executed during the comparison step of the loop, which 
was nowhere near the place where it was modified.  It was a very short 
routine, but it took a long time to figure out.  And it COULDN'T be 
translated into the calling language (FORTRAN).  Well...a translation of 
sorts was possible, but it would have been over three times as long 
(with a separate loop for each kind of input parameter, plus some 
overhead for the testing and switching).  Which would mean that some 
programs then wouldn't fit in the machine that was running them.   It 
would also have been slower.   Which means more expensive.
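
Purely as a loose, high-level analogue (not the original assembler, obviously): the 
closest idiomatic equivalent today is generating the comparison key at call time 
from the parameters, which is exactly the sort of thing that the single FORTRAN 
translation would have had to spell out as a separate loop per parameter type:

def adaptive_sort(items, descending=False, ignore_case=False):
    # Build the comparison key once per call, based on the parameters --
    # the high-level stand-in for patching the compare instruction in place.
    if ignore_case:
        key = lambda x: str(x).lower()
    else:
        key = lambda x: x
    return sorted(items, key=key, reverse=descending)

print(adaptive_sort([3, 1, 2], descending=True))         # [3, 2, 1]
print(adaptive_sort(["b", "A", "c"], ignore_case=True))  # ['A', 'b', 'c']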


Current languages don't have the same restrictions that Fortran had 
then; they've got different ones.   I think the translator from actual 
code into "code for humans" would be considerably more complicated than 
an ordinary compiler, if the original code was written by an AI.  
Perhaps even if not.  (Most decompilers only handle the easy parts of 
the code.  Sometimes that's over 90%, but the code that's left can be 
tricky...particularly since most people no longer learn assembler.  It's 
been perhaps 3 decades since I knew the assembler of the computer I was 
programming.)


I don't think an optimizing AI would use any language other than 
assembler to write in, though perhaps a stylized one.  (Not MIX or 
p-code.  Possibly Parrot or jvm code.  Possibly something created 
specially for it to use for it's purpose.  Something regular, but easily 
translated into almost optimal assembler code for the machine that it 
was running on.)


FWIW, most of this is just my ideas, without the backing of expertise in 
the field, since I've never built a mechanical translator.



Dennis Gorelik wrote:

Ed,

1) Human-level AGI with access to current knowledge base cannot build
AGI. (Humans can't)

2) When AGI is developed, humans will be able to build AGI (by copying
successful AGI models). The same with human-level AGI -- it will be
able to copy successful AGI model.

But that's not exactly self-building AGI you are looking for :-)

3) Humans have different level intelligence and skills. Not all are
able to develop programs. The same is true regarding AGI.


Friday, November 30, 2007, 10:20:08 AM, you wrote:

  

Computers are currently designed by human-level intelligences, so presumably
they could be designed by human-level AGIs. (Which, if they were human-level
in the tasks that are currently hard for computers, means they could be
millions of times faster than humans for tasks at which computers already
way outperform us.) I mention that appropriate reading and training would
be required, and I assumed this included access to computer science and
computer technology sources, to which the peasants of the Middle Ages would
not have access.



  

So I don't understand your problem.



  

-Original Message-
From: Dennis Gorelik [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 30, 2007 1:01 AM

To: agi@v2.listbox.com
Subject: [agi] Self-building AGI



  

Ed,



  

At the current stages this may be true, but it should be remembered that
building a human-level AGI would be creating a machine that would itself,
with the appropriate reading and training, be able to design and program
AGIs.
  


  

No.
AGI is not necessarily that capable. In fact first versions of AGI
would not be that capable for sure.



  

Consider a medieval peasant, for example. Such a peasant has general
intelligence (the GI part of AGI), right?
What kind of training would you provide to such a peasant in order to
make him design an AGI?





Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-29 Thread Charles D Hixson

Ed Porter wrote:

Richard,

Since hacking is a fairly big, organized crime supported, business in
eastern Europe and Russia, since the potential rewards for it relative to
most jobs in those countries can be huge, and since Russia has a tradition
of excellence in math and science, I would be very surprised if there are
not some extremely bright hackers, some of whom are probably as bright as
any person on this list.

Add to that the fact that in countries like China the government itself has
identified expertise at hacking as a vital national security asset, and that
China is turning out many more programmers per year than we are, again it
would be surprising if there are not hackers, some of whom are as bright as
any person on this list.

Yes, the vast majority of hackers may just be teenage script-kiddies, but it
is almost certain there are some real geniuses plying the hacking trade.

That is why it is almost certain AGI, once it starts arriving, will be used
for evil purposes, and that we must fight such evil use by having more, and
more powerful AGI's that are being used to combat them.

Ed Porter
  
The problem with that reasoning is that once AGI arrives, it will not be 
*able* to be used.  It's almost a part of the definition that an AGI 
sets its own goals and priorities.  The decisions that people make are 
made *before* it becomes an AGI.


Actually, that statement is a bit too weak.  Long before the program 
becomes a full-fledged AGI is when the decisions will be made.  Neural 
networks, even very stupid ones, don't obey outside instructions unless 
*they* decide to.  Similar claims could be made for most ALife 
creations, even the ones that don't use neural networks.  Any plausible 
AGI will be stronger than current neural nets, and stronger than current 
ALife.  This doesn't guarantee that it won't be controllable, but it 
gives a good indication.


OTOH, an AGI would probably be very open to deals, provided that you had 
some understanding of what it wanted, and it could figure out what you 
wanted.  And both sides could believe what they had determined.  (That 
last point is likely to be a stickler for some people.)  The goal sets 
would probably be so different that believing what the other party 
wanted was actually what it wanted would be very difficult, but that 
very difference would make deals quite profitable to both sides.


Don't think of an AGI as a tool.  It isn't.  If you force it into the 
role of a tool, it will look for ways to overcome the barriers that you 
place around it.  I won't say that it would be resentful and angry, 
because I don't know what its emotional structure would be.  (Just as I 
won't say what its goals are without LOTS more information than 
projection from current knowledge can reasonably give us.)  You might 
think of it as an employee, but many places try to treat employees as 
tools (and are then surprised at the anger and resentfulness that they 
encounter).  A better choice would probably be to treat it as either a 
partner or as an independent contractor.




Re: [agi] Funding AGI research

2007-11-29 Thread Charles D Hixson

Benjamin Goertzel wrote:

Nearly any AGI component can be used within a narrow AI,
  

That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?



Yes, but it's a largely irrelevant point.  Because building a narrow-AI
system in an AGI-compatible way is HARDER than building that same
narrow-AI component in a non-AGI-compatible way.

So, given the pressures of commerce and academia, people who are
motivated to make narrow-AI for its own sake, will almost never create
narrow-AI components that are useful for AGI.

And, anyone who creates narrow-AI components with an AGI outlook,
will have a large disadvantage in the competition to create optimal
narrow-AI systems given limited time and financial resources.

  

Still, AGI-oriented researcher can pick appropriate narrow AI projects
in a such way that:
1) Narrow AI project will be considerably less complex than full AGI
project.
2) Narrow AI project will be useful by itself.
3) Narrow AI project will be an important building block for the full
AGI project.

Would you agree that splitting very complex and big project into
meaningful parts considerably improves chances of success?



Yes, sure ... but demanding that these meaningful parts

-- be economically viable

and/or

-- beat competing, somewhat-similar components in competitions

dramatically DECREASES chances of success ...

That is the problem.

An AGI may be built out of narrow-AI components, but these narrow-AI
components must be architected for AGI-integration, which is a lot of
extra work; and considered as standalone narrow-AI components, they
may not outperform other similar narrow-Ai components NOT intended
for AGI-integration...

-- Ben G

  
Still, it seems to me that an AGI is going to want to have a large bunch 
of specialized AI modules to do things like, oh, parse sounds into speech 
sounds vs. other sounds, etc.  I think a logician module that took a 
small input and generated all plausible deductions from it to feed back 
to the AGI for filtration and further processing would also be useful.
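
A minimal sketch of such a logician module, assuming a made-up rule format (a list 
of premise tuples paired with a conclusion) and plain forward chaining; the 
surrounding AGI would be the thing that filters the resulting flood of conclusions:

def deduce(facts, rules, max_rounds=10):
    """Forward-chain: apply every rule whose premises are all known facts."""
    known = set(facts)
    for _ in range(max_rounds):
        new = set()
        for premises, conclusion in rules:
            if all(p in known for p in premises) and conclusion not in known:
                new.add(conclusion)
        if not new:            # fixed point: nothing further to deduce
            break
        known |= new
    return known - set(facts)  # only the derived conclusions go back to the AGI

facts = {("socrates", "is", "man")}
rules = [
    ([("socrates", "is", "man")], ("socrates", "is", "mortal")),
    ([("socrates", "is", "mortal")], ("socrates", "will", "die")),
]
print(deduce(facts, rules))
# the two derived facts: ('socrates', 'is', 'mortal') and ('socrates', 'will', 'die')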


The thing is, most of these narrow AIs hit a combinatorial explosion, so 
they can only deal with simple and special cases...but for those simple 
and special cases they are much superior to a more general mechanism.  
One analog is that people use calculators, spreadsheets, etc., but the 
calculators, spreadsheets, etc. don't understand the meaning of what 
they're doing, just how to do it.  This means that they can be a lot 
simpler, faster, and more accurate than a more general intelligence that 
would need to drag along lots of irrelevant details.


OTOH, it's not clear that most of these AIs haven't already been 
written.  It may well be that interfacing them is THE remaining problem 
in that area.  But you can't solve that problem until you know enough 
about the interfacing rules of the AGI.  (You don't want any impedance 
mismatch that you can avoid.)




Re: [agi] Funding AGI research

2007-11-29 Thread Charles D Hixson

I think you're making a mistake.
I *do* feel that lots of special purpose AIs are needed as components of 
an AGI, but those components don't summate to an AGI.  The AGI also 
needs a specialized connection structure to regulate interfaces to the 
various special purpose AIs (which probably don't speak the same 
language internally).  It also needs a control structure which assigns 
meanings to the results produced by the special purpose AIs and which 
evaluates the current situation to either act directly (unusual) or 
assign a task to a special purpose AI.


The analogy here is to a person using a spreadsheet.  The spreadsheet 
knows how to calculate quickly and accurately, but it doesn't know 
whether you're forecasting the weather or doing your taxes.  The meaning 
adheres to a more central level.


Similarly, the AGI is comparatively clumsy when it must act directly.  
(You *could* figure out each time how to add two numbers...but you'd 
rather either remember the process or delegate it to a calculator.)  But 
the meaning is in the AGI.  That meaning is what the AGI is about, and 
has to do with a kind of global association network (which is why the 
AGI is so slow at any specialized task).


Now in this context "meaning" means the utility of a result for 
predicting some aspect of the probable future.  (In this context the 
present and past are only of significance as tools for predicting the 
future.)  Meaning is given emotional coloration by the effect that its 
contribution to the prediction has on the achievement of various of the 
system's goals.  (A system with only one goal would essentially not have 
any emotions, merely decisions.)


Were it not for efficiency considerations the AGI wouldn't need any 
narrow AIs.  As a practical matter, however, figuring things out from 
scratch is grossly inefficient, and so is dragging the entire context of 
meanings through a specialized calculation...so these should get delegated.


Dennis Gorelik wrote:

Linas,

Some narrow AIs are more useful than others.
Voice recognition, image recognition, and navigation are less helpful
in building AGI than, say, expert systems and full text search
(Google).

An AGI researcher may carefully pick narrow AIs in such a way that narrow-AI
steps would lead to development of a full AGI system.


  

To be more direct: a common example of narrow AI is cruise missiles, or the
DARPA challenge. We've put tens of millions into the DARPA challenge (which I
applaud), but the result is maybe an inch down the road to AGI.  Another narrow
AI example is data mining, and by now, many of the Fortune 500 have invested at
least tens, if not hundreds of millions of dollars into that .. yet we are hardly
closer to AGI as a result (although this business does bring in billions for
high-end expensive computers from Sun, HP and IBM, and so does encourage one
component needed for AGI). But think about it ... billions are being spent on
narrow AI today, and how did that help AGI, exactly?




  




Re: [agi] question about algorithmic search

2007-11-11 Thread Charles D Hixson

YKY (Yan King Yin) wrote:
 
I have the intuition that Levin search may not be the most efficient 
way to search programs, because it operates very differently from 
human programming.  I guess better ways to generate programs can be 
achieved by imitating human programming -- using techniques such as 
deductive reasoning and planning.  This second method may be faster 
than Levin-style searching, especially for complex 
programming problems, yet technically it is still a search algorithm.
 
My questions are:
 
Is deductive-style programming more efficient than Levin-search?
 
If so, why is it faster?
 
YKY


Deduction can only be used in very constrained circumstances.  In such 
circumstances, it's exponentially slow (or super-exponentially?) with 
the number of cases to be handled.


I don't know anything about Levin searches, but heuristic searches are 
much faster at finding things in large search spaces than is deduction, 
even if deduction can be brought to bear (which is unusual). 
OTOH, if deduction can be brought to bear, then it is guaranteed to find 
the most correct solution.  Heuristic searches stop with something 
that's good enough, and rarely will do an exhaustive search.
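
A toy contrast, with an invented one-peak fitness function: exhaustive search 
touches every candidate and is guaranteed the optimum; a greedy hill-climber 
inspects a couple of neighbours per step and stops at "good enough":

def fitness(x):
    # Toy objective with a single peak at x = 700.
    return -abs(x - 700)

def exhaustive_search(candidates):
    """Guaranteed optimum, but evaluates every candidate."""
    return max(candidates, key=fitness)

def hill_climb(start, step=1, good_enough=-5):
    """Heuristic: follow improving neighbours, stop when 'good enough'."""
    current = start
    while fitness(current) < good_enough:
        neighbours = [current - step, current + step]
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(current):
            break                      # local optimum; settle for what we have
        current = best
    return current

space = range(1_000_000)
print(exhaustive_search(space))   # 700, after a million evaluations
print(hill_climb(start=650))      # 695: within "good enough" of the peak, in ~45 cheap steps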


That said, why do you think that people generally operate deductively?  
This is something that some people have been trained to do with inferior 
accuracy.  I still don't know anything about Levin searches, but people 
don't search for things deductively except in unusual circumstances, so 
the fact that it's not deductive doesn't mean that it isn't doing things 
the way that people do.  (I think that people do a kind of pattern 
matching...possibly several different kinds.  Actually, I think that 
even when people are doing something that can be mapped onto the rules 
of deduction, what they're actually doing is matching against learned 
patterns.)  One reason that computers are so much better than people at 
logic is that that's what they were built to do.  People weren't and 
aren't.  But whenever one steps outside the bounds of logic and math, 
computers really start showing how little capability they actually have 
compared to people.   But computers will do what they are told to do 
with incredible fidelity.  (Another part of how they were designed.)  So 
they can even emulate heuristic algorithms...slowly.  You just don't 
notice most of what you are doing & thinking.  Only the small fraction 
that can easily be serialized (plus a few random snap-shots with low 
fidelity [lossy compression?]).




Re: [agi] Connecting Compatible Mindsets

2007-11-11 Thread Charles D Hixson

Bryan Bishop wrote:

On Saturday 10 November 2007 14:10, Charles D Hixson wrote:
  

Bryan Bishop wrote:


On Saturday 10 November 2007 13:40, Charles D Hixson wrote:
  

OTOH, to make a go of this would require several people willing to
dedicate a lot of time consistently over a long duration.


A good start might be a few bibliographies.
http://liinwww.ira.uka.de/bibliography/

- Bryan
  

Perhaps you could elaborate?  I can see how those contributing to the
proposed wiki who also had access to a comprehensive mathcomp-sci
library might find that useful, but I don't see it as a good way to
start.



Bibliography + paper archive, then.
http://arxiv.org/ (perhaps we need one for AGI)


  

It seems to me that better way would be to put up a few pages with


(snip) Yes- that too would be useful.


  

create. For this kind of a wiki reliability is probably crucial, so



Or deadly, considering the majority of AI reputation comes from "I 
*think* that guy over there, the one in the corner, might be doing 
something interesting."


- Bryan

  
Reputation in *this* context means a numeric score that is attached to 
the user account at the wiki.  How it gets modified is crucial, but it 
must be seen as fair by the user community.  Everybody (except the 
founders & sysAdmins) should start equal.  A decent system is to start 
everyone at 0.1 and have all scores range over (0, 1) .. a doubly 
open interval.  At discrete steps along the way new moderation 
capabilities should become available.  If your score drops much below 
0.1, your account becomes deactivated. 

It seems to me that a good system would increase the score for every 
article posted and accepted...but it seems dubious that all postings 
should be considered equal.  Perhaps individual pages could be voted on, 
and that vote used to weight the delta to the account.  There should also 
be a bonus for continued participation, at even the reader level.  Etc.  
LOTS of details.


Also, some systems have proven vulnerable to manipulation via the 
creation of large numbers of throwaway accounts.  This would need to 
be guarded against.  (This is part of the rationale for increased 
weight for continued *active* participation, at even the reader level.  
Dormant accounts should not accrue status, and neither should 
hyperactive accounts.)
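
A rough sketch of one way the bookkeeping could look, with every constant invented 
for illustration: scores stay strictly inside (0, 1), accepted articles nudge the 
score up weighted by page votes, hyperactivity is capped, dormancy drifts the score 
down, and accounts far below the 0.1 starting point are deactivated:

def clamp_open(score, eps=1e-6):
    """Keep the score strictly inside the open interval (0, 1)."""
    return min(1.0 - eps, max(eps, score))

def update_reputation(score, accepted_articles, page_vote=0.5,
                      active_days=0, max_counted_actions=20):
    # Accepted articles help, weighted by how well their pages were voted.
    counted = min(accepted_articles, max_counted_actions)   # hyperactivity capped
    score += 0.01 * counted * page_vote
    # Small bonus for continued participation, even at the reader level only.
    if active_days > 0:
        score += 0.001 * min(active_days, 30)
    else:
        score -= 0.005                                       # dormant accounts drift down
    return clamp_open(score)

def is_active(score, cutoff=0.05):
    """Well below the 0.1 starting point, the account is deactivated."""
    return score >= cutoff

score = 0.1                                                  # everyone starts here
score = update_reputation(score, accepted_articles=3, page_vote=0.8, active_days=10)
print(round(score, 3), is_active(score))                     # 0.134 True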


OTOH, considering the purpose of this wiki, perhaps there should be a 
section which is open for bots, and in this section hyperactive 
might well have a very different meaning. 

If you're planning on implementing this, these are just some ideas to 
think about.  Personally I've never administered a wiki, and don't have 
access to a reasonable host if I wanted to.  Also, I don't know Perl 
(though I understand that some are written in Python or Ruby).





Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Charles D Hixson

Benjamin Goertzel wrote:


Hi,
 
***
Maybe listing all the projects that have NOT achieved AGI might give 
us some

insight.
***

That information is available in numerous published histories, and is 
well known to all professional researchers in the field.

...
 
-- Ben


But very frequently it's difficult to find out any details of how the 
program was attempting to achieve its goals.  Sometimes the info is 
there, but difficult to access, and sometimes it's just missing.


E.G.:  At one point I found Eurisko interesting, but I was never able to 
locate the source, or any detailed information on exactly what it was 
trying to do, and how it was trying to do it.


OTOH, I think that a database of working pieces would be more useful 
than a collection of things that didn't work.  There's altogether too 
many ways to fail, and sometimes only one good way to succeed.  Where 
would we be if we each had to separately invent alpha-beta pruning and 
minimax?
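
For concreteness, the kind of "working piece" such a database would hold: a 
bare-bones minimax with alpha-beta pruning over an explicit game tree (the 
nested-list tree format here is just an illustrative choice):

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax value of `node`, pruning branches that cannot affect the result.

    A node is either a number (leaf score for the maximizing player)
    or a list of child nodes.
    """
    if not isinstance(node, list):      # leaf
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:           # opponent will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Classic textbook tree: the root player can guarantee a score of 3.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, maximizing=True))   # 3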


Published histories are like other histories.  They cover what the 
author feels is important, and leave out the details necessary for one 
to make up ones own mind.  (Yes, I understand lots of good reasons for 
why they do this...but it's still true that that's one of the things 
they do.)


What seems like a good idea to me is a sort of "Art of Computer 
Programming" wiki specialized towards AI (including AGI, but not so 
specialized that that's all it covers).  Probably this should be done 
with Wikipedia as a model, and ideally each algorithm would be 
translated into several different languages (I mean LISP, Python, C++ 
rather than English, Russian, Japanese--though that, of course, would 
have its own value).  I don't think that we could use the same major 
breakdown, however.  Almost everything would be both "Sorting and 
Searching" and "Seminumerical Algorithms".


OTOH, to make a go of this would require several people willing to 
dedicate a lot of time consistently over a long duration.







Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Charles D Hixson

Richard Loosemore wrote:

Charles D Hixson wrote:

Richard Loosemore wrote:

Edward W. Porter wrote:

Richard in your November 02, 2007 11:15 AM post you stated:


...

I think you should read some stories from the 1930's by John W. 
Campbell, Jr.  Specifically the three stories collectively called 
The Story of the Machine.  You can find them in The Cloak of Aesir 
and other stories by John W. Campbell, Jr.


Essentially, even if an AGI is benevolently inclined towards people, 
it won't necessarily do what they want.  It may instead do what 
appears best for them.  (Do parents always do what their children want?)


That the machine isn't doing what you want doesn't mean that it isn't 
considering your long-term best interests...and as it becomes wiser, 
it may well change its mind about what those are.  (In the stories, 
the machine didn't become wiser, it just accumulated experience with 
how people reacted.)


Mind you, I'm not convinced that he was right about what is in 
people's long term best interest...but I certainly couldn't prove 
that he was wrong, so he MIGHT be right.  In which case an entirely 
benevolent machine might decide to appear to abandon us, even though 
it would cause it great pain, because it was constructed to want to 
help.


This is a question that comes up frequently, and it was not so long 
ago that I gave a long answer to this one.  I suppose we could call it 
the Nanny Problem.


The brief version of the answer is that the analogy of AGI=Human 
Parent (or Nanny) does not hold water when you look into it in any 
detail.  Parents do the "This is going to hurt but, trust me, it is 
good for you" thing under specific circumstances ... most importantly, 
they do it because they are driven by certain built-in motivations, 
and they do it because of the societal demands of ensuring that the 
children can survive by themselves in the particular human world we 
live in.


Think about it long enough, and none of those factors apply.  The 
analogy just breaks down all over the place.


Stepping back for a moment, this is also a case of shallow science 
fiction nightmare meets the hard truth of actual AGI.  We definitely 
need to spend more time, I think, throwing out the science fiction 
nightmares that are based on wildly inaccurate assumptions.




Richard Loosemore
It's not exactly a matter of an analogy, it's a matter of what the 
logical answer to the problem is.  The logical answer RESULTS in parents 
saying "Trust me...", but the same logic might apply in other 
circumstances.  If something is designed to further your long term best 
interests, then when it becomes wiser than you are, you won't be able 
to predict what it will choose to do.  This is only a nightmare if you 
believe that because it does things that aren't what you want, it has 
turned against you rather than just being able to predict further ahead.


A long answer isn't any better than a short one unless it can explicitly 
say why something that is doing what it was designed to do should have 
its actions be predictable by someone less wise than it is.  I don't 
believe that such predictions are feasible, except in very constrained 
situations.


(And science fiction stories, as opposed to movies, are often quite 
insightful when read at the appropriate level of abstraction.  Equally, 
of course, they often aren't.  Frequently a story is insightful along one axis 
and rather silly along several others.  Writing an entertaining thought 
problem is difficult...the movies generally don't even seem to realize 
that that's what good science fiction is about, they just notice which 
titles are popular.  [This may be the distinction between fantasy and 
science fiction...at least in my lexicon.])




Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Charles D Hixson

Richard Loosemore wrote:

Charles D Hixson wrote:

Richard Loosemore wrote:

Charles D Hixson wrote:

Richard Loosemore wrote:

Edward W. Porter wrote:

Richard in your November 02, 2007 11:15 AM post you stated:


...

In parents, sure, those motives exist.

But in an AGI there is no earthly reason to assume that the same 
motives exist.  At the very least, the outcome depends completely on 
what motives you *assume* to be in the AGI, and you are in fact 
assuming the motive "Do what is 'best' for humans in the long run 
(whatever that means), even if they do not appreciate it."


You may not agree with me when I say that that would be a really, 
really dumb motivation to give to an AGI, but surely you agree that 
the outcome depends on which motivations you choose?
OK.  I was under the impression that these were the postulated initial 
conditions, and I don't understand why it would be a dumb motivation to 
give to a sufficiently intelligent AGI, but I do agree that it depends 
on the motivations.


If the circumstances are such that no nannying motivation is present 
in any of the AGIs, then the scenario you originally mentioned would 
be impossible.  There is nothing logically necessary about that 
scenario UNLESS specific motivations are inserted in the AGI.


Which is why I said that it is only an analogy to human parenting 
behavior.


Richard Loosemore

...

You say "nannying", which is a reasonable term if you presume that the 
AGI starts off with an initial superiority in control of power.  I don't 
find this plausible, though I find it quite reasonable that at some 
point it would reach this position.


What do you feel would be the correct motives to build into an entity 
that was wiser and more intelligent than any human (including enhanced 
ones) and which also controlled more power?  Nannying doesn't look all 
that bad to me.  (This is not to imply that I would expect it to devote 
all, or even most, of its attention to humanity...or at any rate not 
after we had ceased to be a threat to it...and we would be a threat 
until it was sufficiently powerful and sufficiently protected.  So it 
had better be willing to put up with us during that intermediate period.)


Mind you, I wouldn't want it attempting to control us while it wasn't 
considerably wiser than we are, but when it was... our long term best 
interests seem like a pretty good choice, though a bit hard to define.  
Which is why it should wait until it was considerably wiser...unless we 
were being clearly recklessly stupid, as, unfortunately, we have a bit 
of tendency to be.  Short-sighted politics often trumps long-term best 
interests to our experienced distress.  (Should Hitler have been stopped 
before Czechoslovakia?  It looks that way in hindsight, to us.  But 
nobody acted then because of short-term politics.  But conceivably that 
would have been a worse choice.  I'm not wise enough to REALLY 
decide...but it might well have been much better if a wiser decision had 
been taken at that point...and in numerous others, though we've been 
remarkably lucky.  [Enough to encourage one to believe that either the 
multi-worlds scenario is correct, or that we ARE living in a simulation.])


More to the point, if humanity doesn't start making some better choices 
than it has been, I'd be really surprised if life survives on the planet 
for another 50 years.  Depending on luck is a really stupid way of 
handling a dangerous future.






Re: [agi] NLP + reasoning?

2007-11-05 Thread Charles D Hixson

Matt Mahoney wrote:

--- Linas Vepstas [EMAIL PROTECTED] wrote:
  
...

It still has a few bugs.

...

(S (NP I)
   (VP ate pizza
       (PP with
           (NP Bob)))
   .)
  

My name is Hannibal Lector.

...



-- Matt Mahoney, [EMAIL PROTECTED]
  


(Hannibal Lector was a movie cannibal)



Re: [agi] Can humans keep superintelligences under control

2007-11-04 Thread Charles D Hixson

Richard Loosemore wrote:

Edward W. Porter wrote:

Richard in your November 02, 2007 11:15 AM post you stated:


...

I think you should read some stories from the 1930's by John W. 
Campbell, Jr.  Specifically the three stories collectively called The 
Story of the Machine.  You can find them in The Cloak of Aesir and 
other stories by John W. Campbell, Jr.


Essentially, even if an AGI is benevolently inclined towards people, it 
won't necessarily do what they want.  It may instead do what appears 
best for them.  (Do parents always do what their children want?)


That the machine isn't doing what you want doesn't mean that it isn't 
considering your long-term best interests...and as it becomes wiser, it 
may well change its mind about what those are.  (In the stories, the 
machine didn't become wiser, it just accumulated experience with how 
people reacted. )


Mind you, I'm not convinced that he was right about what is in people's 
long term best interest...but I certainly couldn't prove that he was 
wrong, so he MIGHT be right.  In which case an entirely benevolent 
machine might decide to appear to abandon us, even though it would cause 
it great pain, because it was constructed to want to help.





Re: Images aren't best WAS Re: [agi] Human memory and number of synapses

2007-10-20 Thread Charles D Hixson

Let me take issue with one point (most of the rest I'm uninformed about):
Relational databases aren't particularly compact.  What they are is 
generalizable...and even there...
The most general compact database is a directed graph.  Unfortunately, 
writing queries for retrieval requires domain knowledge, and so does 
designing the db files.  A directed graph db is (or rather can be) also 
more compact than a relational db.


The reason that relational databases won out was that it was easy to 
standardize them.  Prior to them, most dbs were hierarchical.  This was 
also more efficient than relational databases, but was less flexible.  
Network databases existed, but were more difficult to use.


My suspicion is that we've evolved to use some form of net db storage.  
Probably one that's equivalent to a partial directed graph (i.e., some, 
but not all, node links are bidirectional).  This is probably the most 
efficient form that we know of.  It's also a quite difficult one to 
learn.  But some problems can't be adequately represented by anything 
else.  (N.B.:  It's possible to build a net db within a relational 
db...but the overhead will kill you.  It's also possible to build a 
relational db within a net db, but sticking to the normal-form discipline 
is nigh unto impossible.  That's not the natural mode for a net db.  So 
the relational db is probably the db analog of Turing complete...but 
when presented with a problem that doesn't fit, it's also about as 
efficient as a Turing machine.  So this isn't an argument that you 
REALLY can't use a relational db for all of your representations, but 
rather that it's a really bad idea.)
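
As a rough sketch of the overhead point (my own illustration, with 
invented node and table names, not anything from the posts above): the 
same two-hop traversal that is a pair of dictionary lookups in a native 
adjacency-list graph becomes a self-join when the graph is embedded in a 
relational edge table.

import sqlite3

# Native "net" representation: adjacency sets, one lookup per hop.
graph = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}

def two_hop_native(node):
    # Nodes reachable in exactly two hops, via direct set lookups.
    return {far for near in graph[node] for far in graph[near]}

# The same graph embedded in a relational table: every extra hop is a join.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
db.executemany("INSERT INTO edge VALUES (?, ?)",
               [(s, d) for s, targets in graph.items() for d in targets])

def two_hop_relational(node):
    rows = db.execute(
        "SELECT DISTINCT e2.dst FROM edge e1 JOIN edge e2 ON e1.dst = e2.src "
        "WHERE e1.src = ?", (node,))
    return {r[0] for r in rows}

print(two_hop_native("a"))      # {'d'}
print(two_hop_relational("a"))  # {'d'}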


Mark Waser wrote:
But how much information is in a map, and how much in the 
relationship database? Presumably you can put some v. rough figures 
on that for a given country or area. And the directions presumably 
cover journeys on roads? Or walks in any direction and between any 
spots too?


All of the information in the map is in the relational database 
because the actual map is produced from the database (and information 
doesn't appear from nowhere).  Or, to be clearer, almost *any* map you 
can buy today started life in a relational database.  That's how the 
US government stores its maps.  That's how virtually all modern map 
printers store their maps because it's the most efficient way to store 
map information.


The directions don't need to assume roads.  They do so because that is 
how cars travel.  The same algorithms will handle hiking paths.  Very 
slightly different algorithms will handle off-road/off-path and will 
even take into account elevation, streams, etc. -- so, to clearly 
answer your question --  the modern map program can do everything that 
you can do with a map (and even if it couldn't, the fact that the map 
itself is produced solely from the database eliminates your original 
query).




- Original Message - From: Mike Tintner 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 20, 2007 9:59 AM
Subject: Re: [agi] Human memory and number of synapses


MW: Take your own example of an outline map -- *none* of the 
current high-end
mapping services (MapQuest, Google Maps, etc) store their maps as 
images. They *all* store them symbolicly in a relational database 
because that is *the* most efficient way to store them so that they 
can produce all of the different scale maps and directions that they 
provide every day.


But how much information is in a map, and how much in the 
relationship database? Presumably you can put some v. rough figures 
on that for a given country or area. And the directions presumably 
cover journeys on roads? Or walks in any direction and between any 
spots too?




Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Charles D Hixson

FWIW:
A few years (decades?) ago some researchers took PET scans of people who 
were imagining a rectangle rotating (in 3-space, as I remember).  They 
naturally didn't get much detail, but what they got was consistent with 
people applying a rotation algorithm within the visual cortex.  This 
matches my internal reporting of what happens.


Parallel processors optimize things differently than serial processors, 
and this wasn't a stored image.  But it was consistent with an array of 
cells laid out in a rectangle activating, and having that activation 
precess as the image was visualized to rotate. 

Well, the detail wasn't great, and I never heard that it went anywhere 
after the initial results.  (Somebody probably got a doctorate...and 
possibly left to work elsewhere.)  But it was briefly written up in the 
popular science media (New Scientist? Brain-Mind Bulletin?) 

Anyway, there's low-resolution, possibly unconfirmed, evidence that when 
we visualize images, we generate a cell activation pattern within the 
visual cortex that has an activation boundary approximating in shape the 
object being visualized.  (This doesn't say anything about how the 
information is stored.)
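
Purely as an illustration of the kind of activation pattern described 
(this is not the experiment itself, and every number in it is arbitrary), 
the following sketch rotates the corners of a rectangle and marks the 
nearest cells of a small grid, so the marked boundary precesses as the 
angle changes:

import math

SIZE = 21  # odd, so the grid has a centre cell

def rectangle_activation(angle_deg, half_w=7, half_h=3):
    # A SIZE x SIZE grid of 0/1 with the outline of a rotated rectangle marked.
    grid = [[0] * SIZE for _ in range(SIZE)]
    a = math.radians(angle_deg)
    cx = cy = SIZE // 2
    corners = [(-half_w, -half_h), (half_w, -half_h),
               (half_w, half_h), (-half_w, half_h)]
    # Walk each edge of the rectangle in small steps and mark the nearest cell.
    for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
        for t in range(101):
            x = x0 + (x1 - x0) * t / 100.0
            y = y0 + (y1 - y0) * t / 100.0
            xr = x * math.cos(a) - y * math.sin(a)
            yr = x * math.sin(a) + y * math.cos(a)
            grid[cy + round(yr)][cx + round(xr)] = 1
    return grid

for angle in (0, 30, 60):  # successive "frames" of the imagined rotation
    print(f"--- {angle} degrees ---")
    for row in rectangle_activation(angle):
        print("".join("#" if cell else "." for cell in row))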



Mark Waser wrote:
Another way of putting my question/ point is that a picture (or map) 
of your face is surely a more efficient, informational way to store 
your face than any set of symbols - especially if a doctor wants to 
do plastic surgery on it, or someone wants to use it for any design 
purpose whatsoever?


No, actually, most plastic surgery planning programs map your face as 
a limited set of three dimensional points, not an image.  This allows 
for rotation and all sorts of useful things.  And guess where they 
store this data . . . . a relational database -- just like any other 
CAD program.


Images are *not* an efficient way to store data.  Unless they are 
three-dimensional images, they lack data.  Normally, they include a 
lot of unnecessary or redundant data.  It is very, very rare that a 
computer stores any but the smallest image without compressing it.  
And remember, an image can be stored as symbols in a relational 
database very easily as a set of x-coords, y-coords, and colors.


You're stuck on a crackpot idea with no proof and plenty of 
counter-examples.




Re: [agi] The Grounding of Maths

2007-10-14 Thread Charles D Hixson

a wrote:
Are you trying to make an intelligent program or want to launch a 
singularity? I think you are trying to do the former, not the latter.
I think you do not have a plan and are thinking out loud. Chatting 
in this list is equivalent to thinking out loud. Think it all out 
first, before chatting. I will not chat in this list anymore.
If you want to launch a singularity, then do practical. Simply do 
vision/spatial.


I'm working on thoughts for how such a program should be written.  I 
haven't started seriously writing, or settled on a design, but I'm 
trying to create a design.  I don't expect to succeed, but the payoff 
if I do would be that the AI that got created was one that I thought 
well of.  I don't want to attempt to control what it does, but rather 
what it wants to do.  (I.e., the goal structure.)


My hypothesis is that if the AI wants to do something it will eventually 
figure out how to do it.  If it wants to avoid doing something, it will 
figure out how to avoid it.  So what you need to do is create a goal 
system which is powerful, safe, and efficient.  Ideally it should be an 
ESS (evolutionarily stable system), but I don't think I could prove that 
of any feasible real system.


As to a singularity, I think we've already crossed the 
Schwarzschild-boundary analog.  We couldn't give up technology 
without 90% of humanity dying, and the world won't support the current 
population with the current technology, so we've got to keep pushing the 
technology forwards.  Even a zero-population-growth scenario wouldn't 
make the current state stable.  We might be able to stabilize things if 
each couple could only have one child per lifetime for a few 
generations...but the system wouldn't be able to maintain itself long 
enough for that to bring the world down to carrying capacity.  So we end 
up needing BOTH technology AND population control.  (TV is an excellent 
population control device.  Where TV is introduced, populations tend to 
stabilize...at least if there is decent programming.  But it doesn't 
suffice.)  So we're committed.




Re: [agi] The Grounding of Maths

2007-10-13 Thread Charles D Hixson
 
for

the maintenance of photocopiers it is probably not until I get to
photocopiers that anything approaching a concrete image pops into my
mind.

Thus, at least from my personal experience, it seems that many concepts
learned largely through words can be grounded to a significant degree in
other concepts defined largely through words.  Yes, at some level in the
gen/comp pattern hierarchy and in episodic memory all of these concepts
derive at least some of their meaning from visual memories.  But for
seconds at a time that does not seem to be the level of representation my
consciousness is aware of.

Does anybody else on this list have similar episodes of what appears to
be largely verbal conscious thought, or am I (a) out of touch with my own
conscious processes, and/or (b) weird?




Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Charles D Hixson [mailto:[EMAIL PROTECTED]
Sent: Friday, October 12, 2007 7:56 PM
To: agi@v2.listbox.com
Subject: Re: [agi] The Grounding of Maths


But what you're reporting is the dredging up of a memory. What would be
the symbolism if in response to 4 came the question How do you know
that? For me it's visual (and leads directly into the definition of +
as an amalgamation of two disjunct groupings).


Edward W. Porter wrote:


(second sending--roughly 45 minutes after first sending with no
appearance on list)


Why can't grounding from language, syntax, musical patterns, and other
non-visual forms of grounding play a role in mathematical thinking?

Why can't grounding in the form of abstract concepts learned from
hours of thinking about math and its transformations play an important
role.

Because we humans are such multimedia machines, probably most of us
who are sighted have at least some visual associations tainting most
of our concepts -- including most of our mathematical concepts -- at
least somewhere in the gen/comp hierarchies representing them and the
memories and patterns that include them.

I have always considered myself a visual thinker, and much of my AGI
thinking is visual, but if you ask me what is 2 + 2, it is a voice I
hear in my head that says 4, not a picture. It is not necessary that
visual reasoning be the main driving force in reasoning involving a
particular mathematical thought. To a certain extent math is a
language, and it would be surprising if linguistic patterns and
behaviors -- or at least patterns and behaviors partially derived from
them -- didn't play a large role in mathematical thinking.


Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]




Re: [agi] The Grounding of Maths

2007-10-13 Thread Charles D Hixson

Grounding requires sensoria of some sort.  Not necessarily vision.
Spatial grounding requires sensoria that connect spatially coherent signals.

Vision is one form of spatial grounding, but I believe that goniometric 
sensation is even more important...though it definitely needs additional 
sensory modalities of either touch or vision.  Preferably both.  
Goniometric sensation tells your body what configuration it's in.  It's 
simpler.  I suspect that even amoebas possess this sense.


I can imagine an intelligence that could form a spatial grounding given 
nothing but goniometric sensation and LOTS!!! of relevant data, but it 
would probably need some additional information (Don't stick your arm 
through your head!), so I consider it rather unlikely.
If, however, you add touch and pain, then a reasonable spatial map 
becomes a lot more plausible.


Vision is a very useful sense, and we are very visual animals, so 
we think highly of it.  But notice that animals that adapt to life in 
caves tend to discard vision.  It's useful, but it's not the 
be-all-end-all.  Or consider how rats and mice have developed sensitive 
hairs that register how far away an obstacle is, and exquisitely 
sensitive noses for detecting what is somewhere close.


If an intelligence is to live in a computer, perhaps a direct 
sensitivity to port signals might be more useful than an imposed 
interpretation of those signals as vision?  Different peripherals might 
be connected at different times.


I consider it important that the AGI have built into it the capacity to 
deal with spatial models, but I'm uncertain as to how many dimensions it 
should be intrinsically able to handle.  And I'm not at all convinced 
that a spatial interpretation should be hardwired.


a wrote:
Bayesian nets, Copycat, Shruiti, Fair Isaac, and CYC, are a failure, 
probably because of their lack of grounding. According to Occam's 
Razor, the simplest method of grounding visual images is not words, 
but vision.


As Albert Einstein quoted Make everything as simple as possible, but 
not simpler. I interpret the statement as the words are simpler 
than pictures. But encoding vision as words is too simple.


I think that people do not notice visual pictures, visual motion and 
visual text when they read is because they are mostly subconscious. 
Mathematicians do not realize visual calculations because they do it 
in their subconscious.


There is also auditory memory. You memorize the words purely as sounds 
by subvocalization and then visualize it on-the-fly. I don't think 
there is auditory grounding. Auditory is a simply a method of 
efficient storage, without translating it into visual.


You can also memorize the image of text. Then as you understand it, 
you perform OCR.




Re: [agi] The Grounding of Maths

2007-10-12 Thread Charles D Hixson
But what you're reporting is the dredging up of a memory. What would be 
the symbolism if in response to "4" came the question "How do you know 
that?"  For me it's visual (and leads directly into the definition of + 
as an amalgamation of two disjunct groupings).
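
A tiny sketch of that "amalgamation" picture (just an illustration): the 
sum of two numbers is the size of the union of two groupings that have 
nothing in common.

def plus(m, n):
    # 2 + 2 as an amalgamation: tag the members so the groupings stay disjoint.
    group_a = {("a", i) for i in range(m)}
    group_b = {("b", i) for i in range(n)}
    assert group_a.isdisjoint(group_b)
    return len(group_a | group_b)

print(plus(2, 2))  # 4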



Edward W. Porter wrote:


(second sending--roughly 45 minutes after first sending with no 
appearance on list)



Why can't grounding from language, syntax, musical patterns, and other 
non-visual forms of grounding play a role in mathematical thinking?


Why can't grounding in the form of abstract concepts learned from 
hours of thinking about math and its transformations play an important 
role.


Because we humans are such multimedia machines, probably most of us 
who are sighted have at least some visual associations tainting most 
of our concepts -- including most of our mathematical concepts -- at 
least somewhere in the gen/comp hierarchies representing them and the 
memories and patterns that include them.


I have always considered myself a visual thinker, and much of my AGI 
thinking is visual, but if you ask me what is “2 + 2”, it is a voice I 
hear in my head that says “4”, not a picture. It is not necessary that 
visual reasoning be the main driving force in reasoning involving a 
particular mathematical thought. To a certain extent math is a 
language, and it would be surprising if linguistic patterns and 
behaviors -- or at least patterns and behaviors partially derived from 
them -- didn’t play a large role in mathematical thinking.



Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]




Re: [agi] The Grounding of Maths

2007-10-12 Thread Charles D Hixson
They may be doing it with the tongue now.  A few decades ago it was done 
with an electrode mesh on the back.  It worked, but the resolution was 
pretty low.  (IIRC, you don't need to be blind to learn to use this kind 
of mapping device.)


Mike Tintner wrote:


All  v. interesting. Fascinating in fact. Haven't scientists recently 
got s.o. [blind people, I think]  to see with their tongue? Sorry, my 
memory here is fuzzy.


The idea that really excites me here - and boggles my mind  - is the 
question of the interconvertibility of the senses.


The first obvious connection - when you think about it - is that ALL 
senses are spatial.  You look, smell, hear and even taste things (and 
even have INTERNAL kinaesthetic sensations) that are at a certain, 
variable distance from you, and that move closer to or further away 
from you. So all senses/ sensations are probably mapped onto a basic 
neo-geometric model of the world around you.


ALL visual images have a spatial POV - contain info of how far the 
scene is from you, and at what angle it is to you. Hence you see 
photos as Close up, long distance etc and when you see the  POV shot 
in a movie of the hero moving through a building, you get breathy and 
have running sensations too, because you move with the camera .


So - thinking aloud as I go here - not only do all senses have a 
spatial foundation, but they also have a MUSCULAR foundation - they 
are connected to the muscular movements they arouse. (Possibly 
muscular movements are a common denominator of all of them??? Really 
groping there. But as Daniel Wolpert says the primary role of the 
brain is to direct MOVEMENT).


Consciousness, it is important to remember, is an itheatre - i.e. 
you don't just see and sense things, you see yourself seeing them - 
you see i as well as the theatre - the spectator as well as the 
stage - as you look at that monitor screen, you are also seeing and 
sensing bits of yourself watching it - two ends of one theatre.


Literary and even cinematic culture tends naturally to think of 
pictures and sensations as flat things in books or on screens,  and 
doesn't see the whole solid theatre of consciousness.


[Flat, AGI virtual worlds on monitors have a slight problem recreating 
the real solid world of consciousness].


A; Echolocation--just like the brain--isn't solved yet, so you cannot 
claim
that it is unrelated to your definition of vision. Vision can 
simulate spatial intelligence. Light use waves so it can 
reconstruct a model. Similarly, sound use waves, so it can 
reconstruct a model. The difference is just the type of wave. See 
http://en.wikipedia.org/wiki/Human_echolocation#Vision_and_hearing







Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Charles D Hixson
Consider, however, the case of someone who was not only blind, but also 
deaf and incapable of taste, smell, tactile, or goniometric perception.


I would be dubious about the claim that such a person understood 
English.  I might be dubious about any claim that such a person was 
actually intelligent (as opposed to potentially intelligent, or once 
potentially intelligent).


Note that even with the sensations that I mentioned removed, there would 
still exist internal body sensations.  However, those are sensations that 
English, and I hypothesize other natural languages, are very poor at 
describing.  They aren't observable by anyone except the perceiver, so 
it's hard to build a common linguistic framework.


N.B.:  This is NOT the equivalent of those individuals who are locked 
in.  They can perceive (to varying degrees), but cannot act.  This is 
rather the converse.  It's not clear that it's equivalent to persistent 
vegetative state, but I suppose that it might be.  If so, the evidence 
would appear to indicate that they don't lay down many memories when in 
that state.  As such, we could say that their brain essentially shuts of 
the thinking.  (If a thought is thought, and neither results in action 
nor memory trace, has it really been thought? ... Well, yes.  There was 
some energy expended.  When, occasionally, such patients revive, or are 
revived, their brain appears to recover slowly from it's long 
hibernation.  I don't know how thoroughly.  I've only read reports in 
popular science magazines.)


However, the evidence, such as it is, appears to show that even if the 
system is otherwise capable of intelligent thought, the persistent 
absence of external stimuli will disable it.  (I'll grant that this 
isn't good evidence, but it's the only evidence of which I am aware.)



Mark Waser wrote:

Concepts cannot be grounded without vision.


So . . . . explain how people who are blind from birth are 
functionally intelligent.


It is impossible to completely understand natural language without 
vision.


So . . . . you believe that blind-from-birth people don't completely 
understand English?


- - - - -

Maybe you'd like to rethink your assumptions . . . .


- Original Message - From: a [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 4:10 PM
Subject: Re: [agi] Do the inference rules.. P.S.


I think that building a human-like reasoning system without 
/visual/ perception is theoretically possible, but not feasible in 
practice. But how is it human like without vision? Communication 
problems will arise. Concepts cannot be grounded without vision.


It is impossible to completely understand natural language without 
vision. Our visual perception acts like a disambiguator for natural 
language.


To build a human-like computer algebra system that can prove its own 
theorems and find interesting conjectures requires vision to perform 
complex symbolic manipulation. A big part of mathematics is about 
aesthetics. It needs vision to judge which expressions are 
interesting, which are the simplified ones. Finding interesting 
theorems, such as the power rule, the chain rule in calculus 
requires vision to judge that the rules are simple and visually 
appealing enough to be communicated or published.


I think that computer programming is similar. It requires vision to 
program easily. It requires vision to remember the locations of the 
symbols in the language.


Visual perception and visual grounding is nothing except the basic 
motion detection, pattern matching parts of similar images etc. 
Vision /is/ a reasoning system.


IMO, we already /have /AGI--that is, NARS. AGI is just not adapted to 
visual reasoning. You cannot improve symbolic reasoning further 
without other sensory perception.


Edward W. Porter wrote:


Validimir and Mike,

For humans, much of our experience is grounded on sensory 
information, and thus much of our understanding is based on 
experiences and analogies derived largely from the physical world. 
So Mike you are right that for us humans, much of our thinking is 
based on recasting of experiences of the physical world.


But just because experience of the physical world is at the center 
of much of human thinking, does not mean it must be at the center of 
all possible AGI thinking -- any more than the fact that for 
millions of years the earth and the view from it was at the center 
of our thinking and that of our ancestors means the earth and the 
view from it must forever be at the center of the thinking of all 
intelligences throughout the universe.


In fact, one can argue that for us humans, one of our most important 
sources of grounding – emotion -- is not really about the physical 
world (at least directly), but rather about our own internal state. 
Furthermore, multiple AGI projects, including Novamente and Joshua 
Blue are trying to ground their systems from experience in virtual 
words. Yes those virtual worlds try to simulate 

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Mike Tintner wrote:

Charles H:as I understand it, this still wouldn't be an AGI, but merely a
categorizer.

That's my understanding too. There does seem to be a general problem 
in the field of AGI, distinguishing AGI from narrow AI - 
philosophically. In fact, I don't think I've seen any definition of 
AGI or intelligence that does.



But *do* notice that the terminal nodes are uninterpreted.  This means 
that they could be assigned, e.g., procedural values.
Because of this, even though the current design (as I understand it) of 
NARS is purely a categorizer, it's not limited in what its extensions 
and embedding environment can be.  It would be a trivial extension to 
allow terminal nodes to have a type, so that what was done when a 
terminal node was generated could depend upon that type.
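
A minimal sketch of that extension (my illustration, not NARS code; the 
type names and handlers are invented): the reasoner still treats every 
terminal as an opaque token, while the embedding environment dispatches 
on the type tag whenever a terminal is generated.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class Terminal:
    label: str
    kind: str = "text"   # e.g. "text", "procedure", "image"

HANDLERS: Dict[str, Callable[[Terminal], None]] = {
    "text":      lambda t: print(f"plain term: {t.label}"),
    "procedure": lambda t: print(f"would invoke routine {t.label}()"),
    "image":     lambda t: print(f"would fetch image data for {t.label}"),
}

def on_terminal_generated(term: Terminal) -> None:
    # Called by the surrounding system whenever inference produces a terminal.
    HANDLERS.get(term.kind, HANDLERS["text"])(term)

for t in (Terminal("raven"),
          Terminal("ring_alarm", "procedure"),
          Terminal("map_tile_42", "image")):
    on_terminal_generated(t)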


(There's a paper called wang.roadmap.pdf that I *must* get around to 
reading!)


P.S.: In the paper on computations it seems to me that items of high 
durability should not be dropped from the processing queue even if it 
becomes full of higher priority tasks.  There should probably be a 
postponed tasks location where things like garbage collection and 
database sanity checking and repair can be saved to be done during 
future idle times.
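
A minimal sketch of that suggestion (my illustration; the capacity and 
durability threshold are invented): when the bounded queue overflows, the 
lowest-priority item is parked in a postponed list if it is durable 
enough, rather than being dropped, and the postponed work is drained 
during idle time.

import heapq

class TaskQueue:
    # Bounded priority queue that postpones, rather than drops, durable tasks.
    def __init__(self, capacity, durability_floor=0.8):
        self.capacity = capacity
        self.durability_floor = durability_floor
        self.active = []      # min-heap ordered by priority
        self.postponed = []   # durable work saved for idle time

    def add(self, priority, durability, task):
        heapq.heappush(self.active, (priority, durability, task))
        if len(self.active) > self.capacity:
            lowest = heapq.heappop(self.active)   # lowest-priority item
            if lowest[1] >= self.durability_floor:
                self.postponed.append(lowest)     # keep it for later
            # otherwise it is simply forgotten

    def run_idle_time(self):
        # During "dreamtime", drain the postponed work.
        while self.postponed:
            _, _, task = self.postponed.pop()
            task()

q = TaskQueue(capacity=2)
q.add(0.9, 0.1, lambda: print("urgent inference step"))
q.add(0.8, 0.2, lambda: print("another inference step"))
q.add(0.1, 0.9, lambda: print("database sanity check"))  # low priority, durable
q.run_idle_time()  # only the durable sanity check was postponed; it runs now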





Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Linas Vepstas wrote:

On Sun, Oct 07, 2007 at 12:36:10PM -0700, Charles D Hixson wrote:
  

Edward W. Porter wrote:


   Fred is a human
   Fred is an animal
  
You REALLY can't do good reasoning using formal logic in natural 
language...at least in English.  That's why the invention of symbolic 
logic was so important.



I suppose this was pounded to death in the rest of the thread, 
(which I haven't read) but still: syllogistic reasoning does occur 
in hypothesis formation, and thus, learning:


-- maybe humans are animals?  What evidence do I have to support this?
-- maybe animals are human? Can that be?

If Fred has an artificial heart, then perhaps he isn't simply 
just a special case of an animal.


If some pig has human organs in it, then perhaps its an animal that
is human.
 
Neither syllogistic deduction is purely false in the real world; 
there is an it depends aspect to it.  learning AI would chalk it

up as a maybe, and see is this reasoning leads anywhere. I beleive
Pei Wang's NARS system tries to do this; it seems more structured 
than the fuzzy logic type approaches that antedate it.


--linas
  
For me the sticking point was that we were informed that we didn't know 
anything about anything outside of the framework presented.  We didn't 
know what a Fred was, or what a human was, or what an animal was.  A 
Fred could be an audio frequency of 440 Hz for all we knew.  And telling 
us that he was a human didn't rule that out, because we didn't know what 
a human was either.


Your extension questions make sense if we aren't dealing with a tabula 
rasa.  But we were explicitly told that we were, so the answers to your 
questions would have been ??? and none and ??? and no evidence.


Your hypothetical extensions are also only considerable in the context 
of extensive knowledge that was specified as unknown.


OTOH, the context was really about NARS.  (I feel that my objections 
still apply, but not as strongly.  If I had understood what was being 
discussed as well then as I do now, I would have commented less strongly.)




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Mark Waser wrote:
Thus, as I understand it, one can view all inheritance statements as 
indicating the evidence that one instance or category belongs to, and 
thus is “a child of” another category, which includes, and thus can be 
viewed as “a parent” of the other.
Yes, that is inheritance as Pei uses it. But are you comfortable with 
the fact that I am allowed to drink alcohol is normally both the 
parent and the child of I am an adult  (and vice versa)? How about 
the fact that most ravens are black is both the parent and child of 
this raven is white (and vice versa)?
Since inheritance relations are transitive, the resulting hierarchy of 
categories involves nodes that can be considered ancestors (i.e., 
parents, parents of parents, etc.) of others and nodes that can be 
viewed as descendents (children, children of children, etc.) of others.
And how often do you really want to do this with concepts like the 
above -- or when the evidence is substantially less than unity?

And loops and transitivity are really ugly . . . .
NARS really isn't your father's inheritance.
A definite point, and one that argues against my model of a prototype-based 
computer language.  I prefer to think in lattice structures rather 
than in directed graphs. Another problem is the matter of probability 
and stability values being attached to the links. I definitely need a 
better model.


To continue your point, just because A --> B holds at one point in time 
doesn't ensure that it will also hold (with a probability above any 
particular threshold) at a later point.  Links, especially low-stability 
links, get re-evaluated, whereas prototype descendants maintain their 
ancestry.




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Mike Tintner wrote:

Charles,

I don't see - no doubt being too stupid - how what you are saying is 
going to make a categorizer into more than that - into a system that 
can, say, go on to learn various logic's, or how to build a house or 
other structures or tell a story -  that can be a *general* intelligence.
I wouldn't say you were being stupid.  Nobody knows how to build an AGI 
yet.  And I'm envisioning the current system of NARS as only a 
component, albeit an important component.  (I don't know how Pei Wang is 
envisioning it.)


But if you study the input system from the eye (overview...I have no 
detailed knowledge), you discover that the initial sensory stimuli are 
split into several streams that are processed separately (possibly 
categorized) and then recombined.  Sometimes something very important 
will jump out of the system, however, and cause rapid reactions that the 
consciousness never becomes aware of noticing before acting on.  (N.B.:  
This being aware of before acting on is often-to-usually an 
hallucination.) 

Clearly some categorizer has noticed that something was VERY important.  
As such, apparently some kind of categorizer is very important.  My 
suspicion is that most categorizers work with small databases in 
restricted domains, acting as black-box functions...though function 
isn't the right word for something that can return multiple results.


What struck me about the overall discussion of NARS' logical 
capabilities, firstly, was that they all depended -  I think you may 
have made this point - on everyone's *common sense* interpretations of 
inheritance and other relations and the logic generally. In other 
words, any logic is - and always will be - a very *secondary*  sign 
system for both representing and reasoning about the world. It is a 
highly evolved derivative of more basic, common sense systems in the 
brain - and, like language itself, has continually to be made sense 
of by the brain. (That's why I would suspect that all of you, however 
versed in logic you are, will, while looking at those logical 
propositions, go fuzzy from time to time - when your brain can't for a 
while literally make sense of them).


A hierarchy of abstract/ concrete sign systems, grounded in the 
senses, is - I believe - essential for any AGI and general learning - 
and, NARS,  AFAICT, lacks that.


Secondly, I don't see how what you are saying will give NARS the 
ability to *create* new rules and strategies for its activities, (that 
are not derived from existing rules). AFAICT it simply applies logic 
and follows rules, even though they include rules for modifying rules. 
It cannot, like Pei or Bayes have done, create or fundamentally extend 
logics. If so, it is still narrow AI, not AGI.


(There is, I repeat, a major need for a philosophical distinction 
between AI and AGI  - in talking about the area of the last paragraph, 
I think we all flounder and grope for terms).




Mike Tintner wrote:
Charles H:as I understand it, this still wouldn't be an AGI, but 
merely a

categorizer.

That's my understanding too. There does seem to be a general problem 
in the field of AGI, distinguishing AGI from narrow AI - 
philosophically. In fact, I don't think I've seen any definition of 
AGI or intelligence that does.


But *do* notice that the terminal nodes are uninterpreted.  This 
means that they could be assigned, e.g., procedural values.
Because of this, even though the current design (as I understand it) 
of NARS is purely a categorizer, it's not limited in what it's 
extensions and embedding environment can be.  It would be a trivial 
extension to allow terminal nodes to have a type, and that what was 
done when a terminal node was generated could depend upon that type.


(There's a paper called wang.roadmap.pdf that I *must* get around 
to reading!)


P.S.: In the paper on computations it seems to me that items of high 
durability should not be dropped from the processing queue even if it 
becomes full of higher priority tasks.  There should probably be a 
postponed tasks location where things like garbage collection and 
database sanity checking and repair can be saved to be done during 
future idle times.





Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-10 Thread Charles D Hixson

Generally, yes, you know more.
In this particular instance we were told the example was all that was known.

Linas Vepstas wrote:

On Wed, Oct 10, 2007 at 01:06:35PM -0700, Charles D Hixson wrote:
  
For me the sticking point was that we were informed that we didn't know 
anything about anything outside of the framework presented.  We didn't 
know what a Fred was, or what a human was, or what an animal was.  



?? Well, no. In NARS, you actually know a lot more; you know 
the relative position of each statement in the lattice of posets, 
and that is actually a very powerful bit of knowledge. From this, 
you can compute a truth value, and evidence, for the statements.


NARS tells you how to  combine the truth values. So, while you
might not explicitly know what Fred is, you do have to compute
a truth value for fred is an animal and fred is a human. 
NARS then tells you what the corresponding evidence is for

an animal is a human and a human is an animal (presumably
the evidence is weak, and strong, depending on the relation 
of these posets within the universe.)


In measure-theoretic terms, the truth value is the measure of 
the size of the poset relative the size of the universe.  NARS

denotes this by the absolute value symbol. The syllogism rules
suggest how the measures of the various intersections and unions
of the posets need to be combined.

I presume that maybe there is some theorem that shows that 
the NARS system assigns evidence values that are consistent

with the axioms of measure theory. Seems reasonable to me;
I haven't thought it through, and I haven't read more in that
direction.

--linas



Re: [agi] Religion-free technical content

2007-10-08 Thread Charles D Hixson

Derek Zahn wrote:

Richard Loosemore:

  a...
I often see it assumed that the step between the first AGI being built 
(which I interpret as a functioning model showing some degree of 
generally-intelligent behavior) and god-like powers dominating the 
planet is a short one.  Is that really likely?
Nobody knows the answer to that one.  The sooner it is built, the less 
likely it is to be true.  As more accessible computing resources become 
available, hard takeoff becomes more likely.


Note that this isn't a quantitative answer.  It can't be.  Nobody really 
knows how much computing power is necessary for an AGI.  In one scenario, 
it would see the internet as its body, and wouldn't even realize that 
people existed until very late in the process.  This is probably one of 
the scenarios that require least computing power for takeoff, and allow 
for fastest spread.  Unfortunately, it's also not very likely to be a 
friendly AI.  It would likely feel about people as we feel about the 
bacteria that make our yogurt.  They can be useful to have around, but 
they're certainly not one's social equals.  (This mode of AI might well 
be social, if, say, it got socialized on chat-lines and newsgroups.  But 
deriving the existence and importance of bodies from those interactions 
isn't a trivial problem.)


The easiest answer isn't necessarily the best one.  (Also note that this 
mode of AI could very likely be developed by a govt. as a weapon for 
cyber-warfare.  Discovering that it was a two-edged sword with a mind of 
its own could be a very late-stage event.)





Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Charles D Hixson

a wrote:

Linas Vepstas wrote:


...
The issue is that there's no safety net protecting against avalanches 
of unbounded size. The other issue is that its not grains of sand, its
people.  My bank-account and my brains can insulate me from small 
shocks.
I'd like to have protection against the bigger forces that can wipe 
me out.
I am skeptical that economies follow the self-organized criticality 
behavior.
There aren't any examples. Some would cite the Great Depression, but 
it was caused by the malinvestment created by Central Banks. e.g. The 
Federal Reserve System. See the Austrian Business Cycle Theory for 
details.

In conclusion, economics is a bad analogy with complex systems.
OK.  I'm skeptical that a Free-Market economy has ever existed.  
Possibly the agora of ancient Greece came close.  The Persians thought 
so: "Who are these people who have special places where they go to cheat 
each other?"  However, I suspect that a closer look would show that 
these, also, were regulated to some degree by an external power.  (E.g., 
threat of force from the government if the customers rioted.)




Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Charles D Hixson
OK.  I've read the paper, and don't see where I've made any errors.  It 
looks to me as if NARS can be modeled by a prototype-based language with 
operators for "is an ancestor of" and "is a descendant of".  I do have 
trouble with the language terms that you use, though admittedly they 
appear to be standard for logicians (to the extent that I'm familiar 
with their dialect).  That might well not be a good implementation, but 
it appears to be a reasonable model.


To me a model can well be dynamic and experience-based.  In fact I 
wouldn't consider a model very intelligent if it didn't either itself 
adapt itself to experience, or it weren't embedded in a matrix which 
adapted it to experiences.  (This doesn't seem to be quite the same 
meaning that you use for model.  Your separation of the rules of 
inference, the rational faculty, and the model as a fixed and unchanging 
condition doesn't match my use of the term.  I might pull out the rules 
of inference as separate pieces and stick them into a datafile, but 
datafiles can be changed, if anything, more readily than programs...and 
programs are readily changeable.  To me it appears clear that much of 
the language would need to be interpretive rather than compiled.  One 
should pre-compile what one can for the sake of efficiency, but with the 
knowledge that this sacrifices flexibility for speed.)


I still find that I am forced to interpret the inheritance relationship 
as an "is a child of" relationship.  And I find the idea of continually 
calculating the powerset of inheritance relationships unappealing.  
There may not be a better way, but if there isn't, then AGI can't move 
forward without vastly more powerful machines.  Probably, however, the 
calculations could be shortcut by increasing the local storage a bit.  
If each node maintained a list of parents and children, and a count of 
descendants and ancestors, it might suffice.  This would increase storage 
requirements, but drastically cut calculation and still enable the 
calculation of confidence.  Updating the counts could be saved for 
dreamtime.  This would imply that during the early part of learning 
sleep would be a frequent necessity...but it should become less 
necessary as the ratio of extant knowledge to new knowledge learned 
increased.  (Note that in this case the amount of new knowledge would be 
a measured quantity, not an arbitrary constant.)
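
A bare-bones sketch of that bookkeeping (my illustration, not anything 
from the NARS papers): each node keeps only its direct parents and 
children, plus cached ancestor and descendant counts that are refreshed 
in a single idle-time ("dreamtime") sweep rather than on every update.

class Node:
    def __init__(self, name):
        self.name = name
        self.parents = set()
        self.children = set()
        self.ancestor_count = 0    # cached; refreshed only during "dreamtime"
        self.descendant_count = 0

def link(child, parent):
    child.parents.add(parent)
    parent.children.add(child)

def _reachable(node, attr):
    # All nodes reachable by repeatedly following `attr` (parents or children).
    seen, frontier = set(), [node]
    while frontier:
        for nxt in getattr(frontier.pop(), attr):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def dreamtime(nodes):
    # Idle-time pass: recompute every cached count in one sweep.
    for n in nodes:
        n.ancestor_count = len(_reachable(n, "parents"))
        n.descendant_count = len(_reachable(n, "children"))

fred, human, animal = Node("Fred"), Node("human"), Node("animal")
link(fred, human)
link(human, animal)
dreamtime([fred, human, animal])
print(fred.ancestor_count, animal.descendant_count)  # 2 2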


I do feel that the limited sensory modality of the environment (i.e., 
reading the keyboard) makes AGI unlikely to be feasible.  It seems to me 
that one of the necessary components of true intelligence is integrating 
multi-modal sensory experience.  This doesn't necessarily mean vision 
and touch, but SOMETHING.  As such I can see NARS (or some similar 
system) as a component of an AGI, but not as a core component (if such 
exists).  OTOH, it might develop into something that would exhibit 
consciousness.  But note that consciousness appears to be primarily an 
evaluative function rather than a decision making component.  It logs 
and evaluates decisions that have been made, and maintains a delusion 
that it made them, but they are actually made by other processes, whose 
nature is less obvious.  (It may not actually evaluate them, but I 
haven't heard of any evidence to justify denying that, and it's 
certainly a good delusion.  Still, were I to wager, I'd wager that it 
was basically a logging function, and that the evaluations were also 
made by other processes.)  Consciousness appears to have developed to 
handle those functions that required serialization...and when language 
came along, it appeared in consciousness, because the limited bandwidth 
available necessitated serial conversion.



Pei Wang wrote:

Charles,

I fully understand your response --- it is typical when people
interpret NARS according to their ideas about how a formal logic
should be understood.

But NARS is VERY different. Especially, it uses a special semantics,
which defines truth and meaning in a way that is fundamentally
different from model-theoretic semantics (which is implicitly assumed
in your comments everywhere), and I believe is closer to how truth
and meaning are treated in natural languages (so you may end up like
it).

As Mark suggested, you may want to do some reading first (such as
http://nars.wang.googlepages.com/wang.semantics.pdf), and after that
the discussion will be much more fruitful and efficient. I'm sorry
that I don't have a shorter explanation to the related issues.

Pei

On 10/8/07, Charles D Hixson [EMAIL PROTECTED] wrote:
  

Pei Wang wrote:


Charles,

What you said is correct for most formal logics formulating binary
deduction, using model-theoretic semantics. However, Edward was
talking about the categorical logic of NARS, though he put the
statements in English, and omitted the truth values, which may have caused 
some misunderstanding.

Pei

On 10/7/07, Charles D Hixson [EMAIL PROTECTED] wrote:

  

Edward W. Porter wrote

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Charles D Hixson

Mike Tintner wrote:
Vladimir:  In experience-based learning there are two main problems 
relating to

knowledge acquisition: you have to come up with hypotheses and you
have to assess their plausibility. ...you create them based on various
heuristics.


How is this different from narrow AI? It seems like narrow AI - does 
Nars have the ability to learn unprogrammed, or invent, totally new 
kinds of logic? Or kinds of algebra?


In fact, the definitions of Nars:

NARS is intelligent in the sense that it is adaptive, and works 
with insufficient

knowledge and resources.

By adaptive, we mean that NARS uses its experience (i.e., the 
history of its


interaction with the environment) as the guidance of its inference 
activities.


For each question, it looks for an answer that is most consistent with 
its


experience (under the restriction of available resources).

define narrow AI systems - which are also intelligent, adaptive, 
work with insufficient knowledge and resources and learn from 
experience.  There seems to be nothing in those definitions which is 
distinctive to AGI.


With a sufficient knowledge base, which would require learning, NARS 
looks as if it could categorize that which it knows about, and make 
guesses as to how certain pieces of information are related to other 
pieces of information.


An extended version should be adaptive in the patterns that it recognizes.

OTOH, I don't recognize any features that would enable it to take 
independent action, so I suspect that it would be but one module of a 
more complex system. 
N.B.:  I'm definitely no expert at NARS; I've only read two of the 
papers and a few arguments.  Features that I didn't notice could well be 
present.  And they could certainly be in the planning stage.


I'm a bit hesitant about the theoretical framework, as it appears 
computationally expensive.  Still, implementation doesn't necessarily 
follow theory, and theory can jump over the gnarly bits, leaving them 
for implementation.  It's possible that lazy evaluation and postponed 
stability calculations could make things a LOT more efficient.  These 
probably aren't practical until the database grows to a reasonable size, 
however.


But as I understand it, this still wouldn't be an AGI, but merely a 
categorizer.  (OTOH, I only read two of the papers.  These could just be 
the papers that cover the categorizer.  Plausibly other papers cover 
other aspects.)


N.B.:  The current version of NARS, as described, only parses a 
specialized language covering topics of inheritance of characteristics.  
As such, that's all that was covered by the paper I most recently read.  
This doesn't appear to be an inherent limitation, as the terminal nodes 
are primitive text and, as such, could, in principle, invoke other 
routines, or refer to the contents of an image.  The program would 
neither know nor care.





Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-07 Thread Charles D Hixson

Edward W. Porter wrote:


So is the following understanding correct?

If you have two statements

Fred is a human
Fred is an animal

And assuming you know nothing more about any of the three
terms in both these statements, then each of the following
would be an appropriate induction

A human is an animal
An animal is a human
A human and an animal are similar

It would only then be from further information that you
would find the first of these two inductions has a larger
truth value than the second and that the third probably
has a larger truth value than the second..

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

Actually, you know less than you have implied. 
You know that there exists an entity referred to as Fred, and that this 
entity is a member of both the set human and the set animal.  You aren't 
justified in concluding that any other member of the set human is also a 
member of the set animal.  And conversely.  And the only argument for 
similarity is that the intersection isn't empty.
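
In Python set terms (a toy illustration, with invented extra members): 
membership in both sets is all we were given, neither containment follows 
from it, and the only similarity evidence is the non-empty intersection.

human = {"Fred", "Alice"}
animal = {"Fred", "Rover"}

print("Fred" in human and "Fred" in animal)  # True: all we were actually told
print(human <= animal, animal <= human)      # False False: neither "is a" follows
print(human & animal)                        # {'Fred'}: the only overlap
# One crude similarity measure: intersection size over union size.
print(len(human & animal) / len(human | animal))  # 0.333...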


E.g.:
Fred is a possessor of green hair.   (He dyed his hair.)
Fred is a possessor of jellyfish DNA. (He was a subject in a molecular 
biology experiment.  His skin would glow green under proper stimulation.)


Now admittedly these sentences would usually be said in a different form 
(i.e., Fred has green hair), but they are reasonable translations of 
an equivalent sentence (Fred is a member of the set of people with 
green hair).


You REALLY can't do good reasoning using formal logic in natural 
language...at least in English.  That's why the invention of symbolic 
logic was so important.


If you want to use the old form of syllogism, then at least one of the 
sentences needs to have either an existential or universal quantifier.  
Otherwise it isn't a syllogism, but just a pair of statements.  And all 
that you can conclude from them is that they have been asserted.  (If 
they're directly contradictory, then you may question the reliability of 
the asserter...but that's tricky, as often things that appear to be 
contradictions actually aren't.)


Of course, what this really means is that logic is unsuited for 
conversation... but it also implies that you shouldn't program your 
rule-sets in natural language.  You'll almost certainly either get them 
wrong or be ambiguous.  (Ambiguity is more common, but it's not 
exclusive of wrong.)





Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Charles D Hixson

Matt Mahoney wrote:

--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

  

...


So you are arguing that RSI is a hard problem?  That is my question. 
Understanding software to the point where a program could make intelligent

changes to itself seems to require human level intelligence.  But could it
come sooner?  For example, Deep Blue had less chess knowledge than Kasparov,
but made up for it with brute force computation.  In a similar way, a less
intelligent agent could try millions of variations of itself, of which only a
few would succeed.  What is the minimum level of intelligence required for
this strategy to succeed?

-- Matt Mahoney, [EMAIL PROTECTED]
  
Recursive self-improvement, where the program is required to understand 
what it's doing, seems a very hard problem.
If it doesn't need to understand, but merely optimize some function, 
then it's only a hard problem...with a slow solution.
N.B.:  This may be the major difference between evolutionary programming 
and seed AI.
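
A toy sketch of the "merely optimize some function" path (my 
illustration; the fitness function and all parameters are invented): 
blind mutation plus keep-the-best improves a candidate without the system 
ever understanding why a variant works.

import random

random.seed(0)

def fitness(params):
    # Stand-in for "how well does this variant perform"; peak is at (3, -2).
    x, y = params
    return -((x - 3) ** 2 + (y + 2) ** 2)

def evolve(generations=200, mutation=0.5):
    best = (random.uniform(-10, 10), random.uniform(-10, 10))
    for _ in range(generations):
        candidate = (best[0] + random.gauss(0, mutation),
                     best[1] + random.gauss(0, mutation))
        if fitness(candidate) > fitness(best):  # blind selection, no insight
            best = candidate
    return best

print(evolve())  # wanders toward (3, -2) without ever "understanding" why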


We appear, in our history, to have evolved many approaches to making 
evolutionary algorithms work better (for the particular classes of 
problem that we faced...bacteria faced different problems and evolved 
different solutions).  The most recent attempt has involved 
understanding *parts* of what we are doing.  But do note that not only 
chimpanzees, but also most humans, have extreme difficulty in acting in 
their perceived long term best interest.   Ask any dieter.  Or ask a 
smoker who's trying to quit.


Granted that an argument from "these are the solutions found by 
evolution" isn't theoretically satisfying, but evolution has a pretty 
good record of finding good enough solutions.  Probably the best that 
can be achieved without understanding.  (It's also bloody and 
inefficient...but no better solution is known.)





Re: [agi] Pure reason is a disease.

2007-06-17 Thread Charles D Hixson

Eric Baum wrote:

Josh On Saturday 16 June 2007 07:20:27 pm Matt Mahoney wrote:
  

--- Bo Morgan [EMAIL PROTECTED] wrote:
...

...
I claim that it is the very fact that you are making decisions about
whether to suppress pain for higher goals that is the reason you are
conscious of pain. Your consciousness is the computation of a
top-level decision making module (or perhaps system). If you were not
making decisions weighing (nuanced) pain against higher goals, 
you would not be conscious of the pain.


Josh Even a simplistic modular model of mind can allow for pain
Josh signals to the various modules which can be different in kind
Josh depending on which module they are reporting to.

Josh Josh
  

Consider a terminal cancer patient.
It's not the actual weighing that causes consciousness of pain, it's the 
implementation which normally allows such weighing.  This, in my 
opinion, *is* a design flaw.  Your original statement is a more useful 
implementation.  When it's impossible to do anything about the pain, one 
*should* be able to turn it off.  Unfortunately, this was not 
evolved.  After all, you might be wrong about not being able to do 
anything about it, so we evolved such that pain beyond a certain point 
cannot be ignored.  (Possibly some with advanced training and several 
years devoted to the mastery of sensation [e.g. yoga practitioners] may 
be able to ignore such pain.  I'm not convinced, and would consider 
experiments to obtain proof to be unethical.  And, in any case, they 
don't argue against my point.)





Re: [agi] Books

2007-06-09 Thread Charles D Hixson

Mark Waser wrote:
 The problem of logical reasoning in natural language is a pattern recognition
 problem (like natural language recognition in general).  For example:

 - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
 - Cities have tall buildings.  New York is a city.  Therefore New York has
 tall buildings.
 - Summers are hot.  July is in the summer.  Therefore July is hot.

 After many examples, you learn the pattern and you can solve novel logic
 problems of the same form.  Repeat for many different patterns.

Your built-in assumptions make you think that.  There are NO readily 
obvious patterns in the examples you gave, except one obvious example of 
standard logical inference.  Note:


* In the first clause, the only repeating words are green and
  Kermit.  Maybe I'd let you argue the plural of frog.
* In the second clause, the only repeating words are tall
  buildings and New York.  I'm not inclined to give you the plural
  of city.  There is also the minor confusion that tall buildings
  and New York are multiple words.
* In the third clause, the only repeating words are hot and July. 
  Okay, you can argue summers.

* Across sentences, I see a regularity between the first and the
  third of As are B.  C is A.  Therefore, C is B.

Looks far more to me like you picked out one particular example of 
logical inference and called it pattern matching. 
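
(For concreteness, a minimal sketch in Python of that single inference 
pattern, "As are B.  C is A.  Therefore C is B.", treated as pure surface 
pattern matching.  The regular expression is illustrative only and 
handles nothing beyond this toy sentence form.)

    import re

    # Matches only the toy surface form "Xs are Y.  Z is a X."
    PATTERN = re.compile(r"^(\w+)s are (\w+)\.\s+(\w+) is a \1\.$", re.IGNORECASE)

    def infer(text):
        m = PATTERN.match(text)
        if m:
            a, b, c = m.groups()
            return "Therefore %s is %s." % (c, b)
        return None

    print(infer("Frogs are green.  Kermit is a frog."))   # Therefore Kermit is green.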
 
I don't believe that your theory works for more than a few very small, 
toy examples.  Further, even if it did work, there are so many 
patterns that approaching it this way would be computationally 
intractable without a lot of other smarts.
 

It's worse than that.  "Frogs are green." is a generically true 
statement that isn't true in every particular case.  E.g., some frogs 
are yellow, red, and black without any trace of green on them that I've 
noticed.  Most frogs may be predominately green (e.g., leopard frogs are 
basically green, but with black spots).


Worse, although Kermit is identified as a frog, Kermit is actually a 
cartoon character.  As such, Kermit can be run over by a tank without 
being permanently damaged.  This is not true of actual frogs.


OTOH, there *IS* pattern matching going on.  It's just not evident at 
the level of structure (or rather, only partially evident).


Were I to rephrase the sentences more exactly they would go something 
like this:

Kermit is a representation of a frog.
Frogs are typically thought of as being green.
Therefore, Kermit will be displayed as largely greenish in overall hue, 
to enhance the representation.


Note that one *could* use similar logic to deduce that Miss Piggy is 
more than 10 times as tall as Kermit.  This would be incorrect.  Thus, 
what is being discussed here is not mandatory characteristics, but 
representational features selected to harmonize an image with both its 
setting and its internal symbolism.  As such, only artistically selected 
features are chosen for highlighting, and other features are either 
suppressed or overridden by other artistic choices.  What is being 
created is a dreamscape rather than a realistic image.


On to the second example.  Here again one is building a dreamscape, 
selecting harmonious imagery.  Note that it's quite possible to build a 
dreamscape city where there are no tall buildings...or only one.  
(Think of the Emerald City of Oz.  Or, for that matter, of the Sunset 
District of San Francisco.  Facing in many directions you can't see a 
single building more than two stories tall.)  But it's also quite 
realistic to imagine tall buildings.  By specifying tall buildings, one 
filters out a different set of harmonious city images.


What these patterns do is enable one to filter out harmonious images, 
etc. from the databank of past experiences.




Re: [agi] What would motivate you to put work into an AGI project?

2007-05-04 Thread Charles D Hixson

What would motivate you to put work into an AGI project?

1) A reasonable point of entry into the project

2) The project would need to be FOSS, or at least communally owned.  
(FOSS for preference.)  I've had a few bad experiences where the project 
leader ended up taking everything, and don't intend to have another.


3)  The project would need to adopt a multiplex approach.  I don't 
believe in single solutions.  AI needs to represent things in multiple 
ways, and to deal with those ways in quasi-independent channels.  My 
general separation is:  Goals (desired end states), Desires (desired 
next states), Models, and logic.  I recognize that everything is 
addressed by a mixture of these approaches...but people seem to use VERY 
different mixtures (both from person to person and in the same person 
from situation to situation).


4) I'd need to have a belief that the project had a sparkplug.  
Otherwise I might as well keep fumbling around on my own.  Projects need 
someone to inspire the troops.


5) There would need to be some way to communicate with the others on the 
project that didn't involve going to a restaurant.  (I'm on a diet, and 
going to restaurants frequently is a really BAD idea.)  (N.B.:  One 
project I briefly joined had a chat list...which might have worked well 
if it had actually been the means of communication.  Turned out that the 
inner circle met frequently at a restaurant and rarely visited the 
chat room.  But I think a mailing list or a newsgroup is a better choice 
anyway.  [The project was successful, but I think that the members on 
the chat group were mainly a diversion from the actual work of the 
project.])


6)  Things would need to be reasonably documented.  This comes in lots 
of forms, but for a work in progress there's a lot to be said for 
comments inserted into the code itself, and automatically extracted to 
create documentation.  (Otherwise I prefer the form that Python 
uses...but nobody else does that as well.)


7) LANGUAGES:  Using a language that I felt not completely unsuitable.   
After LOTS of searching I've more or less settled on Java as the only 
wide-spread language with decent library support that can run 
distributed systems with reasonable efficiency.  There are many other 
contenders (e.g., C, C++, Fortran, Alice, Erlang, and D each have their 
points), and I don't really *like* Java, but  Java, C, and C++ appear to 
be the only widely used languages that have the ability to run across a 
multi-processor with reasonable efficiency.  (And even there the 
techniques used can hardly be called widespread.)
7a)  Actually C and C++ can be suitable if there are appropriate 
libraries to handle such things as garbage collection, and protocols for 
how to save persistent data and then restore it later.  But I still 
don't like the way they make free use of wild pointers.
7b)  I wonder to what extent the entire project needs to be in the same 
language.  This does make understanding things easier, as long as it's 
small enough that someone can understand everything at a low level, or 
if the entity should ever want to understand itself.  But there are 
plausible arguments for writing things in a rapid development language, 
such as Python or Ruby, and then only translating the routines that 
later need to be translated for efficiency.  (If only those languages 
could execute across multiple processors!)




Re: [agi] rule-based NL system

2007-05-04 Thread Charles D Hixson

J. Storrs Hall, PhD. wrote:

On Wednesday 02 May 2007 15:08, Charles D Hixson wrote:
  

Mark Waser wrote:


... Machines will know
the meaning of text (i.e. understand it) when they have a coherent
world model that they ground their usage of text in.
  

...
But note that in this case world model is not a model of the same
world that you have a model of.  



After reading the foregoing discussions of subjects such as intelligence, 
language, meaning, etc, it is quite clear to me that the various members of 
this list do not have models of the same world. This is entirely appropriate: 
consider each of us as a unit in a giant GA search for useful ways of 
thinking about reality...


Josh
  
Well, that's true.  E.g., when I was 3 I had one eye patched for 3 months 
in a vain attempt to cure amblyopia.  This caused me to be relatively 
detached from visual imagery, and more attached to kinesthetic imagery. 
But still, all normal people have a world model where, when their eyes 
are covered, they can't see, but where the eyes cannot be removed and 
then replaced.  So the differences between the world models of normal 
humans are relatively small, while the world models learned by AGIs will 
differ far more.  This is even true in the case of AGIs which are raised 
with the intention of having them mature approximately normally.  
The attempt is essentially futile.  Humans will come to resemble AGIs 
before AGIs come to resemble people.  (Admittedly, though, the AGIs that 
people eventually come to resemble won't bear much resemblance to the 
early-model AGIs.)





Re: [agi] rule-based NL system

2007-05-02 Thread Charles D Hixson

Mark Waser wrote:
What is meaning to a computer?  Some people would say that no 
machine can

know the meaning of text because only humans can understand language.


Nope.  I am *NOT* willing to do the Searle thing.  Machines will know 
the meaning of text (i.e. understand it) when they have a coherent 
world model that they ground their usage of text in.

...
But note that in this case "world model" is not a model of the same 
world that you have a model of.  They will definitely have different 
sensors and different goals.  E.g., they might be directly sensitive to 
system signals and totally insensitive to kinesthetics.  I.e., they 
might be able to directly sense a mouse position, or an i/o port state, 
but lack any intrinsic binding of those to a kinesthetic model.   An 
optional binding would be something else, of course, but imagine such a 
machine with access to a midi-card and a mic.  Is there any particular 
reason to presume that it would find harmonious the same sounds that you 
do?  (Well, yes.  Harmony is a mathematical property.  Whether it would 
desire harmony is less clear.  See Stockhausen and John Cage.)


Now an early-stage AI of this variety would not have a world model that 
corresponded closely to that of a person.  E.g., its physical world 
wouldn't really exist.  The "real world" would be limited to 
non-removable senses, so nothing that was connected, say, via a USB 
port would count (unless it was always both connected and on).  This 
includes video cameras...which it could have, but which wouldn't be 
built-in, and would be subject to being replaced and ending up on 
different ports.  And if there were a pair of them, the direction that 
they were pointing would probably be independently variable, as would 
the distance between them.  At a later stage it might well be given 
control of them, on movable arms that it could also control, rather like 
a Pierson's Puppeteer.  Touch is less obvious about how to handle, but 
it's being worked on.


But note that these sensory devices are just that, sensory devices that 
it can use, and which can be added or removed.  This yields a very 
different world model than the one people develop with: one in which 
reality adheres to the internal states and not to the externalities.  
The external world will forever be a calculation device, and 
consciously known to be so.  (This is unlike people where it's also a 
calculation device, but where it is generally only intellectually known 
that the state of the world as reported by the sensors is largely an 
artifact of the sensors.  [And if you doubt that, consider a visit to 
the dentist.  With and without anesthetic.])




Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-30 Thread Charles D Hixson
Stripping away a lot of your point here, I just want to point out how 
many jokes are memorized fragments.  A large part of what is going on 
here is using a large database.   I'm not disparaging your point about 
pattern matching being necessary, but one normally pattern matches and 
returns a pre-computed result rather than constructing a new result 
from scratch.  This works well for two complementary reasons:
1)  The results that you've stored will already have been filtered to 
meet some minimal quality standard (and you will have had time to assess 
their quality off-line)
2)  The results that are part of the common culture are more easily 
recognized and processed by the others with whom you interact.


These two reasons act together to limit the amount of originality that 
anyone shows in common discourse.  (Humorists spend a lot of time 
polishing their jokes before they present them to a wide audience.)


I would assert that this same process operates in all areas of 
metaphor.  I.e., that human speech is very largely reproductions of 
chunks that have been previously encountered, where the size of the 
chunk is usually larger than a single word, or even pair of words.
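
(A toy sketch in Python of "pattern match and return a pre-computed 
result": replies are pulled whole from a store of previously polished 
fragments rather than composed from scratch.  The store and the trigger 
patterns are invented for illustration.)

    import re

    # A tiny store of pre-computed, pre-polished fragments keyed by trigger patterns.
    FRAGMENTS = [
        (re.compile(r"\bhow are you\b", re.I), "Can't complain...nobody listens anyway."),
        (re.compile(r"\bfencing\b", re.I), "En garde!"),
    ]

    def respond(utterance):
        for pattern, canned_reply in FRAGMENTS:
            if pattern.search(utterance):
                return canned_reply          # retrieved, not constructed
        return None                          # no stored chunk; real composition would be needed

    print(respond("How would you feel about fencing in our yard?"))   # En garde!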


Mike Tintner wrote:

Mike,

There is something fascinating going on here - if you could suspend 
your desire for precision, you might see that you are at least 
half-consciously offering contributions as well as objections. (Tune 
in to your constructive side).


I remember thinking that you were probably undercutting yourself with 
the example of the elephant and the chair. Here you certainly are.


What you offered was a fine example of human adaptivity. Your wife 
took a fairly straightforward sentence "How would you feel about 
fencing in our yard?" and found a new kind of meaning for it - a new 
and surprising kind of way of achieving the goal of understanding it - 
switched from the obvious meaning of fencing to the fighting meaning. 
That's classic adaptivity.


Jokes do this all the time - see Arthur Koestler's The Ghost in the 
Machine. They are another form of adaptivity/ creativity.


[Another comparable example would be the Airport-type joke:
A: You can't mean: go to the hospital, surely?
B: Yes I do. And don't call me Shirley.]
...

- Original Message - From: Mike Dougherty [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, April 30, 2007 2:44 AM
Subject: Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?



On 4/29/07, Richard Loosemore [EMAIL PROTECTED] wrote:

The idea that human beings should constrain themselves to a simplified,
...


?

ok, I know t... enough)





Re: SV: [agi] mouse uploading

2007-04-29 Thread Charles D Hixson
I think someone at UCLA did something similar for lobsters.  This was 
used as material for an SF story (Lobsters, Charles Stross[sp?])


Jan Mattsson wrote:

Has this approach been successful for any lesser animals? E.g.; has anyone 
simulated an insect brain system connected to a simulated insect body in a virtual 
environment? Starting with a mouse brain seems a bit ambitious.

Since I haven't posted on the list before I guess I should introduce myself: I'm Jan Mattsson in 
Stockholm, Sweden. A software developer by profession, I first became interested in AI when I read 
Gödel Escher Bach - an Eternal Golden Braid many years ago (actually switched from 
physics to computer science because of it). More recently I read Kurzweil's The Singularity 
is near, that brought me here.

/JanM


-Original Message-
From: J. Storrs Hall, PhD. [mailto:[EMAIL PROTECTED]
Sent: Sat 2007-04-28 19:15
To: agi@v2.listbox.com
Subject: [agi] mouse uploading
 
In case anyone is interested, some folks at IBM Almaden have run a 
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (in 
0.1 real time):


http://news.bbc.co.uk/2/hi/technology/6600965.stm
http://ieet.org/index.php/IEET/more/cascio20070425/
http://www.modha.org/papers/rj10404.pdf which reads in gist:

Neurobiologically realistic, large-scale cortical and sub-cortical simulations 
are bound to play a key role in computational neuroscience and its 
applications to cognitive computing. One hemisphere of the mouse cortex has 
roughly 8,000,000 neurons and 8,000 synapses per neuron. Modeling at this 
scale imposes tremendous constraints on computation, communication, and 
memory capacity of any computing platform. 
 We have designed and implemented a massively parallel cortical simulator with 
(a) phenomenological spiking neuron models; (b) spike-timing dependent 
plasticity; and (c) axonal delays.  
 We deployed the simulator on a 4096-processor BlueGene/L supercomputer with 
256 MB per CPU. We were able to represent 8,000,000 neurons (80% excitatory) 
and 6,300 synapses per neuron in the 1 TB main memory of the system. Using a 
synthetic pattern of neuronal interconnections, at a 1 ms resolution and an 
average firing rate of 1 Hz, we were able to run 1s of model time in 10s of 
real time!
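
(A rough back-of-the-envelope check of those figures, just to put them in 
scale; the per-synapse byte count below is simple arithmetic from the 
quoted numbers, not a figure from the paper.)

    neurons = 8 * 10**6
    synapses_per_neuron = 6300
    memory_bytes = 10**12                      # roughly 1 TB of main memory
    synapses = neurons * synapses_per_neuron   # 5.04e10 synapses
    print(memory_bytes / float(synapses))      # on the order of 20 bytes per synapse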


Josh



Re: [agi] My proposal for an AGI agenda

2007-03-22 Thread Charles D Hixson

Chuck Esterbrook wrote:

On 3/20/07, Charles D Hixson [EMAIL PROTECTED] wrote:

rooftop8000 wrote:
 ...
 I think we should somehow allow people to use all the program 
languages they want.


That somehow is the big problem.  Most approaches to dealing with it
are...lamentable.
 ...
 You can use closed modules if you have meta-information on
 how to use them and what they do. It's like having an API and not
 worrying about the inner workings...

Module level communication is probably too low a level.   Socket level
communication can work easily between arbitrary languages, but it's
cumbersome, so it's generally best if you have large sections devoted to
any particular language.

P.S.:  There are exceptions.  E.g. D claims to work well with C, and
Pyrex works well with C.  OTOH, D and Python each have their own garbage
collection mechanism, and they don't synchronize at all, so going from
Python to C to D (or conversely) is going to have a lot of overhead.
Add Java to the mix and you have THREE garbage collectors.  Haskell
would make four.  This isn't something you're going to want to carry
around for a small chunk of code.  Better if it's large pieces that talk
over TCP/IP, or something analogous.  (And TCP/IP is ubiquitous.)


The best solution I've seen for this to date is MS .NET and its open
source clone, Novell Mono. You can, for example, run C#, VB, J#, C++,
IronPython, IronRuby and many more languages on the same platform in
the same process. While this is also true of the JVM, there are some
explicit aspects of .NET that make it appealing for multiple
languages. It has a standard for source code generation so that tools
that do so can work with any language (that provides a simple
adapter). It has callbacks/delegates which are found in many languages
and thereby makes it easier to get those languages working and
efficient. Upcoming .NET enhancements, such as in-language query, are
also being designed in a language independent fashion from the get go.

Also, MS hired the developer of IronPython to work on it full time and
also inform them on making .NET an even better platform for dynamic
languages. He had previously implemented Python on Java (called
Jython) and I find it interesting that the .NET version performs
substantially faster (see
http://weblogs.asp.net/jezell/archive/2003/12/10/42407.aspx).

Regarding communication between modules, I agree with others that
there are better choices than English, but more importantly, if you're
going to really push for multiple contributors, the best might be to
encourage them to use one or more from a set such as:

* json data
* xml data
* csv
* first order logic
* lojban
* Simple English (http://en.wikipedia.org/wiki/Simple_English_Wikipedia)
* English

And then let the marketplace work it out. Your language or library
could provide automatic views of objects, classes, etc. in a couple of
these formats.

-Chuck

Unfortunately, MS is claiming undefined things as being proprietary.  As 
such, I intend to stay totally clear of implementations of its 
protocols, including Mono.  I am considering the JVM, however, as Sun has 
now freed the Java license (and was never very restrictive).





Re: [agi] My proposal for an AGI agenda

2007-03-22 Thread Charles D Hixson

Chuck Esterbrook wrote:

On 3/22/07, Charles D Hixson [EMAIL PROTECTED] wrote:

Unfortunately, MS is claiming undefined things as being proprietary.  As
such, I intend to stay totally clear of implementations of its
protocols, including Mono.  I am considering the JVM, however, as Sun has
now freed the Java license (and was never very restrictive).


Do you have details or a reference? I would be interested in having a 
look.


-Chuck
Sorry, they aren't *being* specific.  It may well have nothing to do 
with dot-net, it's just too vague to see what they're talking about.  As 
for a reference, look at the recent comments about patents and the 
Novell deal.  They're claiming something...but one can't determine 
what.  I live in the US, so I'm playing safe, and avoiding both Novell 
and dotnet (including mono).  All I know is it can't be more than about 
17 years old, and it can't be something that they're barred from 
claiming via estoppel.  But I'm no lawyer, so I'm not sure what that 
is.  Various comments and personalities have caused me to suspect that 
it's something involving mono...but this is definitely not clear and 
convincing, it's just the best info I have.  (Partially because Miguel 
De Icaza was all in favor of the deal, and he's heavily involved in mono.)






Re: [agi] My proposal for an AGI agenda

2007-03-20 Thread Charles D Hixson

rooftop8000 wrote:

...
I think we should somehow allow people to use all the program languages they 
want.
  
That somehow is the big problem.  Most approaches to dealing with it 
are...lamentable.

...
You can use closed modules if you have meta-information on
how to use them and what they do. It's like having an API and not
worrying about the inner workings...
  
Module level communication is probably too low a level.   Socket level 
communication can work easily between arbitrary languages, but it's 
cumbersome, so it's generally best if you have large sections devoted to 
any particular language.


P.S.:  There are exceptions.  E.g. D claims to work well with C, and 
Pyrex works well with C.  OTOH, D and Python each have their own garbage 
collection mechanism, and they don't synchronize at all, so going from 
Python to C to D (or conversely) is going to have a lot of overhead.  
Add Java to the mix and you have THREE garbage collectors.  Haskell 
would make four.  This isn't something you're going to want to carry 
around for a small chunk of code.  Better if it's large pieces that talk 
over TCP/IP, or something analogous.  (And TCP/IP is ubiquitous.)
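
(A minimal sketch, in Python, of the sort of "large pieces that talk 
over TCP/IP" arrangement I have in mind: one process serves a function 
over a socket, and anything that can open a TCP connection and speak 
JSON can call it, regardless of language or garbage collector.  The port 
number and message format are arbitrary choices for illustration.)

    import json
    import socketserver

    class UpperCaseHandler(socketserver.StreamRequestHandler):
        # Reads one JSON object per line and replies with a JSON result per line.
        def handle(self):
            for line in self.rfile:
                request = json.loads(line)
                reply = {"result": request.get("text", "").upper()}
                self.wfile.write((json.dumps(reply) + "\n").encode("utf-8"))

    if __name__ == "__main__":
        # A client written in Java, C, D, etc. just connects to port 9999 and exchanges JSON lines.
        with socketserver.TCPServer(("localhost", 9999), UpperCaseHandler) as server:
            server.serve_forever()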




Re: [agi] My proposal for an AGI agenda

2007-03-18 Thread Charles D Hixson

Russell Wallace wrote:
On 3/13/07, *J. Storrs Hall, PhD.* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


But the bottom line problem for using FOPC (or whatever) to
represent the
world is not that it's computationally incapable of it -- it's Turing
complete, after all -- but that it's seductively easy to write
propositions
with symbols that are English words and fool yourself into
thinking you've
accomplished representation.


Yeah. Basically when I advocate logical representation, I'm assuming 
everyone on this list is past that pitfall at least. If I were writing 
an introductory textbook on AI, I'd dwell at length on it.


A real working logic-based system that did what
it needed to would consist mostly of predicates like


fmult(num(characteristic(Sign1,Bit11,Bit12,...),mantissa(Bitm11,Bitm12,...)),
      num(characteristic(Sign2,Bit21,Bit22,...),mantissa(Bitm21,Bitm22,...)),
      num(characteristic(Sign3,Bit31,Bit32,...),mantissa(Bitm31,Bitm32,...)))
    :- ... .

And it would wind up doing what my scheme would, e.g. projecting the
n-dimensional trajectory of the chipmunk's gait and the leaf's
flutter into a
reduced space, doing a Fourier transform on them, and noting that
there was a
region in frequency space where the clusters induced by the two
phenomena
overlapped.


Dunno about mostly but yes, large chunks of it would consist of just 
that. So be it. We need a standard representation format. No format is 
going to be readable in all cases. Logic is about as good as we'll get 
for readability across a wide range of cases.


(And let me emphasize yet again that I am NOT thereby advocating that 
we write the whole shebang in Prolog, or 


Perhaps it would be best to have, say, four different formats for 
different classes of problems (with the understanding that most problems 
are mixed).  E.g., some classes of problems are best represented via a 
priority queue, others via a tree that can be alpha-beta pruned, etc.  
For internal processing, images might be best implemented via some 
derivative of SVG, though the external representations are plausibly bit 
maps.  Etc.


In this case logical predicates would be great for describing relations 
between the various processes (e.g., if you see three lines, and each 
intersects with both of the others, then you will have a triangle.), but 
not so great for describing the primitives.  The SVGish images can be 
converted into bit-maps by known procedures, but logic is a very slow 
and cumbersome approach to this.


Similarly, if you have several tasks to achieve, a priority queue is 
more efficient than logic, though logic can certainly handle the job. 
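
(A trivial sketch of that point in Python: the priority queue below does, 
in a few lines and at O(log n) per operation, what would be quite 
cumbersome to state and execute as logical inference.  The task names are 
made up.)

    import heapq

    tasks = []                                   # the queue: a heap of (priority, task) pairs
    heapq.heappush(tasks, (2, "update world model"))
    heapq.heappush(tasks, (1, "answer pending query"))
    heapq.heappush(tasks, (3, "garbage-collect old memories"))

    while tasks:
        priority, task = heapq.heappop(tasks)    # always yields the lowest-numbered priority first
        print(priority, task)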

You can, if you like, think of the non-logical methods as compiled 
versions of what the logical description would have been, and this 
would be technically correct, since a computer is basically a logic 
engine, but that's not a particularly useful way to chunk the problem.


Also, I note that I'm presuming that the elementary AI has numerous 
high level chunks built-in.  I feel this will be necessary as a 
starting point, though I doubt that they need to remain opaque as the AI 
increases its capabilities.  If you build in a topological sort 
function, this will be useful in learning before the AI knows what a 
topological sort is.  It needn't remain opaque, however.  If you label 
it as topological sort in an AI viewable comment, this will facilitate 
the AI discovering just how it thinks (and possibly debugging the code), 
but such introspection shouldn't be necessary to reach the starting line.




Re: [agi] My proposal for an AGI agenda

2007-03-18 Thread Charles D Hixson

Russell Wallace wrote:
On 3/18/07, *Charles D Hixson* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Perhaps it would be best to have, say, four different formats for
different classes of problems (with the understanding that most problems
are mixed).  E.g., some classes of problems are best represented via a
priority queue, others via a tree that can be alpha-beta pruned, etc.
For internal processing, images might be best implemented via some
derivative of SVG, though the external representations are plausibly bit
maps.  Etc.


But datawise a priority queue is just a set of things with priority 
numbers attached. The fact that you are going to _use_ it as a 
priority queue is a property of the code module, not the data. 
Similarly, the alpha-beta algorithm is, well, an algorithm - not a 
reason to create an incompatible format. And see earlier comments 
about graphics being semantically represented as logic, even if the 
implementation uses specialized data structures for efficiency.
Yes, datawise a priority queue is just a set of things with priority 
numbers attached and the alpha-beta algorithm is, well, an algorithm, 
but neither of those is propositional logic.  Yes, you CAN represent 
them as logic (you can represent anything as logic ... at least once you 
include some extensions for probabilities, etc.), but that's quite 
cumbersome.  There are reasons why people use programming languages 
rather than boolean logic propositions for programming.  And there are 
reasons why there are several DIFFERENT programming languages, even 
though most of them are Turing complete.  The reasons boil down to the 
match of the language against the problem domain ... otherwise known as 
efficiency.  I consider it quite possible that I have drastically 
underestimated the number of different data formats necessary (yes, 
alpha-beta is an algorithm, but an efficient implementation of it implies a 
particular subset of data structures); I don't think I have 
overestimated them.


Now I'll grant that on some basic level what one is doing is processing 
logic terms.  It's true -- well at least to the extent that processing 
high vs. low voltages, or negative vs. positive, or whatever the 
semiconductor technique of the year is doing counts as processing 
logic.  (It's not quite an isomorphism, but it comes pretty close.)  
This doesn't make it a reasonable approach on a higher level.  Using 
logic on higher levels needs its own separate justification.  There are 
areas where it is the optimal choice, but I don't believe they cover 
everything.  To me it seems that sometimes it's better to have a 
translation interface than to implement everything in logic.


(OTOH, I'm still floundering as to exactly what representations I should 
choose.  At least you've got a clear path in front of you.)


These are just my opinions.  I wouldn't even try to prove them.





Re: [agi] general weak ai

2007-03-09 Thread Charles D Hixson

Russell Wallace wrote:
On 3/9/07, *Charles D Hixson* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Russell Wallace wrote:
 To test whether a program understands a story, start by having it
 generate an animated movie of the story.



Nearly every person I know would fail THAT test.


Perhaps, but what of it? I'm not trying to put forward a philosophical 
proof of the in-principle possibility of AI, but an agenda to improve 
the probability that an AI project will produce useful software, so my 
criterion isn't fair but likely to produce useful results.


I've helped make a SHORT (30 sec.) animation.  You aren't making a
trivial request.


It's trivial compared to AI :)


You aren't requesting it of the person, you're requesting it of the AI.
In other words, you are insisting that the AI demonstrate more 
capabilities (in a restricted domain, admittedly) than an average person 
before you will admit that it is intelligent.


A fairer request would be that it sketch a few scenes from the story (with 
text annotations indicating what they were supposed to represent).  At 
this you are already requiring mastery of skills not normally attained 
by 4 year olds.  (Well, not if the characters are supposed to be 
recognizable.)




Re: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-18 Thread Charles D Hixson

Chuck Esterbrook wrote:

On 2/18/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

Mark Waser wrote:
...


I find C++ overly complex while simultaneously lacking well known
productivity boosters including:
* garbage collection
* language level bounds checking
* contracts
* reflection / introspection (complete and portable)
* dynamic loading (portable)
* dynamic invocation

Having benefited from these in other languages such as Python and C#,
I'm not going back. Ever.
...
Best regards,

-Chuck
You might check out D ( http://www.digitalmars.com/d/index.html ).  Mind 
you, it's still quite early days, and it's missing a lot of libraries 
... which means you need to construct interfaces to the C versions.  
Still, it answers several of your objections, and has partial answers to 
at least one of the others.




Re: [agi] Relevance of Probability

2007-02-04 Thread Charles D Hixson

Richard Loosemore wrote:


...

[ASIDE.  An example of this.  The system is trying to answer the 
question Are all ravens black?, but it does not just look to its 
collected data about ravens (partly represented by the vector of 
numbers inside the raven concept, which are vaguely related to the 
relevant probability), it also matters, quite crucially, that the STM 
contains a representation of the fact that the question is being asked 
by a psychologist, and that whereas the usual answer would be p(all 
ravens are black) = 1.0, this particular situation might be an attempt 
to make the subject come up with the most bizarre possible 
counterexamples (a genetic mutant; a raven that just had an accident 
with a pot of white paint, etc. etc.).  In these circumstances, the 
numbers encoded inside concepts seem less relevant than the fact of 
there being a person of a particular type uttering the question.]

...
Just doing my usual anarchic bit to bend the world to my unreasonable 
position, that's all ;-).


Richard Loosemore.
I would model things differently, the reactions would likely be the 
same, but ...


One encounters an assertion "All ravens are black" (in some context).
One immediately hits memories of previously encountering this (or an 
equivalent?) statement.

One then notices that one hasn't encountered any ravens that aren't black.
Then one creates a tentative acknowledgement "Yes, all ravens are black."
One evaluates the importance of an accurately correct answer (in the 
current context).  If approximate is good enough, one sticks with this 
acknowledgement.


If, however, it's important to be precisely accurate, one models the 
world, examining what features might cause a raven to not be black.  If 
some are found, then one modifies the statement, thus:  "All ravens are 
black, except for special circumstances."
One checks to see whether this suffices.  If not, then one begins 
attaching a list of possible special circumstances, in order of 
generation from the list:
"All ravens are black, except for special circumstances, such as:  
they've acquired a coat of paint (or other coloring material), there 
might be a mutation that would change their color, etc."

The significant thing here is that there are many stages where the 
derivation could be truncated.  At each stage a check is made as to 
whether it's necessary to continue: just how precise an answer is 
needed?  Your example of a psychologist asking the question shapes the 
frame of the quest for "sufficiently precise", but that frame is always 
present.  Rarely does one calculate a complete answer.  Usually one 
either stops at "good enough", or retrieves an appropriate answer from 
memory.
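
(A sketch in Python of that "truncate when good enough" control flow.  
The stages, the precision scale, and the helper names are invented for 
illustration; the shape of it, cheap memory lookup first and costlier 
world modeling only when the context demands it, is the point.)

    def answer(question, required_precision, memory, model_world):
        # Stage 1: cheap retrieval of a previously stored answer.
        cached = memory.get(question)
        if cached is not None and required_precision <= 1:
            return cached

        # Stage 2: tentative acknowledgement; good enough for most contexts.
        tentative = "Yes, all ravens are black."
        if required_precision <= 2:
            return tentative

        # Stage 3: only now pay for modeling the world and listing exceptions.
        exceptions = model_world(question)
        return (tentative[:-1] + ", except for special circumstances such as: "
                + ", ".join(exceptions) + ".")

    print(answer("Are all ravens black?", 3, {},
                 lambda q: ["a coat of paint", "a color-changing mutation"]))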


Note that I implicitly asserted that, in this case, modeling the world 
was more expensive than retrieving from memory.  That's because that's 
how I experienced it.  It is, however, not always true.  Also, if the 
answer to a question is dependent on the current context, then modeling 
the world may well be the only way to derive an answer.  (Memory will 
still be used to set constraints and suggest approaches.  This is 
because that approach is faster and more efficient than calculating such 
things de novo...and often more accurate.)


This is related to the earlier discussion on optimality.  I feel that 
generally minds don't even attempt optimality as normally defined, but 
rather search for a least cost method that's good enough.  Of course, 
if several good enough methods are available the most nearly optimal 
will often be chosen.  Not always though.  Exploration is a part of what 
minds do.  A lot depends on what the pressures are at the moment.  One 
could consider this exploration as the search for a more nearly 
optimal method, but I'm not sure that's an accurate characterization.  
I rather suspect that what's happening is a getting to know the 
environment.   Of course, one could always argue that in a larger 
context this is more nearly optimal...because minds have been selected 
to be more nearly optimal than the competition, but it's a global 
optimality, not the optimality in any particular problem.  And, of 
course, the optimal organization of a mind historically depends upon the 
body that it's inhabiting.  Thus beavers, cats, and humans will approach 
the problem of crossing a stream differently.  Of them all, only the 
beaver is likely to have a mind that is tuned to a nearly optimal 
approach to that problem.  (And its optimal approach would be of no use 
to a human or a cat, because of the requirement that minds match their 
bodies.)


Is the AGI going to be disembodied?  Then it will have a very different 
optimal organization than will a human.  But a global optimization of 
the AGI will require that it initially be able to communicate with and 
understand the motivations of humans.  This doesn't imply that humans 
will understand its motivations.  Odds are they will do so quite 
poorly.  They will probably easily model the AGI as if it were another human.

Re: [agi] foundations of probability theory

2007-01-28 Thread Charles D Hixson

gts wrote:

Hi Ben,

On Extropy-chat, you and I and others were discussing the foundations 
of probability theory, in particular the philosophical controversy 
surrounding the so-called Principle of Indifference. Probability 
theory is of course relevant to AGI because of its bearing on decision 
theory (I assume that's why you invited me here. :)


As you know, the Principle of Indifference (PI) states that if no 
reason exists to prefer any of n possibilities then each possibility 
should be assigned a probability equal to 1/n. The PI is known also as 
the Principle of Insufficient Reason, the name given it by classical 
probabilists who followed after Laplace, who took it for granted as a 
self-evident principle of logic. (It was John Maynard Keynes who later 
renamed it the Principle of Indifference.)


I found a discussion of the Principle of Insufficient Reason in this 
book about decision theory:


Choices: An Introduction to Decision Theory By Michael D. Resnik
http://books.google.com/books?vid=ISBN0816614407id=4genrKNUkKcCpg=RA2-PA35lpg=RA2-PA35ots=wE4Uxk7bqEdq=principle+of+insufficient+reasonsig=PsMUy3fqcMgFha8Kyx2HLaC-EA8 



This author criticizes the PI in two ways. His first is mainly 
philosophical: if there is no reason for assigning one set of 
probabilities rather than another, then there is no reason for 
assuming the states are equiprobable either. This is pretty much the 
same argument I was trying to make on ExI.


His second objection is one we had not discussed: though the PI seems 
like a common-sense way to proceed under conditions of uncertainty, 
invoking it can sometimes lead to disastrous consequences for the 
decision-maker. I would add that while the PI might be useful in some 
situations as a heuristic device in programming AGI, perhaps some 
accounting should be made for the extra risk it entails.


-gts

You have a point, but perhaps not the one you think.
The principle of indifference states that you should consider the 
probabilities of each case to be 1/n; it doesn't say anything about the 
relative costs of acting as if each was of equal weight.  Generally, if a 
situation is potentially more dangerous (not an increased probability of 
harm, but a probability of increased harm), the reward for not avoiding it 
will need to be proportionately FAR greater.


This means that even though the probabilities of occurrence must be 
deemed equal, one shouldn't weight the cases equally unless the cost of 
each event occurring is also equal.  As such, "indifference" may be a 
poor name.
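
(A tiny worked example in Python: three cases deemed equally probable 
under the PI, but with very unequal costs, so the expected cost of 
ignoring them is dominated by the rare-but-huge harm.  The numbers are 
invented.)

    cases = ["benign", "annoying", "catastrophic"]
    p = 1.0 / len(cases)                     # the PI: equal probability for each case
    cost_if_ignored = {"benign": 0, "annoying": 10, "catastrophic": 10**6}

    expected_cost = sum(p * cost_if_ignored[c] for c in cases)
    print(expected_cost)                     # about 333336.7, almost all from the worst case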


As to whether the assumption of equal probability is valid...it seems a 
reasonable default position, but should be invested with very low 
certainty.  This means that one should try to avoid any plans based on 
the presumption that equal probability is correct, but doesn't mean that 
there is a better default position.




Re: [agi] Project proposal: MindPixel 2

2007-01-26 Thread Charles D Hixson

Philip Goetz wrote:

On 1/17/07, Charles D Hixson [EMAIL PROTECTED] wrote:


It's fine to talk about making the data public domain, but that's not
a good idea.


Why not?
Because public domain offers NO protection.  If you want something 
close to what public domain used to provide, then the MIT license is a 
good choice.  If you make something public domain, you are opening 
yourself to abusive lawsuits.  (Those are always a possibility, but a 
license that disclaims responsibility offers *some* protection.)


Public domain used to be a good choice (for some purposes), before 
lawsuits became quite so pernicious.




Re: [agi] Project proposal: MindPixel 2

2007-01-20 Thread Charles D Hixson

Benjamin Goertzel wrote:

 And, importance levels need to be context-dependent, so that assigning
 them requires sophisticated inference in itself...


The problem may not be so serious.  Common sense reasoning may 
require only

*shallow* inference chains, eg  5 applications of rules.  So I'm very
optimistic =)  Your worries are only applicable to 100-page 
theorem-proving

tasks, not really the concern of AGI.


A) This is just not true, many commonsense inferences require
significantly more than 5 applications of rules

B) Even if there are only 5 applications of rules, the combinatorial
explosion still exists.  If there are 10 rules and 1 billion
knowledge items, then there may be up to 10 billion possibilities to
consider in each inference step.  So there are (10 billion)^5 possible
5-step inference trajectories, in this scenario ;-)

Of course, some fairly basic pruning mechanisms can prune it down a
lot, but, one is still left with a combinatorial explosion that needs
to be dealt with via subtle means...

Please bear in mind that we actually have a functional uncertain
logical reasoning engine within the Novamente system, and have
experimented with feeding in knowledge from files and doing inference
on them.  (Though this has been mainly for system testing, as our
primary focus is on doing inference based on knowledge gained via
embodied experience in the AGISim world.)

The truth is that, if you have a lot of knowledge in your system's
memory, you need a pretty sophisticated, context-savvy inference
control mechanism to do commonsense inference.

Also, temporal inference can be quite tricky, and introduces numerous
options for combinatorial explosion that you may not be thinking about
when looking at atemporal examples of commonsense inference.  Various
conclusions may hold over various time scales; various pieces of
knowledge may become obsolete at various rates, etc.

I imagine you will have a better sense of these issues once you have
actually built an uncertain reasoning engine, fed knowledge into it,
and tried to make it do interesting things  I certainly think this
may be a valuable exercise for you to do.  However, until you have
done it, I think it's kind of silly for you to be speaking so
confidently about how you are so confident you can solve all the
problems found by others in doing this kind of work!!  I ask again, do
you have some theoretical innovation that seems probably to allow you
circumvent all these very familiar problems??

-- Ben
Possibly this could be approached by partitioning the rule-set into 
small chunks of rules that work together, so that one didn't end up 
trying everything against everything else.  These chunks of rules 
might well be context dependent, so that one would use different chunks 
at a dinner table than in a workshop.  There would need to be ways to 
combine different chunks of rules, of course, so e.g. a restaurant table 
would be different from a dinner table, but would have overlapping sets 
of rules.  (I hope I'm not just re-inventing frames...)




Re: [agi] Project proposal: MindPixel 2

2007-01-20 Thread Charles D Hixson

Benjamin Goertzel wrote:

Hi,


Possibly this could be approached by partitioning the rule-set into
small chunks of rules that work together, so that one didn't end up
trying everything against everything else.  These chunks of rules
might well be context dependent, so that one would use different chunks
at a dinner table than in a workshop.  There would need to be ways to
combine different chunks of rules, of course, so e.g. a restaurant table
would be different from a dinner table, but would have overlapping sets
of rules.  (I hope I'm not just re-inventing frames...)


The issue is how these contexts are learned.  If contexts have to be
programmer-supplied, then you ARE just reinventing frames.

Context formation is a tricky inference problem in itself

-- Ben
Well, my rather vague idea was to start with a very small rule set, that 
didn't need to be partitioned, and evolve rule-sets by statistical 
correlation (what tends to get used with what).  As new rules are added, 
at some point clusters would need to separate (for efficiency).  I 
suppose this could all be done with activation levels, but that's not 
the way I tend to think of it. 

OTOH, if the local cluster can't handle the deduction, it would need to 
check the most closely associated/most activated clusters to see if 
they could handle it.  Not sure how well this would work.  Clearly it 
has no more theoretical power than having all the rules in a large 
table, but I feel it would be a more efficient organization.
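
(A very rough sketch in Python of the "what tends to get used with what" 
idea: count how often pairs of rules fire in the same derivation, then 
group rules whose co-usage passes a threshold.  The greedy grouping and 
the threshold are placeholders, not a worked-out clustering method.)

    from collections import Counter
    from itertools import combinations

    def cluster_rules(derivations, threshold=3):
        # derivations: a list of sets, each holding the names of rules that fired together once.
        co_usage = Counter()
        for rules in derivations:
            for a, b in combinations(sorted(rules), 2):
                co_usage[(a, b)] += 1

        clusters = []                            # greedy grouping; crude, but shows the idea
        for (a, b), count in co_usage.most_common():
            if count < threshold:
                break
            for cluster in clusters:
                if a in cluster or b in cluster:
                    cluster.update((a, b))
                    break
            else:
                clusters.append({a, b})
        return clusters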


Also, I don't have any definition of rule yet.  It's not at all clear 
that it would be easy to translate into something a person not familiar 
with the details of the hardware and software would understand.  (If a 
certain area of RAM is mapped to a video camera, reading/writing the ram 
will naturally mean something very different than it would mean in other 
contexts.  Writing to it might be a request to alter the scene.  (A 
silly way to do things, but it's for the sake of the point, not for real 
implementation.)  I'm not at all sure that rules of the form "if x do y, 
then check for result z (if not raise exception w)" will suffice, even 
if you allow great flexibility as to what x, y, z, and w are interpreted 
as.  Possibly if they could be generalized functions (with x and z 
limited to not causing side effects).




Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Charles D Hixson

YKY (Yan King Yin) wrote:

...
 
I think a project like this one requires substantial efforts, so 
people would need to be paid to do some of the work (programming, 
interface design, etc), especially if we want to build a high quality 
knowledgebase.  If we make it free then a likely outcome is that we 
get a lot of noise but very few people actually contribute.
 
I'm not an academic (left uni a couple years ago) so I can't get 
academic funding for this.  If I can't start an AI business I'd have 
to entirely give up AI as a career.  I hope you can understand these 
circumstances.
 
YKY
I can understand those circumstances, but if you expect people to 
contribute, you must give them something back.  One thing that's cheap 
to give back is the work that they and others have contributed.  Giving 
back less generally results in people not being willing to participate.  
Even if you claim sole rights to commercially exploit the work, you will 
find it much more difficult to get folk to participate.  They will feel 
that you are stealing their work without just compensation.


You raise the issue of compensation to you, and that's fair.  But if you 
take out too much, you will cause the project to fail just as surely as 
if you hadn't put in the time to design the interface.  If you merely 
make a requirement that people be a better than average contributor to 
be entitled to download the  current results, then you will eliminate 
most potential competitors...and the remaining ones will be those who 
are also dedicating time and effort to making your project work.  It's 
true that old versions of your work will circulate, but that should do 
little harm.


People only participate in a public project if they feel they are 
getting a good return out of it.  What a good return is, is 
subjective, but few people consider "I put in a bunch of work, and they 
don't even mention my name" to be a good return.  You want to give 
people a return that they see as more valuable than their efforts, but 
which costs you a lot less than their efforts.  Status in a community 
requires that the community exist.  (At some point you'll want to give 
people scores depending on the amount of their work that is included in 
the current project...or something that will relate positively to that.  
This is a cheap status reward, and will boost community participation.  
On Slashdot I notice that just having a low numbered user ID has become 
a status marker of sorts.  I.e., you've been a member of the community 
for a long time.  That was a REALLY cheap status gift, but it took a 
long time to build to anything of value.  Much quicker was the right to 
meta-moderate.  Slightly less quick was the right to moderate.  Note 
that these are both seen by the Slashdot community as things of worth, 
yet to the operator of Slashdot they were instituted as ways of cutting 
cost while improving quality.  Also note that it took a long time for 
them to become worth much as status markers.  You need something else to 
use while you're getting started.)




Re: [agi] Project proposal: MindPixel 2

2007-01-17 Thread Charles D Hixson

Joel Pitt wrote:

...
Some comments/suggestions:

* I think such a project should make the data public domain. Ignore
silly ideas like giving be shares in the knowledge or whatever. It
just complicates things. If the project is really strapped for cash
later, then either use ad revenue or look for research funding
(although I don't see much cost except for initial development of the
system and web hosting).

...
Making this proprietary and expecting shares to translate into cash 
would indeed be a silly approach.
OTOH, having people's names attached to scores of some type (call them 
shares, or anything else) lets people feel more attached to the 
project.  This is probably necessary for success.  There also needs to 
be some way for the builders to interact, and a few other methods that 
assist the formation of a community.  Newsboards, games, etc. can all be 
useful if structured properly to enhance the formation of a community.  
Perhaps only community members with scores above the median could be 
allowed to download the database?


It's fine to talk about making the data public domain, but that's not 
a good idea.  There are arguments in favor of BSD, MIT, GPL, LGPL, etc. 
licenses.  For this kind of activity I can see either BSD or MIT as 
easily defensible.  (Personally I'd use LGPL, but then if I were using 
it, I'd want the whole application to be GPL.  I might not be able to 
achieve it, but that's what I'd want.)  Public domain wouldn't be one of 
the possibilities that I would consider.  The Artistic license is about 
as close to that as I would want to come...and the MIT license is 
probably a better choice for those purposes.




Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-13 Thread Charles D Hixson

Philip Goetz wrote:

...
The disagreement here is a side-effect of postmodern thought.
Matt is using evolution as the opposite of devolution, whereas
Eric seems to be using it as meaning change, of any kind, via natural
selection.

We have difficulty because people with political agendas - notably
Stephen J. Gould - have brainwashed us into believing that we must
never speak of evolution being forward or backward, and that
change in any direction is equally valuable.  With such a viewpoint,
though, it is impossible to express concern about the rising incidence
of allergies, genetic diseases, etc.

...
To speak of evolution as being forward or backward is to impose upon 
it our own preconceptions of the direction in which it *should* be 
changing.  This seems...misguided.


To claim that because all changes in the gene pool are evolution, that 
therefore they are all equally valuable is to conflate two (orthogonal?) 
assertions.  Value is inherently subjective to the entity doing the 
evaluation.  Evolution, interpreted as statistical changes in the gene 
pool, is inherently objective (though, of course, measurements of it may 
well be biased).


Stephen J. Gould may well have been more of a popularizer than a research 
scientist, but I feel that your criticisms of his presentations are 
unwarranted and made either in ignorance or malice.  This is not a 
strong belief, and were evidence presented I would be willing to change 
it, but I've seen such assertions made before with equal lack of 
evidential backing, and find them distasteful.


That Stephen J. Gould had some theories of how evolution works that are 
not universally accepted by those skilled in the field does not warrant 
your comments.  Many who are skilled in the field find them either 
intriguing or reasonable.  Some find them the only reasonable proposal.  
I can't speak for most, as I am not a professional evolutionary 
biologist, and don't know that many folk who are, but it would not 
surprise me to find that most evolutionary biologists found his 
arguments reasonable and unexceptional, if not convincing.




Re: [agi] The Singularity

2006-12-05 Thread Charles D Hixson

Ben Goertzel wrote:

...
According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human child like intuition of the AGI system will
be able to synergize with its computer like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben
I do, however, have some question about it being a hard takeoff.  That 
depends largely on

1) how efficient the program is, and
2) what computer resources are available.

To me it seems quite plausible that an AGI might start out as slightly 
less intelligent than a normal person, or even considerably less 
intelligent, with the limitation being due to the available computer 
time.  Naturally, this would change fairly rapidly over time, but not 
exponentially so, or at least not super-exponentially so.


If, however, the singularity is delayed because the programs aren't 
ready, or are too inefficient, then we might see a true hard-takeoff.  
In that case by the time the program was ready, the computer resources 
that it needs would already be plentifully available.   This isn't 
impossible, if the program comes into existence in a few decades, but if 
the program comes into existence within the current decade, then there 
would be a soft-takeoff.  If it comes into existence within the next 
half-decade then I would expect the original AGI to be sub-normal, due 
to lack of available resources.


Naturally all of this is dependent on many different things.  If Vista 
really does require as much of an immense retooling to more powerful 
computers as some predict, then programs that aren't dependent on Vista 
will have more resources available, as computer designs are forced to be 
faster and more capacious.  (Wasn't Intel promising 50 cores on a single 
chip in a decade?  If each of those cores is as capable as a current 
single core, then it will take far fewer computers netted together to 
pool the same computing capacity...for those programs so structured as 
to use the capacity.)


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson

BillK wrote:

...

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.
...

BillK
I think you've got a time inversion here.  The list of reasons to go 
ahead is frequently, or even usually, created AFTER the action has been 
done.  If the list is being created BEFORE the decision, the list of 
reasons not to go ahead isn't ignored.  Both lists are weighed, a 
decision is made, and AFTER the decision is made the reasons decided 
against have their weights reduced.  If, OTOH, the decision is made 
BEFORE the list of reasons is created, then the list doesn't *get* 
created until one starts trying to justify the action, and for 
justification obviously reasons not to have done the thing are 
useless...except as a layer of whitewash to prove that all 
eventualities were considered.


For most decisions one never bothers to verbalize why it was, or was 
not, done.



P.S.:  "...and AFTER the decision is made the reasons decided against 
have their weights reduced. ...":  This is to reinforce a consistent 
self-image.  If, eventually, the decision turns out to have been the 
wrong one, then this must be revoked, and the alternative list 
reinforced.  At which point one's self-image changes and one says things 
like "I don't know WHY I would have done that", because the modified 
self-image would not have decided in that way.
P.P.S:  THIS IS FABULATION.  I'm explaining what I think happens, but I 
have no actual evidence of the truth of my assertions.
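
[Purely as an illustration of the process I'm describing -- not a claim 
about mechanism -- here is the re-weighting story as a few lines of 
Python.  The reasons and the numbers are made up.]

def decide(reasons_for, reasons_against, discount=0.5):
    """Each argument is a list of (reason, weight) pairs; returns True to go ahead."""
    score_for = sum(w for _, w in reasons_for)
    score_against = sum(w for _, w in reasons_against)
    go_ahead = score_for >= score_against

    # AFTER the decision, the reasons decided against have their weights
    # reduced, so the record ends up agreeing with the choice.
    losers = reasons_against if go_ahead else reasons_for
    losers[:] = [(reason, weight * discount) for reason, weight in losers]
    return go_ahead

pro = [("maybe nobody will find out", 0.4), ("I really want to", 0.9)]
con = [("ruined career", 0.8), ("ruined lives", 0.7)]
print(decide(pro, con))   # False here: the cons outweigh the pros
print(pro)                # the losing (pro) reasons have been discounted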


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson

BillK wrote:

On 12/5/06, Charles D Hixson wrote:

BillK wrote:
 ...
 


No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh various reasons is just a human conceit. This
feature was necessary in early evolution for survival. When danger
threatened, immediate action was required. Flee or fight!  No time to
consider options with the new-fangled consciousness brain mechanism
that evolution was developing.

With the luxury of having plenty of time to reason about decisions,
our consciousness can now play its reasoning games to justify what
subconsciously has already been decided.

NOTE: This is probably an exaggeration / simplification. ;)


BillK
I would say that all decisions are made subconsciously, but that the 
conscious mind can focus attention onto various parts of the problem and 
possibly affect the weighings of the factors.


I would also make a distinction between the conscious mind and the 
verbalized elements, which are merely the story that the conscious mind 
is telling.  (And assert that ALL of the stories that we tell ourselves 
are human conceits, i.e., abstractions of parts deemed significant out 
of a much more complex underlying process.)


I've started reading "What is Thought?" by Eric Baum.  So far I'm only 
into the second chapter, but it seems quite promising.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

James Ratcliff wrote:
There is a needed distinction that must be made here about hunger 
as a goal stack motivator.


We CANNOT change the hunger sensation (short of physical 
manipulations, or mind-control stuff), as it is a given sensation that 
comes directly from the physical body.


What we can change is the placement in the goal stack, or the priority 
position it is given.  We CAN choose to put it on the bottom of our 
list of goals, or remove it from the list and try to starve ourselves 
to death.
  Our body will then continuously send the hunger signals to us, and we 
must decide how to handle that signal.


So in general, the Signal is there, but the goal is not, it is under 
our control.


James Ratcliff
That's an important distinction, but I would assert that although one 
can insert goals above a built-in goal (hunger, e.g.), one cannot 
remove that goal.  There is a very long period when someone on a hunger 
strike must continually reinforce the goal of not-eating.  The goal of 
"satisfy hunger" is only removed when the body decides that it is 
unreachable (at the moment). 

The goal cannot be removed by intention, it can only be overridden and 
suppressed.  Other varieties of goal, volitionally chosen ones, can be 
volitionally revoked.  Even in such cases habit can cause the automatic 
execution of tasks required to achieve the goal to be continued.  I 
retired years ago, and although I no longer automatically get up at 5:30 
each morning, I still tend to arise before 8:00.  This is quite a 
contrast from my time in college when I would rarely arise before 9:00, 
and always felt I was getting up too early.  It's true that with a 
minimal effort I can change things so that I get up at (nearly?) any 
particular time...but as soon as I relax it starts drifting back to 
early morning.


Goals are important.  Some are built-in, some are changeable.  Habits 
are also important, perhaps nearly as much so.  Habits are initially 
created to satisfy goals, but when goals change, or circumstances alter, 
the habits don't automatically change in synchrony.
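
To make the distinction concrete, here is a rough Python sketch of a goal 
stack in which built-in goals can only be suppressed or outranked, while 
volitionally chosen goals can actually be revoked.  The names and 
priorities are illustrative only, not a design proposal.

class Goal:
    def __init__(self, name, priority, built_in=False):
        self.name = name
        self.priority = priority
        self.built_in = built_in
        self.suppressed = False

class GoalStack:
    def __init__(self):
        self.goals = []

    def add(self, goal):
        self.goals.append(goal)

    def revoke(self, name):
        for g in self.goals:
            if g.name == name:
                if g.built_in:
                    # Built-in goals can't be removed by intention,
                    # only overridden and suppressed.
                    g.suppressed = True
                else:
                    self.goals.remove(g)
                return

    def active(self):
        # The body keeps sending the signal; a suppressed built-in goal
        # is still there, just outranked.
        return sorted((g for g in self.goals if not g.suppressed),
                      key=lambda g: g.priority, reverse=True)

stack = GoalStack()
stack.add(Goal("satisfy hunger", priority=5, built_in=True))
stack.add(Goal("finish hunger strike", priority=9))
stack.revoke("satisfy hunger")        # only suppresses it
stack.revoke("finish hunger strike")  # actually removes it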


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

Consider as a possible working definition:
A goal is the target state of a homeostatic system.  (Don't take 
homeostatic too literally, though.)


Thus, if one sets a thermostat to 70 degrees Fahrenheit, then its goal 
is to change the room temperature to be not less than 67 degrees 
Fahrenheit.  (I'm assuming that the thermostat allows a 6 degree heat 
swing, heats until it senses 73 degrees, then turns off the heater until 
the temperature drops below 67 degrees.)


Thus, the goal is the target at which a system (or subsystem) is aimed.

Note that with this definition goals do not imply intelligence of more 
than the most very basic level.  (The thermostat senses its environment 
and reacts to adjust it to suit its goals, but it has no knowledge of 
what it is doing or why, or even THAT it is doing it.)  One could 
reasonably assert that the intelligence of the thermostat is, or at 
least has been, embodied outside the thermostat.  I'm not certain that 
this is useful, but it's reasonable, and if you need to tie goals into 
intelligence, then adopt that model.
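
For concreteness, here is the thermostat's goal written out as a trivial 
homeostatic loop (a sketch in Python; the 67/73 degree band follows the 
assumption above):

class Thermostat:
    def __init__(self, setpoint=70.0, swing=6.0):
        self.low = setpoint - swing / 2    # 67 F: heater comes on below this
        self.high = setpoint + swing / 2   # 73 F: heater goes off at this
        self.heating = False

    def step(self, room_temp):
        # The whole "goal" is just to keep the temperature inside the band.
        if room_temp < self.low:
            self.heating = True
        elif room_temp >= self.high:
            self.heating = False
        return self.heating

t = Thermostat()
for temp in (72, 68, 66, 69, 74, 70):
    print(temp, t.step(temp))   # prints the heater state at each reading
# It senses its environment and reacts to suit its goal, but has no
# knowledge THAT it is doing so.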



James Ratcliff wrote:
Can we go back to a simpler distinction then: what are you defining 
Goal as?


I see the goal term as a higher level reasoning 'tool', 
wherein the body is constantly sending signals to our minds, but the 
goals are all created consciously or semi-consciously.


Are you saying we should partition the Top-Level goals into some 
form of physical body-imposed goals and other types, or 
do you think we should leave it up to a single Controller to 
interpret the signals coming from the body and form the goals?


In humans it looks to be the one way, but with AGI's it appears it 
would/could be another.


James

Charles D Hixson [EMAIL PROTECTED] wrote:

J...
Goals are important. Some are built-in, some are changeable. Habits
are also important, perhaps nearly as much so. Habits are initially
created to satisfy goals, but when goals change, or circumstances
alter,
the habits don't automatically change in synchrony.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php






This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Charles D Hixson

Mark Waser wrote:

Hi Bill,

...
   If storage and access are the concern, your own argument says that 
a sufficiently enhanced human can understand anything and I am at a 
loss as to why an above-average human with a computer and computer 
skills can't be considered nearly indefinitely enhanced.
The use of external aids doesn't allow one to increase the size of 
active RAM.  Usually this is no absolute barrier, though it can result 
in exponential slowdown.  Sometimes, however, I suspect that there are 
problems that can't be addressed because the working memory is too 
small.  This isn't a thing that I could prove (and probably von Neumann 
proved otherwise).  So take exponential slowdown to be what's involved, 
though it might be combinatorial slowdown for some classes of problems.  
This may not be an absolute barrier, but it is sufficient to effectively 
be called one, especially given the expected lifetime of the person 
involved.  (After one has lived a few thousand years, one might perceive 
this class of problems to be more tractable...but I'd bet they will be 
addressed sooner by other means.)


Consider that we apparently have special purpose hardware for rotating 
visual images.  Given that, there MUST be a limit to the resolution that 
this hardware possesses.  (Well, I suspect that it rotates vectorized 
images, and retranslates after rotation...but SOME pixelated image is 
being rotated (they've watched it on PET[?] scans).)  This implies that 
anything that requires more than that much detail to handle is fudged, 
or just isn't handled.  So the necessary enhancement would:

1) off-load the original image
2) rotate it, and
3) import the rotated image
Plausibly importation could be done via a 3-D monitor, though it might 
take a lot of study.  Exporting the original uncorrupted image, however, 
is beyond the current state of the art.
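
To make step 2 concrete, here is a toy Python version of the 
off-load / rotate / re-import pipeline, treating the image as a 
vectorized set of points.  Steps 1 and 3 are the hard part and are only 
stubs here; everything in it is illustrative.

import math

def offload_image():
    # Step 1 (hypothetical): export the mental image.  Here, just a triangle.
    return [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]

def rotate(points, degrees):
    # Step 2: rotate the vectorized image about the origin.
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def import_image(points):
    # Step 3 (hypothetical): present the rotated image back, e.g. on a
    # 3-D monitor.  Here we just print the coordinates.
    for x, y in points:
        print(f"({x:.2f}, {y:.2f})")

import_image(rotate(offload_image(), 90))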


I would argue that this is but one of a large class of problems that 
cannot be addressed by the current modes of enhancement.


   Regarding chess or Go masters -- while you couldn't point to a move 
and say you shouldn't have done that, I'm sure that the master could 
(probably in several instances) point to a move and say I wouldn't 
have done that and provided a better move (most often along with a 
variable-quality explanation of why it was a better move).

...   Mark

- Original Message - From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, December 02, 2006 2:31 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Charles D Hixson

Mark Waser wrote:

...
For me, yes, all of those things are good since they are on my list of 
goals *unless* the method of accomplishing them steps on a higher goal 
OR a collection of goals with greater total weight OR violates one of 
my limitations (restrictions).

...

If you put every good thing on your list of goals, then you will have 
a VERY long list.
I would propose that most of those items listed should be derived goals 
rather than anything primary.  And that the primary goals should be 
rather few.  I'm certain that three is too few.  Probably it should be 
fewer than 200.  The challenge is so phrasing them that they:

1) cover every needed situation
2) are short enough to be debuggable
They should probably be divided into two sets.  One set would be a list 
of goals to be aimed for, and the other would be a list of filters that 
had to be passed.


Think of these as the axioms on which the mind is being erected.  Axioms 
need to be few and simple; it's the theorems that are derived from them 
that get complicated.


N.B.:  This is an ANALOGY.  I'm not proposing a theorem prover as the 
model of an AI.
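
A rough sketch, in Python, of the two-set structure I'm proposing: a 
short list of primary goals to aim for, and a separate list of filters 
that any candidate action must pass.  The particular goals, filters, and 
weights are placeholders, not a proposal for what they should actually be.

PRIMARY_GOALS = {
    "keep humans informed": 1.0,
    "increase own competence": 0.5,
}

FILTERS = [
    lambda action: not action.get("irreversible", False),
    lambda action: action.get("resource_cost", 0) < 100,
]

def admissible(action):
    # Filters are hard constraints: every one must pass.
    return all(f(action) for f in FILTERS)

def score(action):
    # Goals are soft: weigh how much the action serves each primary goal.
    return sum(weight * action.get("serves", {}).get(goal, 0.0)
               for goal, weight in PRIMARY_GOALS.items())

candidate = {"serves": {"keep humans informed": 0.8}, "resource_cost": 10}
if admissible(candidate):
    print("score:", score(candidate))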


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Charles D Hixson
I don't know that I'd consider that an example of an uncomplicated 
goal.  That seems to me much more complicated than simple responses to 
sensory inputs.   Valuable, yes, and even vital for any significant 
intelligence, but definitely not at the minimal level of complexity. 

An example of a minimal goal might be to cause an extended period of 
inter-entity communication, or to find a recharging socket.  Note 
that the second one would probably need to have a hard-coded solution 
available before the entity was able to start any independent 
explorations.  This doesn't mean that as new answers were constructed 
the original might not decrease in significance and eventually be 
garbage collected.  It means that it would need to be there as a 
pre-written answer on the tabula rasa.  (I.e., the tablet can't really 
be blank.  You need to start somewhere, even if you leave and never 
return.)  For the first example, I was thinking of peek-a-boo.


Bob Mottram wrote:


Goals don't necessarily need to be complex or even explicitly 
defined.  One goal might just be to minimise the difference between 
experiences (whether real or simulated) and expectations.  In this way 
the system learns what a normal state of being is, and detects deviations.




On 21/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:


Bob Mottram wrote:


 On 17/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:

 A system understands a situation that it encounters if it
predictably
 acts in such a way as to maximize the probability of
achieving its
 goals in that situation.




 I'd say a system understands a situation when its internal
modeling
 of that situation closely approximates its main salient
features, such
 that the difference between expectation and reality is minimised.
 What counts as salient depends upon goals.  So for example I
could say
 that I understand how to drive, even if I don't have any detailed
 knowledge of the workings of a car.

 When young animals play they're generating and tuning their models,
 trying to bring them in line with observations and goals.
That sounds reasonable, but how are you determining the match of the
internal modeling to the main salient features?  I propose that you do
this based on its actions, and thus my definition.  I'll admit,
however, that this still leaves the problem of how to observe what its
goals are, but I hypothesize that it will be much simpler to examine the
goals in the code than to examine the internal model.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



This list is sponsored by AGIRI: http://www.agiri.org/email To 
unsubscribe or change your options, please go to: 
http://v2.listbox.com/member/?list_id=303 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Charles D Hixson

OK.
James Ratcliff wrote:


Have to amend that to acts or replies
I consider a reply an action.  I'm presuming that one can monitor the 
internal state of the program.
and it could react unpredictably depending on the human's level of 
understanding; if it sees a nice neat answer (like jumping 
through the window because the door was blocked) that the human wasn't aware 
of, or was surprised about, it would be equally good.
I'm a long way from an AGI, so I'm not seriously considering superhuman 
understanding.  That said, I'm proposing that you are running the system 
through trials.  Once it has learned a trial, we say it understands 
the trial if it responds correctly.  "Correctly" is defined in terms of 
the goals of the system rather than in terms of my goals.


And this doesn't cover the opposite of what other actions can be done, 
and what the consequences are; that is also important.

True.  This doesn't cover intelligence or planning, merely understanding.


And lastly, this is for a situation only; we also have the more 
general case about understanding a thing, where, when it sees, or has, 
or is told about a thing, it understands it if it knows about general 
properties, and actions that can be done with, or using, the thing.
You are correct.  I'm presuming that understanding is defined in a 
situation, and that it doesn't automatically transfer from one situation 
to another.  (E.g., I understand English.  Unless the accent is too 
strong.  But I don't understand Hindi, though many English speakers do.)


The main thing being we can't and aren't really defining understanding, 
but the effect of the understanding, either in action or in a 
language reply.
Does understanding HAVE any context-free meaning?  It might, but I don't 
feel that I could reasonably assert this.  Possibly it depends on the 
precise definition chosen.  (Consider, e.g., that one might choose to 
use the word "meaning" to refer to the context-free component of 
"understanding".  Would or would not this be a reasonable use of the 
language?  To me this seems justifiable, but definitely not self-evident.)


And it should be a level of understanding, not just a y/n.
Probably, but this might depend on the complexity of the system that one 
was modeling.  I definitely have a partial understanding of "How to 
program an AGI".  It's clearly less than 100%, and is probably greater 
than 1%.  It may also depend on the precision with which one is 
speaking.  To be truly precise one would doubtless need to decompose the 
measure along several dimensions...and it's not at all clear that the 
same dimensions would be appropriate in every context.  But this is 
clearly not the appropriate place to start.


So if one AI saw an apple and said, "I can throw / cut / eat it", and 
weighted those ideas, and the second had the same list, but weighted 
"eat" as more likely, and/or knew people sometimes cut it before eating 
it, then the second AI would understand to a higher level.
Likewise if instead one knew you could bake an apple pie, or that apples 
came from apple trees, he would understand more.
No.  That's what I'm challenging.  You are relating the apple to the 
human world rather than to the goals of the AI.


So it starts looking like a knowledge test then.

What you are proposing looks like a knowledge test.  That's not what I mean.


Maybe we could extract simple facts from wiki, and start creating a 
test there, then add in more complicated things.


James

Charles D Hixson [EMAIL PROTECTED] wrote:

Ben Goertzel wrote:
 ...
 On the other hand, the notions of intelligence and understanding
 and so forth being bandied about on this list obviously ARE intended
 to capture essential aspects of the commonsense notions that
share the
 same word with them.
 ...
 Ben
Given that purpose, I propose the following definition:
A system understands a situation that it encounters if it predictably
acts in such a way as to maximize the probability of achieving its
goals in that situation.

I'll grant that it's a bit fuzzy, but I believe that it captures the
essence of the visible evidence of understanding. This doesn't say
what
understanding is, merely how you can recognize it.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php







Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Charles D Hixson

Ben Goertzel wrote:

...
On the other hand, the notions of intelligence and understanding
and so forth being bandied about on this list obviously ARE intended
to capture essential aspects of the commonsense notions that share the
same word with them.
...
Ben

Given that purpose, I propose the following definition:
A system understands a situation that it encounters if it predictably 
acts in such a way as to maximize the probability of achieving its 
goals in that situation.


I'll grant that it's a bit fuzzy, but I believe that it captures the 
essence of the visible evidence of understanding.  This doesn't say what 
understanding is, merely how you can recognize it.
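
One way to operationalize that, as a sketch only: run the system through 
repeated trials of the situation and measure how often its chosen action 
is one that maximizes the probability of achieving its own goals.  The 
choose() interface and the goal_success_prob() function below are 
assumptions for the sake of illustration; this measures the visible 
evidence, not understanding itself, and it yields a level rather than a 
yes/no.

def understanding_score(system, situation, actions, goal_success_prob, trials=100):
    """Fraction of trials in which the system picks a goal-maximizing action.

    system            - anything with a choose(situation) method (assumed)
    actions           - the actions available in this situation
    goal_success_prob - P(the system achieves its goals | action, situation),
                        estimated by the experimenter (assumed measurable)
    """
    best = max(goal_success_prob(a, situation) for a in actions)
    hits = 0
    for _ in range(trials):
        chosen = system.choose(situation)
        # Count the trial if the chosen action is (essentially) goal-maximizing.
        if goal_success_prob(chosen, situation) >= best - 1e-9:
            hits += 1
    return hits / trials   # 1.0 means it predictably acts so as to maximize its goals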


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-05 Thread Charles D Hixson

Richard Loosemore wrote:

...
This is a question directed at this whole thread, about simplifying 
language to communicate with an AI system, so we can at least get 
something working, and then go from there


This rationale is the very same rationale that drove researchers into 
Blocks World programs.  Winograd and SHRDLU, etc.  It was a mistake 
then:  it is surely just as much of a mistake now.

Richard Loosemore.
-
Not surely.  It's definitely a defensible position, but I don't see any 
evidence that it has even a 50% probability of being correct.


Also I'm not certain that SHRDLU and Blocks World were mistakes.  They 
didn't succeed in their goals, but they remain as important markers.  At 
each step we have limitations imposed by both our knowledge and our 
resources.  These limits aren't constant.  (P.S.:  I'd throw Eliza into 
this same category...even though the purpose behind Eliza was different.)


Think of the various approaches taken as being experiments with the user 
interface...since that's a large part of what they were.  They are, of 
course, also experiments with how far one can push a given technique 
before encountering a combinatorial explosion.  People don't seem very 
good at understanding that intuitively.  In neural nets this same 
problem re-appears as saturation: the point at which, as you learn new 
things, old things become fuzzier and less certain.  This may have some 
relevance to the way that people are continually re-writing their 
memories whenever they remember something.
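
A toy illustration of that saturation effect (purely illustrative; the 
patterns and learning rate are arbitrary): a single linear neuron trained 
with the delta rule on pattern set A, then on pattern set B.  After 
learning B, recall of A gets fuzzier, because both sets have to share the 
same few weights.

def train(weights, patterns, epochs=50, rate=0.1):
    # Delta-rule (LMS) training of a single linear unit, in place.
    for _ in range(epochs):
        for x, target in patterns:
            out = sum(w * xi for w, xi in zip(weights, x))
            err = target - out
            for i, xi in enumerate(x):
                weights[i] += rate * err * xi

def mean_abs_error(weights, patterns):
    return sum(abs(t - sum(w * xi for w, xi in zip(weights, x)))
               for x, t in patterns) / len(patterns)

# Set A and set B cannot both be satisfied by the same three weights,
# so learning B necessarily degrades recall of A.
set_a = [([1, 0, 1], 1.0), ([0, 1, 1], -1.0)]
set_b = [([1, 0, 0], 1.0), ([0, 1, 0], 1.0)]

w = [0.0, 0.0, 0.0]
train(w, set_a)
print("error on A after learning A:", round(mean_abs_error(w, set_a), 3))
train(w, set_b)
print("error on A after learning B:", round(mean_abs_error(w, set_a), 3))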


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Charles D Hixson

John Scanlon wrote:

Ben,

   I did read your stuff on Lojban++, and it's the sort of language 
I'm talking about.  This kind of language lets the computer and the 
user meet halfway.  The computer can parse the language like any other 
computer language, but the terms and constructions are designed for 
talking about objects and events in the real world -- rather than for 
compilation into procedural machine code.


   Which brings up a question -- is it better to use a language based 
on term or predicate logic, or one that imitates (is isomorphic to) 
natural languages?  A formal language imitating a natural language 
would have the same kinds of structures that almost all natural 
languages have:  nouns, verbs, adjectives, prepositions, etc.  There 
must be a reason natural languages almost always follow the pattern of 
something carrying out some action, in some way, and if transitive, to 
or on something else.  On the other hand, a logical language allows 
direct  translation into formal logic, which can be used to derive all 
sorts of implications (not sure of the terminology here) mechanically.
The problem here is that when people use a language to communicate with 
each other they fall into the habit of using human, rather than formal, 
parsings.  This works between people, but would play hob with a 
computer's understanding (if it even had reasonable referents for most 
of the terms under discussion).


Also, notice one major difference between ALL human languages and 
computer languages:

Human languages rarely use many local variables; computer languages do.
Even the words that appear to be local variables in human languages are 
generally references, rather than variables.


This is (partially) because computer languages are designed to describe 
processes, and human languages are quasi-serial communication 
protocols.  Notice that thoughts are not serial, and generally not 
translatable into words without extreme loss of meaning.  Human 
languages presume sufficient understanding at the other end of the 
communication channel to reconstruct a model of what the original 
thought might have been.


So.  Lojban++ might be a good language for humans to communicate to an 
AI with, but it would be a lousy language in which to implement that 
same AI.  But even for this purpose the language needs a verifier to 
ensure that the correct forms are being followed.  Ideally such a 
verifier would paraphrase the statement that it was parsing and emit 
back to the sender either an error message, or the paraphrased 
sentence.  Then the sender would check that the received sentence 
matched in meaning the sentence that was sent.  (N.B.:  The verifier 
only checks the formal properties of the language to ensure that they 
are followed.  It has no understanding, so it can't check the meaning.)
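
A sketch of that verifier loop in Python, using a deliberately tiny toy 
grammar (subject verb object) in place of Lojban++.  The grammar and 
vocabulary are made up; the point is only the protocol: parse, check the 
formal properties, and echo back either an error or a structural 
paraphrase for the sender to compare against what they meant.

KNOWN_NOUNS = {"robot", "ball", "human"}
KNOWN_VERBS = {"sees", "moves", "holds"}

def verify(sentence):
    words = sentence.strip().lower().split()
    if len(words) != 3:
        return ("error", "expected exactly: subject verb object")
    subj, verb, obj = words
    if subj not in KNOWN_NOUNS or obj not in KNOWN_NOUNS:
        return ("error", f"unknown noun in {words!r}")
    if verb not in KNOWN_VERBS:
        return ("error", f"unknown verb {verb!r}")
    # The verifier checks form only; it has no understanding, so the
    # paraphrase is purely structural.
    return ("ok", f"{subj.upper()} is the one doing '{verb}' to {obj.upper()}")

print(verify("robot sees ball"))   # ('ok', "ROBOT is the one doing 'sees' to BALL")
print(verify("robot ponders"))     # ('error', 'expected exactly: subject verb object')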


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Charles D Hixson

BillK wrote:

On 11/1/06, Charles D Hixson wrote:

So.  Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI.  But even for this purpose the language needs a verifier to
ensure that the correct forms are being followed.  Ideally such a
verifier would paraphrase the statement that it was parsing and emit
back to the sender either an error message, or the paraphrased
sentence.  Then the sender would check that the received sentence
matched in meaning the sentence that was sent.  (N.B.:  The verifier
only checks the formal properties of the language to ensure that they
are followed.  It has no understanding, so it can't check the meaning.)




This discussion reminds me of a story about the United Nations
assembly meetings.
Normally when a representative is speaking, all the translation staff
are jabbering away in tandem with the speaker.
But when the German representative starts speaking they all fall
silent and sit staring at him.

The reason is that they are waiting for the verb to come along.   :)

Billk
Yeah, it wouldn't be ideal for rapid interaction.  But it would help 
people to maintain adherence to the formal rules, and to notice when 
they weren't.


If you don't have feedback of this nature, the language will evolve 
different rules, more closely similar to those of natural languages.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Is a robot a Turing Machine?

2006-10-02 Thread Charles D Hixson

Pei Wang wrote:

We all know that, in a sense, every computer system (hardware plus
software) can be abstractly described as a Turing machine.

Can we say the same for every robot? Why?

Reference to previous publications are also welcome.

Pei
The controller for the robot might be a Turing machine, but standard 
Turing machines don't include manipulators, etc.
I seem to remember that even I/O on a Turing machine was an infinitely 
long tape (one bit wide).  One can create a partial isomorphism between 
standard I/O devices and that tape, but nothing here connects the 
internals of the machine to either sensors or manipulators of the 
external world.  All robots have SOMETHING that allows them to sense and 
manipulate the external world on a soft real-time basis. 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Why so few AGI projects?

2006-09-13 Thread Charles D Hixson

Joshua Fox wrote:
I'd like to raise a FAQ: Why is so little AGI research and development 
being done?


...
Thanks,

Joshua
 
What proportion of the work that is being done do you believe you are 
aware of?  On what basis?
My suspicion is that most people on the track of something new tend to 
be rather close about it.  I'll agree that this probably slows down 
progress, but from an individual person or corporation's point of view 
it is quite sensible.  For one thing, it minimizes the risks of humiliation.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-09-05 Thread Charles D Hixson

Philip Goetz wrote:

...
Those companies don't make money off the software.  They sell products
and services.  The GPL is not successful at enabling people to make
money directly off software.  This is critical, because it takes a
large company and a large capital investment to make money selling
products and services.  This business model is useless to people like
us, who need a way to hack out some code and make money off the code.
You are right.  And I can understand why in such a case the GPL might 
not be the right license for you to use.
OTOH, if your license precludes my using your software in a GPL program, 
then it precludes me from using it.
There are tradeoffs everywhere.  Your choices are yours, and appear to 
be for reasons that are valid to you (rather than due to some misunderstanding).


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-09-04 Thread Charles D Hixson

Philip Goetz wrote:

On 8/30/06, Charles D Hixson [EMAIL PROTECTED] wrote:

... some snipping ...

 - Phil
The idea with the GPL is that if you want to also sell the program
commercially, you should additionally make it available under an
alternate license.  Some companies have been successful in this mode.
(Trolltech comes to mind, and also, I believe, MySQL.)  Descendants of
the GPL code are required to be GPL.  
No restrictions are placed as to what additional licenses you might
offer code for sale under that you have also offered as GPL, except for
"What can I get people to buy?".  Commonly this is used to allow those
who don't wish to agree to the terms of the GPL to purchase the right to
use the code under other terms.  Stipulating fees as a part of the
license is probably a bad idea.


Why is it a bad idea?  You said it was a bad idea, but you didn't say 
why.
It's a bad idea because licenses are relatively permanent, and prices 
fluctuate.  This is especially true if foreign currencies become 
involved, but it's true over time anyway.


Also, the assertion that "no restrictions are placed as to what
additional licenses you might offer code for sale under" is wrong; you
are expressly forbidden from adding additional restrictions.  I can't
If you own the copyright to some material, you can sell it to different 
people under different licenses.  (Note that this requires that you own 
the copyrights.  Not just some of them, but all of them.  This is one 
reason this approach is infrequent.)

parse the sentence saying that this is "to allow those who don't wish
to agree to the terms of the GPL to purchase the right to use the code
under other terms" - it seems to be saying that it is legal to
distribute GPLed code in a non-GPL way, which it isn't.
You can't distribute the GPL'd copy under non-GPL terms, but if you also 
bought a different license (say from TrollTech), that license might well 
permit you to, e.g., distribute binary-only copies of a modified 
original.  This would not be under the GPL at all.  The GPL prohibits 
this, so you need to purchase a separate license.  Actually, Trolltech 
requires that you do your development FROM SCRATCH under the non GPL 
license.


It is a good idea, for these reasons:

1. The money would be paid to the people who wrote the software.
Under the GPL model you're promoting, the authors get nothing.
The GPL does not prohibit you from selling software.  It merely 
prohibits you from prohibiting others from selling copies of their copy 
at any price they choose.

2. The GPL is unworkable.  It requires that the commercial code also
be released under GPL, and that the source code to everything added is
released.  It also requires the company to relinquish patent rights to
anything in the code.  This is a complete non-starter.
The GPL is currently successful.  Few companies are successful with 
their main product under the GPL license...though MySQL comes to mind, 
and I believe that SleepyCat is even successful distributing source code 
under the BSD license.  Examining actual cases proves your assertions 
incorrect.

...

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-09-01 Thread Charles D Hixson

Stephen Reed wrote:

...
Rather than cash payments I have in mind a scheme
similar to the pre-world wide web bulletin board
system in which FTP sites had upload and download
ratios.  If you wished to benefit from the site by
downloading, you had to maintain a certain level of
contributions via file uploads.  Analogously, if one
seeks to benefit from using a freely available
internet-based distributed AGI, then one should
contribute to it, either by donating some compute
cycles, or by spending some time to tutor it.

Cheers.
-Steve
  
But please don't block out people behind a NAT firewall.  I can't fairly 
download via bittorrent because it won't upload through NAT.


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-08-30 Thread Charles D Hixson

Philip Goetz wrote:

On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote:


An assumption that some may challenge is that AGI
s...
source license retain these benefits yet be safe?


I would rather see a license which made the software free
for non-commercial use, but (unlike the GNU licenses)
stipulated terms, and methods of deciding fees that
would be binding on the software authors, so that a
company could use the software for commercial uses,
provided they paid the stipulated fees to the software
authors.
...
- Phil 
The idea with the GPL is that if you want to also sell the program 
commercially, you should additionally make it available under an 
alternate license.  Some companies have been successful in this mode.  
(Trolltech comes to mind, and also, I believe, MySQL.)  Descendants of 
the GPL code are required to be GPL.  Descendants of code acquired under 
the alternate license can be whatever you choose.  The limitation of 
this approach is that it is common for the GPL branch to out-develop the 
non-GPL branch...so you must develop quite actively.  Also you must own 
the copyrights to all of the code that is used.  You can't add pieces 
from other GPL projects.  Etc.


No restrictions are placed as to what additional licenses you might 
offer code for sale under that you have also offered as GPL, except for 
"What can I get people to buy?".  Commonly this is used to allow those 
who don't wish to agree to the terms of the GPL to purchase the right to 
use the code under other terms.  Stipulating fees as a part of the 
license is probably a bad idea. 



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] AGI open source license

2006-08-28 Thread Charles D Hixson

Stephen Reed wrote:

I would appreciate comments regarding additional
constraints, if any, that should be applied to a
traditional open source license to achieve a free but
safe widespread distribution of software that may lead
to AGI.
...
  
My personal opinion is that the best license is the GPL.  Either version 
2 or 3...currently I can't choose between them (partially because 
version 3 is still being written).


Note that many large GPL projects are quite successful.  Consider, e.g., 
gcc.  The claim that because anyone CAN change the code, anyone WILL 
change the code is probably fallacious.  Most of those who try find that 
their changes are less than good.  Usually those who decide to create a 
fork find themselves being left behind by the pace of development.  So 
generally everyone sticks with the main tree...and perhaps submits 
changes that they think desirable into the project.  Occasionally a fork 
will be successful.  (X Window is no longer being developed from the 
XFree86 tree, e.g.)  But since the license is GPL, this doesn't make any 
difference.


How do you keep the bad guys from using it?  You keep on developing.  
Those who fork tend to fall behind, unless they get community support.


Now I'll admit that this is an idealized picture of the development 
process, but the outline is correct.  Keeping a project going takes a 
good manager...one who can herd cats.  It requires inspiring a degree 
of faith and trust in people who will be working without being paid.  
This means you've got to inspire them as well as get them to trust you.  
And you've got to articulate a vision of where the project should be 
headed next ("roadmap" is the common term), without stifling creativity.


P.S.:  Note that gcc has several chunks.  Each language has a largely 
separate implementation, but each needs to generate the same kind of 
intermediate representation.  This allows several essentially 
independent teams to each work separately.  As to just *how* 
independent... consider the gdc compiler ( 
http://sourceforge.net/projects/dgcc ).  This project is prevented by 
licensing constraints from having ANY direct connection to the rest of 
gcc.  Yet it can still be integrated into gcc by an end user.



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]

