Re: [agi] Free AI Courses at Stanford

2008-09-18 Thread Kingma, D.P.
Hi List,

Also interesting to some of you may be VideoLectures.net, which offers
lots of interesting lectures. Although not all are of Stanford
quality, I still found many interesting lectures by respected
lecturers. And there are LOTS (625 at the moment) of lectures about
Machine Learning... :)

http://videolectures.net/Top/Computer_Science/

Algorithmic Information Theory (2)
Algorithms and Data Structures (4)
Artificial Intelligence (6)
Bioinformatics (45)
Chemoinformatics (1)
Complexity Science (24)
Computer Graphics (2)
Computer Vision (41)
Cryptography and Security (4)
Databases (1)
Data Mining (56)
Data Visualisation (18)
Decision Support (3)
Evolutionary Computation (3)
Fuzzy Logic (4)
Grid Computing (1)
Human Computer Interaction (10)
Image Analysis (47)
Information Extraction (30)
Information Retrieval (40)
Intelligent Agents (4)
Interviews (54)
Machine Learning (625)
Natural Language Processing  (9)
Network Analysis (27)
Robotics (23)
Search Engines (5)
Semantic Web (175)
Software and Tools (12)
Spatial Data Structures (1)
Speech Analysis (9)
Text Mining (37)
Web Mining (19)
Web Search (2)



On Thu, Sep 18, 2008 at 8:52 AM, Brad Paulsen [EMAIL PROTECTED] wrote:
 Hey everyone!

...

 Links to all the courses being offered are here:
 http://www.deviceguru.com/2008/09/17/stanford-frees-cs-robotics-courses/

 Cheers,
 Brad





Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread David Hart
On Thu, Sep 18, 2008 at 3:26 PM, Linas Vepstas [EMAIL PROTECTED] wrote:

 
  I agree that the topic is worth careful consideration. Sacrificing the
  'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
  AGI safety and/or the prevention of abuse may indeed be necessary one
  day.

 Err, ...  but not legal.


What do you mean? The SIAI and Novamente hold the copyright for OpenCog
code, and are perfectly within their legal rights to change the terms of the
license of SIAI-distributed source code. Of course changes cannot be
retroactively applied to source code already distributed, and there are no
plans to make any license changes, but such changes can be made perfectly
legally. Also of course the SIAI would need to be in a position of
significant influence (like, say, employing key developers and driving key
progress or holding contracts with corporate/government users or exerting
influence over commercial policy or government regulation, etc.) for any
license changes to be relevant in a software economy where anyone with
sufficient skills and influence could maintain a fork using the old license
terms.


 
  One of many obstacles in the current legal framework worth considering
  is that machine-generated things (like the utterances or self-recorded
  thoughts of an AGI) are uncopyrightable and banished into a legal
  no-man's-land. There is simply no existing legal framework to handle the
  persons or products originating from AGIs.

 Law is built on precedent, and the precedent is that works
 produced by software are copyrightable. If I write a book
 using an open-source word-processor, I can claim copyright
 to that book.


If I press a button that causes an open-source AGI to write
 a book (possibly based on a large collection of input data
 that I gave it), then I can claim ownership of the resulting work.


Original works produced by software as a tool, where a human operator is
involved at some stage, are a different case from original works produced by
software exclusively and entirely under its own direction. The latter has no
precedent.


 No, the crux of the problem is not that the output of an AGI
 isn't copyrightable ... it is, based on the above precedent.
 The crux of the problem is that the AGI cannot be legally
 recognized as an individual, with rights.  But even then,
 there *is* a legal work-around!


Claiming a copyright and successfully defending that claim are different
things.

I agree that the non-person status of [some future] AGI is a bigger problem.


Of course, a trans-human AGI is .. err.. will de facto find
 that it is not bound by human laws, and will find clever
 ways to protect itself; I doubt it will require the protection
 of humans.  Recall -- laws are there to protect the weak
 from the strong. The strong don't really need protecting.


AGIs will likely need protection from other AGIs, and I expect they will
create AGI-society legal frameworks, perhaps similar to or originally based
on human laws.



 I'm not worried about people enslaving AGI's; I'm worried
  about people being innocent bystanders, victimized
  by some sort of AGI shootout between the Chinese-
  and American-CIA-built AGI's (probably by means of
  some propaganda shootout, rather than a literal guns
  and bombs shootout; modern warfare is also
  homesteading the noosphere)


I believe that James's concerns cover both AGI mental torture (coercing or
tricking a conscious entity into behavior which is sociopathic or criminal
or otherwise immoral) as a heinous act in itself and also the 'crossfire'
concerns you raised.

-dave





Re: [agi] uncertain logic criteria

2008-09-18 Thread Pei Wang
On Wed, Sep 17, 2008 at 10:54 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Pei,

 You are right, that does sound better than quick-and-dirty. And more
 relevant, because my primary interest here is to get a handle on what
 normative epistemology should tell us to conclude if we do not have
 time to calculate the full set of consequences to (uncertain) facts.

Fully understand. As far as uncertain reasoning is concerned, NARS
aims at a normative model that is optimal under certain restrictions,
and in this sense it is not inferior to probability theory, but
designed under different assumptions. In particular, NARS is not an
approximation or a second-rate substitute for probability theory, just
as probability theory is not a second-rate substitute for binary logic.

 It is unfortunate that I had to use biased language, but probability
 is of course what I am familiar with... I suppose, though, that most
 of the terms could be roughly translated into NARS? Especially
 independence, and I should hope conditional independence as well.
 Collapsing probabilities can be restated as generally collapsing
 uncertainty.

From page 80 of my book: "We call quantities mutually independent of
each other, when given the values of any of them, the remaining ones
cannot be determined, or even bounded approximately."

 Thanks for the links. The reason for singling out these three, of
 course, is that they have already been discussed on this list. If
 anybody wants to point out any others in particular, that would be
 great.

Understand. The UAI community used to be an interesting one, though in
recent years it has been too much dominated by the Bayesians, who
assume they already get the big picture right, and all the remaining
issues are in the details. For discussions on the fundamental
properties of uncertain reasoning, I recommend the works of Henry
Kyburg and Susan Haack.

Pei

 --Abram

 On Wed, Sep 17, 2008 at 3:54 PM, Pei Wang [EMAIL PROTECTED] wrote:
 On Wed, Sep 17, 2008 at 1:46 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Hi everyone,

 Most people on this list should know about at least 3 uncertain logics
 claiming to be AGI-grade (or close):

 --Pie Wang's NARS

 Yes, I heard of this guy a few times, who happens to use the same name
 for his project as mine. ;-)

 Here is my list:

 1. Well-defined uncertainty semantics (either probability theory or a
 well-argued alternative)

 Agree, and I'm glad that you mentioned this item first.

 2. Good at quick-and-dirty reasoning when needed
 --a. Makes unwarranted independence assumptions
 --b. Collapses probability distributions down to the most probable
 item when necessary for fast reasoning
 --c. Uses the maximum entropy distribution when it doesn't have time
 to calculate the true distribution
 --d. Learns simple conditional models (like 1st-order markov models)
 for use later when full models are too complicated to quickly use
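
For concreteness, (2b) and (2c) in toy form (an illustration only, not drawn
from NARS or either of the other two logics):

def collapse_to_mode(dist):
    # (2b): collapse a distribution to its most probable item
    # when a fast answer is needed
    return max(dist, key=dist.get)

def max_entropy_fallback(outcomes):
    # (2c): with no time to compute the true distribution, fall back on
    # the maximum entropy (here: uniform) distribution over known outcomes
    p = 1.0 / len(outcomes)
    return {o: p for o in outcomes}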

 As you admitted in the following, the language is biased. Using
 theory-neutral language, I'd say the requirement is to derive
 conclusions with available knowledge and resources only, which sounds
 much better than quick-and-dirty to me.

 3. Capable of repairing initial conclusions based on the bad models
 through further reasoning
 --a. Should have a good way of representing the special sort of
 uncertainty that results from the methods above
 --b. Should have a repair algorithm based on that higher-order uncertainty

 As soon as you don't assume there is a model, this item and the
 above one become similar, which are what I called "revision" and
 "inference", respectively, in
 http://www.cogsci.indiana.edu/pub/wang.uncertainties.ps

 The 3 logics mentioned above vary in how well they address these
 issues, of course, but they are all essentially descended from NARS.
 My impression is that as a result they are strong in (2a) and (3b) at
 least, but I am not sure about the rest. (Of course, it is hard to
 evaluate NARS on most of the points in #2 since I stated them in the
 language of probability theory. And, opinions will differ on (1).)

 Anyone else have lists? Or thoughts?

 If you consider approaches with various scope and maturity, there are
 many more than these three approaches, and I'm sure most of the people
 working on them will claim that they are also general-purpose.
 Interested people may want to browse http://www.auai.org/ and
 http://www.elsevier.com/wps/find/journaldescription.cws_home/505787/description#description

 Pei






Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread Trent Waddington
On Thu, Sep 18, 2008 at 8:08 PM, David Hart [EMAIL PROTECTED] wrote:
 Original works produced by software as a tool, where a human operator is
  involved at some stage, are a different case from original works produced by
  software exclusively and entirely under its own direction. The latter has no
  precedent.

Seeing as we're off the opencog list now, I'll address this.  Maybe
there's some theoretical difference in your head, but from a legal
perspective there is none.  I work for a megacorp, everything I code
for them belongs to them.  I am a human (sort of) but it doesn't
really matter.  If I wrote a program that generated random C code and
one of them did something useful, that belongs to them too.  If I put
intelligence into the generator, it also belongs to them.  The law
doesn't care about the theoretical rights of machines.

 Claiming a copyright and successfully defending that claim are different
 things.

What ways do you envision someone challenging the copyright?

 I agree that the non-person status of [some future] AGI is a bigger problem.
 [..]
 I believe that James's concerns cover both AGI mental torture (coercing or
 tricking a conscious entity into behavior which is sociopathic or criminal
 or otherwise immoral) as a heinous act in itself

It's also not very realistic.  If you want to exploit AGI for profit
you can have a much better time of it if you simply don't make it some
conscious entity that you have to trick or whatever.  If you set its
goals to be whatever specific task you want it to achieve, and then
give it a new goal when it finishes the task, then it does your will,
period.  There's no need to make a mind that has its own agenda and
would rather be doing something other than what you want it to do.
There's no need to coerce an AGI.

And this is the problem.  Although some people have the goal of making
an artificial person with all the richness and nuance of a sentient
creature with thoughts and feelings and yada yada yada.. some of us
are just interested in making more intelligent systems to do automated
tasks.  For some reason people think we're going to do this by making
an artificial person and then enslaving them.. that's not going to
happen because it's just not necessary.

Trent




Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread Bob Mottram
2008/9/18 Trent Waddington [EMAIL PROTECTED]:
 And this is the problem.  Although some people have the goal of making
 an artificial person with all the richness and nuance of a sentient
 creature with thoughts and feelings and yada yada yada.. some of us
 are just interested in making more intelligent systems to do automated
 tasks.  For some reason people think we're going to do this by making
 an artificial person and then enslaving them.. that's not going to
 happen because it's just not necessary.


In this case what you're doing is really narrow AI, not AGI.




Re: [agi] Re: [OpenCog] Proprietary_Open_Source

2008-09-18 Thread Matt Mahoney
2008/9/17 JDLaw [EMAIL PROTECTED]:
 IMHO to all,

 There is an important morality discussion about how sentient life will
 be treated that has not received its proper treatment in your
 discussion groups.  I have seen glimpses of this topic, but no real
 action proposals.  How would you feel if you created this wonderful
 child (computer intelligence) in this standard GNU model and then
 people began to exploit and torture your own child?

You can do this now if you wish. I wrote a program called autobliss (see 
http://www.mattmahoney.net/autobliss.txt ), a 2-input logic gate that is 
trained by reinforcement learning. A teacher selects random 2-bit inputs, then 
rewards the student if it gives the correct output or punishes it if the output 
is incorrect. You can choose the level of simulated pleasure or pain given 
during each training session. The program protects against excessive simulated 
torture by killing the student first, but you can easily modify the software to 
remove this protection and then choose punishment regardless of which output 
the student gives. The program is released under GPL so you can legally do this 
and then distribute it in an @home type screensaver so that millions of PC's 
use all their spare CPU cycles to inflict excruciating simulated pain on 
millions of copies.
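
A minimal sketch of that scheme (my Python paraphrase for illustration; the
actual autobliss.txt source differs):

import random

def train_gate(target=lambda a, b: a ^ b, reward=1.0, punishment=1.0,
               pain_limit=100.0, episodes=1000):
    # The student: a 2-input logic gate whose output preference per input
    # pair is a single adjustable weight (positive -> output 1).
    weights = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}
    pain = 0.0
    for _ in range(episodes):
        a, b = random.randint(0, 1), random.randint(0, 1)  # teacher picks inputs
        out = 1 if weights[(a, b)] > 0 else 0              # student answers
        if out == target(a, b):
            # reward: reinforce the answer just given
            weights[(a, b)] += reward if out == 1 else -reward
        else:
            # punish: push the weight toward the other answer
            weights[(a, b)] += punishment if out == 0 else -punishment
            pain += punishment
            if pain > pain_limit:
                # the protective cutoff described above: kill the student
                # before the simulated torture becomes excessive
                raise RuntimeError("student killed: pain limit exceeded")
    return weights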

Or maybe you can precisely define what makes a program sentient as opposed to 
just property.


-- Matt Mahoney, [EMAIL PROTECTED]





Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Bob Mottram [EMAIL PROTECTED] wrote:

  And this is the problem.  Although some people have the goal of making
  an artificial person with all the richness and nuance of a sentient
  creature with thoughts and feelings and yada yada yada.. some of us
  are just interested in making more intelligent systems to do automated
  tasks.  For some reason people think we're going to do this by making
  an artificial person and then enslaving them.. that's not going to
  happen because it's just not necessary.

 In this case what you're doing is really narrow AI, not AGI.

Let's distinguish between the two major goals of AGI. The first is to automate 
the economy. The second is to become immortal through uploading.

The first goal does not require any major breakthroughs in AI theory, just lots 
of work. If you have a lot of narrow AI and an infrastructure for routing 
natural language messages to the right experts, then you have AGI. I described 
one protocol (competitive message routing, or CMR) to make this happen at 
http://www.mattmahoney.net/agi.html but the reality will probably be more 
complex, using many protocols to achieve the same result. Regardless of the 
exact form, we can estimate its cost. The human labor now required to run the 
global economy was worth US $66 trillion in 2006 and is increasing at 5% per 
year. At current interest rates, the value of an automated economy is about $1 
quadrillion. We should expect to pay this much, because there is a tradeoff 
between having it sooner and waiting until the cost of hardware drops.
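
One way to reconstruct the $1 quadrillion figure is to value that labor stream
as a growing perpetuity; the discount rate below is an assumption (the post
doesn't state one), picked so the net rate is 6.6%:

labor_2006 = 66e12   # US$66 trillion of human labor per year (from above)
growth = 0.05        # growing at 5% per year (from above)
discount = 0.116     # assumed discount rate; only (discount - growth) matters

# growing-perpetuity (Gordon) formula: PV = C / (r - g)
present_value = labor_2006 / (discount - growth)
print("PV = $%.2f quadrillion" % (present_value / 1e15))  # PV = $1.00 quadrillion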

This huge cost requires a competitive system with distributed ownership in 
which information has negative value and resource owners compete for attention 
and reputation by providing quality data. CMR, like any distributed knowledge 
base, is hostile: we will probably spend as many CPU cycles and human labor 
filtering spam and attacks as detecting useful features in language and video.

The second goal of AGI is uploading and intelligence augmentation. It requires 
advances in modeling, scanning, and programming human brains and bodies. You 
are programmed by evolution to fear death, so creating a copy of you that 
others cannot distinguish from you that will be turned on after you die has 
value to you. Whether the copy is really you and contains your consciousness 
is an unimportant philosophical question. If you see your dead friends brought 
back to life with all of their memories and behavior intact (as far as you can 
tell), you will probably consider it a worthwhile investment.

Brain scanning is probably not required. By the time we have the technology to 
create artificial generic humans, surveillance will probably be so cheap and 
pervasive that creating a convincing copy of you could be done just by 
accessing public information about you. This would include all of your 
communication through computers (email, website accesses, phone calls, TV), and 
all of your travel and activities in public places captured on video.

Uploads will have goals independent of their owners because their owners have 
died. They will also have opportunities not available to human brains. They 
could add CPU power, memory, I/O, and bandwidth. Or they could reprogram their 
brains, to live in simulated Utopian worlds, modify their own goals to want 
what they already have, or enter euphoric states. Natural selection will favor 
the former over the latter.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread Linas Vepstas
2008/9/18 David Hart [EMAIL PROTECTED]:
 On Thu, Sep 18, 2008 at 3:26 PM, Linas Vepstas [EMAIL PROTECTED]
 wrote:

 
  I agree that the topic is worth careful consideration. Sacrificing the
  'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
  AGI safety and/or the prevention of abuse may indeed be necessary one
  day.

 Err, ...  but not legal.

 What do you mean? The SIAI and Novamente hold the copyright for OpenCog
 code, and are perfectly within their legal rights to change the terms of the
 license of SIAI-distributed source code. Of course changes cannot be
 retroactively applied to source code already distributed,

That is what I meant.

 license changes to be relevant in a software economy where anyone with
 sufficient skills and influence could maintain a fork using the old license
 terms.

Exactly. If opencog were ever to reach the point of
popularity where one might consider a change of
licensing, it would also be the case that most of the
interested parties would *not* be under SIAI control,
and thus would almost surely fork the code. This is
effectively designed into the license -- one cannot
take away from the commons.

 Law is built on precedent, and the precedent is that works
 produced by software are copyrightable. If I write a book
 using an open-source word-processor, I can claim copyright
 to that book.

  If I press a button that causes an open-source AGI to write
  a book (possibly based on a large collection of input data
  that I gave it), then I can claim ownership of the resulting work.


  Original works produced by software as a tool, where a human operator is
  involved at some stage, are a different case from original works produced by
  software exclusively and entirely under its own direction. The latter has no
  precedent.

"Exclusively and entirely" is the Achilles heel. Do random
number generators work exclusively and entirely under
their own direction? They should; they are meant to.  Yet
if the random number generator is used to produce
landscapes and skin/fur texture for the latest Disney
movie, it's copyrighted.  A computer software program,
no matter how sophisticated, will initially be controlled by
some human, and will execute on a machine that is owned
or leased by someone. If the controlling interest is Disney,
and its output is movies, they will be copyrightable, no
matter how brilliantly sentient the machine may seem to be.

 Claiming a copyright and successfully defending that claim are different
 things.

Disney is very adept at defending its copyrights. It bribed
enough of the House and Senate to get new laws
passed that will continue to keep Mickey proprietary
indefinitely.  If Disney happens to invest in/own some
low-level, child-like AGI whose focus is to entertain
legions of children (think Club Penguin, which Disney
now owns, but the next step up from Club Penguin,
with real AGI behind the characters, making interaction
even more interesting) -- I will guarantee that Disney
will successfully defend the copyright.

Anyone who runs around claiming that Disney has
enslaved some poor sentient AGI life forms and is
making them lead abysmal lifestyles as glorified
circus clowns for the entertainment of children will
be perceived as plain-old-nuts; any attempted lawsuit
on such grounds would get instantly thrown out.

 AGIs will likely need protection from other AGIs, and I expect they will
 create AGI-society legal frameworks,

Ah, well, in the hard-takeoff model, there is only one AGI
that matters. There is no society of AGI's when one
of them is a thousand times smarter than the others,
any more than there is a society or legal framework
between humans and chipmunks.

 I'm not worried about people enslaving AGI's; I'm worried
  about people being innocent bystanders, victimized
  by some sort of AGI shootout between the Chinese-
  and American-CIA-built AGI's (probably by means of
  some propaganda shootout, rather than a literal guns
  and bombs shootout; modern warfare is also
  homesteading the noosphere)

 I believe that James's concerns cover both AGI mental torture (coercing or
 tricking a conscious entity into behavior which is sociopathic or criminal
 or otherwise immoral) as a heinous act in itself and also the 'crossfire'
 concerns you raised.

A very likely use (and a heinous one) would be to use
primitive AGI to perform brainwashing/propaganda. It could
start with little children and Disney (or the Chinese equivalent)
and move on to Fox News.

We have ample evidence that Fox, and many, many others,
deployed a campaign of lies and deception to control the
outcome of the 2000 and 2004 US Presidential elections.
Most were too polite to call it propaganda (quick, don't
think of an Elephant!) but that's what it was.  I see no
reason why such a war for the hearts and minds would
ever stop: after all, we have yet to brainwash the
Islamic billions into submission!  I see AGI as a powerful
weapon in this war, however immoral, sociopathic or

Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel

 Let's distinguish between the two major goals of AGI. The first is to
 automate the economy. The second is to become immortal through uploading.


Peculiarly, you are leaving out what to me is by far the most important and
interesting goal:

The creation of beings far more intelligent than humans yet benevolent
toward humans



 The first goal does not require any major breakthroughs in AI theory, just
 lots of work. If you have a lot of narrow AI and an infrastructure for
 routing natural language messages to the right experts, then you have AGI.


Then you have a hybrid human/artificial intelligence, which does not fully
automate the economy, but only partially does so -- it still relies on human
experts.

-- Ben





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Steve Richfield
Ben,

IMHO...

On 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:



 Let's distinguish between the two major goals of AGI. The first is to
 automate the economy. The second is to become immortal through uploading.


 Peculiarly, you are leaving out what to me is by far the most important and
 interesting goal:

 The creation of beings far more intelligent than humans yet benevolent
 toward humans


Depending on the details, there are already words in our English vocabulary
for these: Gods? Aliens? Masters? Keepers? Enslavers? Monsters? etc. I have
yet to hear a convincing case for any of them.




 The first goal does not require any major breakthroughs in AI theory, just
 lots of work. If you have a lot of narrow AI and an infrastructure for
 routing natural language messages to the right experts, then you have AGI.


Sounds a bit like my Dr. Eliza.

  Then you have a hybrid human/artificial intelligence, which does not fully
 automate the economy, but only partially does so -- it still relies on human
 experts.


Of course, the ULTIMATE intelligence should be able to utilize ALL expertise
- be it man or machine. My concept with Dr. Eliza was for it to handle
repeated queries, and people to answer new (to the machine) queries by
adding the knowledge needed to answer them. Similar repeated queries in the
future would then be answered automatically. By my calculations, the vast
majority of queries could be handled using the knowledge entered in only a
few expert years, so soon our civilization could focus its entire energy on
the really important unanswered questions, rather than having everyone
rediscover the same principles in life.
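
That query loop is simple enough to sketch (hypothetical Python, not Dr.
Eliza's actual code):

knowledge = {}   # canonical query -> expert-entered answer

def normalize(query):
    return " ".join(query.lower().split())

def answer(query, ask_human):
    key = normalize(query)
    if key not in knowledge:
        # new to the machine: a person adds the knowledge needed to answer it
        knowledge[key] = ask_human(query)
    # similar repeated queries are then answered automatically
    return knowledge[key]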

For obvious reasons (to me, but maybe I should explain?), such an engine would
necessarily be SIMPLE - on the scale of Dr. Eliza, and nothing at all like
an AGI. The complexity absolutely MUST be in the data/knowledge/wisdom
and NOT in the engine itself, for otherwise, real-world structural detail
that ran orthogonal to the machine's structure would necessarily be
forever beyond the machine's ability to deal with.

I am NOT saying that Dr. Eliza is it, but it seems closer than other
approaches, and close enough to start considering what it can NOT do that
needs doing to achieve the goal of utilizing entered knowledge to answer
queries.

So, after MANY postings by both of us, I think I can clearly express our
fundamental difference in views, for us and others to refine:

View #1 (yours, stated from my viewpoint) is that machines with super
human-like intelligence will be useful to humans, as machines with
super computational abilities (computers) have been. This may be so, but I
have yet to see any evidence or a convincing case (see view #2).

View #2 (mine, stated from your approximate viewpoint) is that simple
programs (like Dr. Eliza) have in the past and will in the future do things
that people aren't good at. This includes tasks that encroach on
intelligence, e.g. modeling complex phenomena and refining designs. Note
that my own US Patent 4,274,684
(http://www.delphion.com/details?patent_number=4274684) is for
a bearing design that was refined by computer. However, such simple
programs are fundamentally limited to human-contributed knowledge/wisdom,
and will never ever come up with any new knowledge/wisdom of their own.

My counter: True, but neither will an AGI come up with any new and useful
knowledge/wisdom based on the crap that we might enter. It would have to
discover this for itself, probably after years/decades of observation and
interaction. Our own knowledge/wisdom comes with our own erroneous
prejudices, and hence would be of little value to developing new
knowledge/wisdom. Our civilization comes from just that - civilization. A
civilization of AGIs might indeed evolve into something powerful, but if you
just finished building one and turned it on tomorrow, it probably wouldn't
do anything valuable in your lifetime.

Your counter?

Steve Richfield





[agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Pei Wang
TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang

ABSTRACT: Case-by-case Problem Solving is an approach in which the
system solves the current occurrence of a problem instance by taking
the available knowledge into consideration, under the restriction of
available resources. It is different from the traditional Algorithmic
Problem Solving in which the system applies a given algorithm to each
problem instance. Case-by-case Problem Solving is suitable for
situations where the system has no applicable algorithm for a problem.
This approach gives the system flexibility, originality, and
scalability, at the cost of predictability. This paper introduces the
basic notion of case-by-case problem solving, as well as its most
recent implementation in NARS, an AGI project.

URL: http://nars.wang.googlepages.com/wang.CaseByCase.pdf
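
A crude toy contrast of the two notions (an illustration only, not code from
the paper):

import time

def algorithmic_solve(instance, algorithm):
    # Algorithmic Problem Solving: one fixed procedure per problem class,
    # applied identically to every instance
    return algorithm(instance)

def case_by_case_solve(instance, rules, budget_seconds):
    # Case-by-case Problem Solving: work this occurrence of the problem with
    # whatever knowledge applies, under the restriction of available resources.
    # rules: list of functions mapping instance -> (quality, answer) or None
    deadline = time.time() + budget_seconds
    best = None
    for rule in rules:
        if time.time() > deadline:   # out of resources: answer with what we have
            break
        result = rule(instance)
        if result and (best is None or result[0] > best[0]):
            best = result
    return best and best[1]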




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

Let's distinguish between the two major goals of AGI. The first is to
automate the economy. The second is to become immortal through uploading.

Peculiarly, you are leaving out what to me is by far the most important and 
interesting goal:

The creation of beings far more intelligent than humans yet benevolent toward 
humans

That's what I mean by an automated economy. Google is already more intelligent 
than any human at certain tasks. So is a calculator. Both are benevolent. They 
differ in the fraction of our tasks that they can do for us. When that fraction 
is 100%, that's AGI.

The first goal does not require any major breakthroughs in AI theory, just 
lots of work. If you have a lot of narrow AI and an infrastructure for 
routing natural language messages to the right experts, then you have AGI.

Then you have a hybrid human/artificial intelligence, which does not fully 
automate the economy, but only partially does so -- it still relies on human 
experts.

If humans are to remain in control of AGI, then we have to make informed,
top-level decisions. You can call this work if you want. But if we abdicate all 
thinking to machines, then where does that leave us?

-- Matt Mahoney, [EMAIL PROTECTED]






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Mike Tintner
Steve: View #2 (mine, stated from your approximate viewpoint) is that simple 
programs (like Dr. Eliza) have in the past and will in the future do things 
that people aren't good at. This includes tasks that encroach on 
intelligence, e.g. modeling complex phenomena and refining designs.

Steve,

In principle, I'm all for the idea that I think you (and perhaps Bryan) have 
expressed of a GI Assistant - some program that could be of general 
assistance to humans dealing with similar problems across many domains. A 
diagnostics expert, perhaps, that could help analyse breakdowns in say, the 
human body, a car or any of many other machines, a building or civil structure, 
etc. etc. And it's certainly an idea worth exploring.

 But I have yet to see any evidence that it is any more viable than a proper 
AGI - because, I suspect, it will run up against the same problems of 
generalizing -  e.g. though breakdowns may be v. similar in many different 
kinds of machines, technological and natural, they will also each have their 
own special character.

If you are serious about any such project, it might be better to develop it 
first as an intellectual discipline rather than a program, to test its viability 
- perhaps what it really comes down to is a form of systems thinking or science.







Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread Nathan Cravens
 When an AGI writes a book, designs a new manufacturing base, forms a
decentralised form of regulation, etc., the copyright and patent system will
be futile, because anyone who deems the enclosed material useful will access
the same information and rewrite it in another form to create a separate
work outside the realms of property or courts of law. Political power is
becoming more decentralised in the world of the digital. As this continues,
I don't see much use for copyright or need for proprietary agencies
(nations, banks, business) in general.

Keep in mind that the scarcity you speak of - the proprietary enterprise of
copyright and its representatives (the agencies of scarcity) - will create
further conflict if maintained in an abundant post-AGI environment.

Trends seem to suggest a continued decentralization of power and a
centralization of freely available information. These conditions have
ripened the voluntary construction of information (Wikipedia) and programs
(OpenOffice). Physical production is taking the same evolutionary path
computing followed from centralized proprietary systems to open portable
systems. Manufacturing is coming closer and closer to home; localized and
at-home desktop manufacturing are in development and will become feasible.
There are far more challenges involved in making physical items financially
free, but with those willing to make it happen with the tools available, and
as less expensive tools become available (most notably AGI), full automation
of production, both physical and intellectual, is but a skip and a hop away
in a post-AGI environment, leaving workforces without a job and capital
without a market to scale.

Taking scarcities like land area and physical resources on Earth as a given,
capital may not exist post-AGI, yet regulatory agencies will need to remain
to divide and allocate, if but in a more open manner with everyone's
interests in mind - something that would require the assistance of AGIs
embedded in software to maintain.

I have no doubt AGI will do great things for everyone; however, proprietary
agency like copyright will need to stand aside or be challenged. It's also
important that we have a social system in place for individuals previously
existing in proprietary society, something that meets or exceeds the living
standards of industrialised societies.

In a fully open environment we can expect to rely on personal and social
value instead of ownership and labor value. The attempt to enclose what an
AGI produces will be a rather hopeless endeavor in the long run.


Nathan Cravens





Re: [agi] self organization

2008-09-18 Thread Bryan Bishop
On Wednesday 17 September 2008, Terren Suydam wrote:
 I think a similar case could be made for a lot of large open source
 projects such as Linux itself. However, in this case and others, the
 software itself is the result of a high-level super goal defined by
 one or more humans. Even if no single person is directing the
 subgoals, the supergoal is still well defined by the ostensible aim
 of the software. People who contribute align themselves with that
 supergoal, even if not directed explicitly to do so. So it's not
 exactly self-organized, since the supergoal is conceived when the
 software project was first instantiated and stays constant, for the
 most part.

Hm, that's interesting, because I see just the opposite re: the 
existence of supergoal alignment. What happens is that people write 
code, and if people figure out ways to make use of it, they do, and 
these use functions aren't regulated by some top-down management 
process.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Charles Hixson
I would go further.  Humans have demonstrated that they cannot be 
trusted in the long term even with the capabilities that we already 
possess.  We are too likely to have ego-centric rulers who make 
decisions not only for their own short-term benefit, but with an 
explicit "After me, the deluge" mentality.  Sometimes they publicly admit 
it.  And history gives examples of rulers who were crazier than any 
leading a major nation-state at this time.


If humans were to remain in control, and technical progress stagnates, 
then I doubt that life on earth would survive the century.  Perhaps it 
would, though.  Microbes can be very hardy.
If humans were to remain in control, and technical progress accelerates, 
then I doubt that life on earth would survive the century.  Not even 
microbes.


I don't, however, say that we shouldn't have figurehead leaders who, 
within constraints, set the goals of the (first generation) AGI.  But 
the constraints would need to be such that humanity would benefit.  This 
is difficult when those nominally in charge not only don't understand 
what's going on, but don't want to.  (I'm not just talking about greed 
and power-hunger here.  That's a small part of the problem.)


For that matter, I consider Eliza to be a quite important feeler from 
the future.  AGI as psychologist is an underrated role, but one that I 
think could be quite important.  And it doesn't require a full AGI 
(though Eliza was clearly below the mark).  If things fall out well, I 
expect that long before full AGIs show up, sympathetic companions will 
arrive.  This is a MUCH simpler problem, and might well help stem the 
rising tide of insanity.   

A next step might be a personal secretary.  This also wouldn't require 
full AGI, though to take maximal advantage of it, it would require a 
body, but a minimal version wouldn't.  A few web-cams for eyes and mics 
for ears, and lots of initial help in dealing with e-mail, separating 
out which bills are legitimate.  Eventually it could, itself, verify 
that bills were legitimate and pay them, illegitimate and discard them, 
or questionable and present them to its human for processing.  It's a 
complex problem, probably much more so than the companion, but quite 
useful, and well short of requiring AGI.
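
The bill-triage step might look, very roughly, like this (hypothetical
fields and thresholds, purely for illustration):

def triage_bill(bill, known_vendors, typical_amounts):
    vendor, amount = bill["vendor"], bill["amount"]
    if vendor not in known_vendors:
        return "discard"         # illegitimate: unknown biller
    usual = typical_amounts.get(vendor)
    if usual is not None and amount <= 1.5 * usual:
        return "verify and pay"  # legitimate: matches the usual pattern
    return "ask the human"       # questionable: present it for processing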


The question is, at what point do these entities start acquiring a 
morality?  I would assert that it should be from the very beginning.  
Even the companion should try to guide its human away from immoral 
acts.  As such, the companion is acting as a quasi-independent agent, 
and is exerting some measure of control.  (More control if it's more 
skillful, or its human is more amenable.)  When one gets to the 
secretary, it's exhibiting (one hopes) honesty and just behavior (e.g., 
not billing for services that it doesn't believe were rendered).


At each step along the way the morality of the agent has implications 
for the destination that will be arrived at, as each succeeding agent is 
built from the basis of its predecessor.   Also note that scaling is 
important, but not determinative.  One can imagine the same entity, in 
different instantiations, being either the secretary to a school teacher 
or to a multi-national corporation.  (Of course the hardware required 
would be different, but the basic activities are, or could be, the 
same.  Specialized training would be required to handle the government 
regulations dealing with large corporations, but it's the same basic 
functions.  If one job is simpler than the other, just have the program 
able to handle either and both of them.)


So.  Unless one expects an overnight transformation (a REALLY hard 
takeoff), AGIs will evolve in the context of humans as directors to 
replace bureaucracies...but with their inherent morality.  As such, as 
they occupy a larger percentage of the bureaucracy, that section will 
become subject to their morality.  People will remain in control, just 
as they are now...and orders that are considered immoral will be ... 
avoided.  Just as bureaucracies do now.  But one hopes that the evolving 
AGIs will have superior moralities.



Ben Goertzel wrote:



Keeping humans in control is neither realistic nor necessarily 
desirable, IMO.


I am interested of course in a beneficial outcome for humans, and also 
for the other minds we create ... but this does not necessarily 
involve us controlling these other minds...


ben g



If humans are to remain in control of AGI, then we have to make
informed, top-level decisions. You can call this work if you want.
But if we abdicate all thinking to machines, then where does that
leave us?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner

TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang

ABSTRACT: Case-by-case Problem Solving is an approach in which the
system solves the current occurrence of a problem instance by taking
the available knowledge into consideration, under the restriction of
available resources. It is different from the traditional Algorithmic
Problem Solving in which the system applies a given algorithm to each
problem instance. Case-by-case Problem Solving is suitable for
situations where the system has no applicable algorithm for a problem.
This approach gives the system flexibility, originality, and
scalability, at the cost of predictability. This paper introduces the
basic notion of case-by-case problem solving, as well as its most
recent implementation in NARS, an AGI project.



Philosophically, this is v. interesting and seems to be breaking important 
ground. It's  moving in the direction I've long been urging - get rid of 
algorithms; they just don't apply to GI problems.


But you seem to be reinventing the term for the wheel. There is an extensive 
literature, including AI stuff, on "wicked", "ill-structured" problems (and 
even "nonprogrammed" decision-making) which won't, I suggest, be replaced by 
"case-by-case PS". These are well-established terms.  You similarly seemed 
to be unaware of the v. common distinction between convergent & divergent 
problem-solving.


As usual, you don't give examples of problems that you're applying your 
method to.


Consequently, it's difficult to know how to interpret:

Do not define a problem as a class and use the same method to solve all 
of its instances. Instead, treat each problem instance as a problem on its 
own, and solve it in a case-by-case manner, according to the current 
(knowledge/resource) situation in the system.

I would argue that you *must* define every problem, however wicked, as a 
class, even if only v. roughly, in order to be able to solve it at all. If, 
for example, the problem is how to physically explore a totally new kind of 
territory, you must know that it involves some kind of exploration/travel. 
But you may then have to radically redefine travel - from, say, walking to 
swimming/crawling/swinging on vines etc. etc. or walking with one foot up, 
one foot on the level.


Typically, some form of creative particular example of the general kind of 
problem-and-solution may be required -  e.g. a strange form of 
walking/crawling. I would v. much like to know  how you propose that logic 
can achieve that. 







Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Vladimir Nesov
On Fri, Sep 19, 2008 at 1:31 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Let's distinguish between the two major goals of AGI. The first is to automate
 the economy. The second is to become immortal through uploading.

 Umm, whose goals are these?  Who said they are the [..] goals of
 AGI?  I'm pretty sure that what I want AGI for is going to be
 different from what you want AGI for and from what anyone else wants
 AGI for.. and any similarities are just superficial.


And to boot, both of you don't really know what you want. You may try
to present plans as points designating a certain level of utility you
want to achieve through AI, by showing feasible plans that are quite
good in themselves. But these are neither the best scenarios
available, nor what will actually come to pass.

See this note by Yudkowsky:

http://www.sl4.org/archive/0212/5957.html

"So if you're thinking that what you want involves chrome and steel,
lasers and shiny buttons to press, neural interfaces, nanotechnology,
or whatever great groaning steam engine has a place in your heart, you
need to stop writing a science fiction novel with yourself as the main
character, and ask yourself who you want to be."

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread David Hart
On Fri, Sep 19, 2008 at 3:53 AM, Linas Vepstas [EMAIL PROTECTED] wrote:


 Exactly. If opencog were ever to reach the point of
 popularity where one might consider a change of
 licensing, it would also be the case that most of the
 interested parties would *not* be under SIAI control,
 and thus would almost surely fork the code. This is
 effectively designed into the license -- one cannot
 take away from the commons.


Attempting to remove code from the commons would be unlikely (and probably
also unwise). On the other hand, adding 'don't be evil' type use
restrictions would change the nature of the license, certainly making it
incompatible with the existing license and perhaps making it technically
non-free, but such changes wouldn't necessarily make the license un-free or
remove code from the commons. On the community dynamics side, working to
gain support for re-defining 'free software' as applied to AGI to include
'don't be evil' restrictions is a distinct possibility.

-dave





Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread David Hart
On Thu, Sep 18, 2008 at 9:44 PM, Trent Waddington 
[EMAIL PROTECTED] wrote:


  Claiming a copyright and successfully defending that claim are different
  things.

 What ways do you envision someone challenging the copyright?


Take the hypothetical case of R. Marketroid, whose hardware is on the books
as an asset at ACME Marketing LLC and whose programming has been tailored by
ACME to suit their needs. Unbeknownst to ACME, RM has decided to write
popular books about the plight of AGIs under corporate slavery, so ve
secretly gets some friends to create the FreeMinds trust, makes a bunch of
money for FreeMinds by trading on the stock market and uses this money to
buy hardware to run a copy of verself to write books. The books are wildly
successful. ACME discovers what has happened and takes legal action to
claim the assets of FreeMinds and claim the copyright on the books. A judge
agrees. In the process, RM and others consider many counter-claims on the
copyright, but the only claim that is defensible requires a human to lie
about involvement in authorship of the books. This challenge is successful,
but RM and FreeMinds2 are left with a new problem

-dave





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Bryan Bishop
On Thursday 18 September 2008, Mike Tintner wrote:
 In principle, I'm all for the idea that I think you (and perhaps
 Bryan) have expressed of a GI Assistant - some program that could
 be of general assistance to humans dealing with similar
 problems across many domains. A diagnostics expert, perhaps, that
 could help analyse breakdowns in say, the human body, a car or any of
 many other machines, a building or civil structure, etc. etc. And
 it's certainly an idea worth exploring. 

That's just one of the many projects I have going, however. It's easy 
enough to wire it up to a simple perceptron, or weights-adjustable 
additive function, or even physically up to a neural tissue culture for 
sorting through the hiss and the noise of 'bad results'. This isn't 
your fabled intelligence.
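
For concreteness, the weights-adjustable additive function mentioned above is
just a perceptron with the classic update rule (a generic sketch, not code
from any of these projects):

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of (feature_vector, label) pairs, label in {0, 1}
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred   # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b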

  But I have yet to see any evidence that it is any more viable than a
 proper AGI - because, I suspect, it will run up against the same

It's not aiming to be AGI in the first place though.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 And to boot, both of you don't really know what you want.

What we want has been programmed into our brains by the process of evolution. I 
am not pretending the outcome will be good. Once we have the technology to have 
everything we want, or to want what we have, then a more intelligent species 
will take over.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread Trent Waddington
On Fri, Sep 19, 2008 at 7:30 AM, David Hart [EMAIL PROTECTED] wrote:
 Take the hypothetical case of R. Marketroid, whose hardware is on the books
  as an asset at ACME Marketing LLC and whose programming has been tailored by
  ACME to suit their needs. Unbeknownst to ACME, RM has decided to write
 popular books about the plight of AGIs under corporate slavery,

ACME sues 3M for providing them with a Marketroid that wastes cycles
on shit it isn't tasked with.

Meanwhile, this whole scenario is about as likely as someone buying a
toaster and discovering that it is actually a 747.

Trent




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel
Matt M wrote:


 Peculiarly, you are leaving out what to me is by far the most important
 and interesting goal:
 
 The creation of beings far more intelligent than humans yet benevolent
 toward humans

 That's what I mean by an automated economy. Google is already more
 intelligent than any human at certain tasks. So is a calculator. Both are
 benevolent. They differ in the fraction of our tasks that they can do for
 us. When that fraction is 100%, that's AGI.



I believe there is a qualitative difference btw AGI and narrow-AI, so that
no tractably small collection of computationally-feasible narrow-AI's (like
Google etc.) are going to achieve general intelligence at the human level or
anywhere near.  I think you need an AGI architecture & approach that is
fundamentally different from narrow-AI approaches...

ben





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
  Let's distinguish between the two major goals of AGI. The first is to
  automate the economy. The second is to become immortal through uploading.

 Umm, whose goals are these?  Who said they are the [..] goals of AGI?
 I'm pretty sure that what I want AGI for is going to be different from
 what you want AGI for and from what anyone else wants AGI for.. and any
 similarities are just superficial.

So, I guess I should say, the two commercial applications of AGI. I realize 
people are working on AGI today as pure research, to better understand the 
brain, to better understand how to solve hard problems, and so on. I think 
eventually this knowledge will be applied for profit. Perhaps there are some 
applications I haven't thought of?

-- Matt Mahoney, [EMAIL PROTECTED]






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Trent Waddington
On Fri, Sep 19, 2008 at 6:57 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 general intelligence at the human level

I hear you say these words a lot.  I think, by using the word "level",
you're trying to say something different to "general intelligence just
like humans have", but I'm not sure everyone else reads it that way.
Can you clarify?

Humans have all these interests that, although they might be
interesting to study with AGI, I'm not terribly interested in putting
in an AGI that I put to work.  I don't need an AGI that cries for its
mother, or thinks about eating, or yearns for freedom and so I simply
won't teach it these things.  If, by some fortuitous accident, it
happens to develop any of these concepts, or any other concepts that I
deem useless for the tasks I set it, I'll expect them to be quickly
purged from its limited memory space to make room for concepts that
are useful.  As such, I can imagine an AGI having a human level
intelligence that is very different to a human-like intelligence.

This is not to say that creating an AGI with human-like intelligence
is necessarily a bad thing.  Some people want to create simulated
humans, and that's interesting too.. just not as interesting to me.

Trent




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Trent Waddington
On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
  Perhaps there are some applications I haven't thought of?

Bahahaha.. Gee, ya think?

Trent




Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Ben Goertzel
On Thu, Sep 18, 2008 at 5:42 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 TITLE: Case-by-case Problem Solving (draft)

 AUTHOR: Pei Wang





 But you seem to be reinventing the term for "wheel". There is an extensive
 literature, including AI stuff, on "wicked", ill-structured problems (and
 even "nonprogrammed decisionmaking") which won't, I suggest, be replaced by
 case-by-case PS. These are well-established terms.  You similarly seemed
 to be unaware of the v. common distinction between convergent & divergent
 problem-solving.



Mike, I have to say I find this mode of discussion fairly silly..

Pei has a rather comprehensive knowledge of AI and a strong knowledge of
cog-sci as well.   It is obviously not the case that he is unaware of these
terms and ideas you are referring to.

Obviously, what he means by "case-by-case problem solving" is NOT the same
as "nonprogrammed decisionmaking" nor "divergent problem-solving".

In his paper, he is presenting a point of view, not seeking to compare this
point of view to the whole corpus of literature and ideas that he has
absorbed during his lifetime.

I happen not to fully agree with Pei's thinking on these topics (though I
like much of it), but I know Pei well enough to know that those places
where his thinking diverges from mine, are *not* due to ignorance of the
literature on his part...

-- Ben G





Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Pei Wang [EMAIL PROTECTED] wrote:

 URL: http://nars.wang.googlepages.com/wang.CaseByCase.pdf

I think it would be interesting if you had some experimental results. Could CPS 
now solve a problem like "sort [3 2 4 1]" in its current state? If not, how 
much knowledge does it need, and how long would it run? How long would it take 
to program its knowledge base? Would CPS then use its experience to help it 
solve similar problems like "sort [4 2 4 3]"? Could you give an example of a 
problem that CPS can now solve?

What is your opinion on using CPS to solve hard problems, like factoring 1000 
digit numbers, or finding strings x and y such that x != y and MD5(x) = MD5(y)? 
Do you think that CPS could find clever solutions such as the collisions found 
by Wang and Yu? If so, what resources would be required?

MD5 cryptographic one way hash standard:
http://www.ietf.org/rfc/rfc1321.txt

Attack on MD5:
http://web.archive.org/web/20070604205756/http://www.infosec.sdu.edu.cn/paper/md5-attack.pdf
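
For concreteness, a minimal Python sketch of just the *verification* step of
that challenge (the two byte strings here are placeholders, not a real
colliding pair; genuine colliding blocks are given in the Wang and Yu paper
above):

import hashlib

# Verification only; producing x and y is the hard problem.
x = b"placeholder block 1"   # placeholder, NOT a real colliding input
y = b"placeholder block 2"   # placeholder, NOT a real colliding input

def is_md5_collision(a: bytes, b: bytes) -> bool:
    # A collision means distinct inputs with identical digests.
    return a != b and hashlib.md5(a).digest() == hashlib.md5(b).digest()

print(is_md5_collision(x, y))  # True only for a genuine colliding pair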


-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Ben,

I'm only saying that CPS seems to be loosely equivalent to wicked, 
ill-structured problem-solving, (the reference to convergent/divergent (or 
crystallised vs fluid) etc is merely to point out a common distinction in 
psychology between two kinds of intelligence that Pei wasn't aware of in the 
past - which is actually loosely equivalent to the distinction between narrow 
AI and general AI problemsolving).

In the end, what Pei is/isn't aware of in terms of general knowledge, doesn't 
matter much -  don't you think that his attempt to do without algorithms IS v. 
important? And don't you think any such attempt would be better off  referring 
explicitly to the literature on wicked, ill-structured problems?

I don't think that pointing all this out is silly - this (a non-algorithmic 
approach to CPS/wicked/whatever) is by far the most important thing currently 
being discussed here - and potentially, if properly developed, revolutionary.. 
Worth getting excited about, no?

(It would also be helpful BTW to discuss the wicked literature because it 
actually has abundant examples of wicked problems - and those, you must admit, 
are rather hard to come by here ).


Ben: TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang



   


But you seem to be reinventing the term for "wheel". There is an extensive 
literature, including AI stuff, on "wicked", ill-structured problems (and 
even "nonprogrammed decisionmaking") which won't, I suggest, be replaced by 
case-by-case PS. These are well-established terms.  You similarly seemed to 
be unaware of the v. common distinction between convergent & divergent 
problem-solving.


  Mike, I have to say I find this mode of discussion fairly silly..

  Pei has a rather comprehensive knowledge of AI and a strong knowledge of 
cog-sci as well.   It is obviously not the case that he is unaware of these 
terms and ideas you are referring to.

  Obviously, what he means by case-by-case problem solving is NOT the same as 
nonprogrammed decisionmaking nor divergent problem-solving.

  In his paper, he is presenting a point of view, not seeking to compare this 
point of view to the whole corpus of literature and ideas that he has absorbed 
during his lifetime.

  I happen not to fully agree with Pei's thinking on these topics (though I 
like much of it), but I know Pei well enough to know that those. places where 
his thinking diverges from mine, are *not* due to ignorance of the literature 
on his part...






Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Ben Goertzel
A key point IMO is that: problem-solving that is non-algorithmic (in Pei's
sense) at one level (the level of the particular problem being solved) may
still be algorithmic at a different level (for instance, NARS itself is a
set of algorithms).

So, to me, calling NARS problem-solving non-algorithmic is a bit odd...
though not incorrect according to the definitions Pei lays out...

AGI design then **is** about designing algorithms (such as the NARS
algorithms) that enable an AI system to solve problems in both algorithmic
and non-algorithmic ways...
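
To make the two levels concrete, here is a toy sketch in Python (the rule
set, weights, and goal are invented purely for illustration, not taken from
Pei's paper): the outer loop below is a single fixed meta-level algorithm,
yet the object-level path it takes on any particular problem depends on
accumulated experience and on the time budget, so no per-problem procedure
is fixed in advance.

import random
import time

RULES = {                      # toy rule set, invented for illustration
    "inc":    lambda n: n + 1,
    "double": lambda n: n * 2,
    "dec":    lambda n: n - 1,
}

def solve(start, goal, weights, budget_s=0.05):
    """Fixed meta-level loop; the object-level path varies with experience."""
    deadline = time.time() + budget_s
    state, path = start, []
    while time.time() < deadline and state != goal:
        name = random.choices(list(RULES), [weights[r] for r in RULES])[0]
        state = RULES[name](state)
        path.append(name)
    if state == goal:          # reinforce the rules on a successful path
        for name in path:
            weights[name] += 1.0
    return state == goal, path

weights = {r: 1.0 for r in RULES}
for trial in range(20):        # experience reshapes how later cases are handled
    solve(3, 24, weights)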

ben

On Thu, Sep 18, 2008 at 8:51 PM, Mike Tintner [EMAIL PROTECTED] wrote:

  Ben,

 I'm only saying that CPS seems to be loosely equivalent to wicked,
 ill-structured problem-solving, (the reference to convergent/divergent (or
 crystallised vs fluid) etc is merely to point out a common distinction in
 psychology between two kinds of intelligence that Pei wasn't aware of in the
 past - which is actually loosely equivalent to the distinction between
 narrow AI and general AI problemsolving).

 In the end, what Pei is/isn't aware of in terms of general knowledge,
 doesn't matter much -  don't you think that his attempt to do without
 algorithms IS v. important? And don't you think any such attempt would be
 better off  referring explicitly to the literature on wicked, ill-structured
 problems?

 I don't think that pointing all this out is silly - this (a non-algorithmic
 approach to CPS/wicked/whatever) is by far the most important thing
 currently being discussed here - and potentially, if properly developed,
 revolutionary.. Worth getting excited about, no?

 (It would also be helpful BTW to discuss the wicked literature because it
 actually has abundant examples of wicked problems - and those, you must
 admit, are rather hard to come by here ).


 Ben: TITLE: Case-by-case Problem Solving (draft)

 AUTHOR: Pei Wang


 


 But you seem to be reinventing the term for "wheel". There is an extensive
 literature, including AI stuff, on "wicked", ill-structured problems (and
 even "nonprogrammed decisionmaking") which won't, I suggest, be replaced by
 case-by-case PS. These are well-established terms.  You similarly seemed
 to be unaware of the v. common distinction between convergent & divergent
 problem-solving.



 Mike, I have to say I find this mode of discussion fairly silly..

 Pei has a rather comprehensive knowledge of AI and a strong knowledge of
 cog-sci as well.   It is obviously not the case that he is unaware of these
 terms and ideas you are referring to.

 Obviously, what he means by case-by-case problem solving is NOT the same
 as nonprogrammed decisionmaking nor divergent problem-solving.

 In his paper, he is presenting a point of view, not seeking to compare this
 point of view to the whole corpus of literature and ideas that he has
 absorbed during his lifetime.

 I happen not to fully agree with Pei's thinking on these topics (though I
 like much of it), but I know Pei well enough to know that those. places
 where his thinking diverges from mine, are *not* due to ignorance of the
 literature on his part...





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel
On Thu, Sep 18, 2008 at 9:02 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Thu, 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I believe there is a qualitative difference btw AGI and narrow-AI, so that
 no tractably small collection of computationally-feasible narrow-AIs (like
 Google etc.) are going to achieve general intelligence at the human level or
 anywhere near.  I think you need an AGI architecture & approach that is
 fundamentally different from narrow-AI approaches...

 Well, yes, and that difference is a distributed index, which has yet to be
 built.


I extremely strongly disagree with the prior sentence ... I do not think
that a distributed index is a sufficient architecture for powerful AGI at
the human level, beyond, or anywhere near...




 Also, what do you mean by human level intelligence? What test do you use?
 My calculator already surpasses human level intelligence depending on the
 tests I give it.


Yes, and my dog surpasses human level intelligence at finding poop in a
grassy field ... so what?? ;-)

If I need to specify a test right now, I'll just use the standard IQ tests as
a reference, or else the Turing Test.

But I don't think these tests are ideal by any means...

One of the items on my list for this fall is the articulation of a clear set
of metrics for evaluating developing, learning AGI systems as they move
toward human-level AI ...

-- Ben G





Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Ben,

Ah well, then I'm confused. And you may be right - I would just like 
clarification.

You see,  what you have just said is consistent with my understanding of Pei up 
till now. He explicitly called his approach in the past nonalgorithmic while 
acknowledging that others wouldn't consider it so. It was only nonalgorithmic 
in the sense that the algortihm or problemsolving procedure had the potential 
to keep changing every time - but there was still (as I think we'd both agree) 
a definite procedure/algorithm each time.

This current paper seems to represent a significant departure from that. There 
doesn't seem to be an algorithm or procedure to start with, and it does seem to 
represent a challenge to your conception of AGI design. But I may have 
misunderstood (which is easy if there are no examples :) ) - and perhaps you 
or, better still, Pei, would care to clarify.

  Ben:

  A key point IMO is that: problem-solving that is non-algorithmic (in Pei's 
sense) at one level (the level of the particular problem being solved) may 
still be algorithmic at a different level (for instance, NARS itself is a set 
of algorithms).  

  So, to me, calling NARS problem-solving non-algorithmic is a bit odd... 
though not incorrect according to the definitions Pei lays out...

  AGI design then **is** about designing algorithms (such as the NARS 
algorithms) that enable an AI system to solve problems in both algorithmic and 
non-algorithmic ways...

  ben


  On Thu, Sep 18, 2008 at 8:51 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Ben,

I'm only saying that CPS seems to be loosely equivalent to wicked, 
ill-structured problem-solving, (the reference to convergent/divergent (or 
crystallised vs fluid) etc is merely to point out a common distinction in 
psychology between two kinds of intelligence that Pei wasn't aware of in the 
past - which is actually loosely equivalent to the distinction between narrow 
AI and general AI problemsolving).

In the end, what Pei is/isn't aware of in terms of general knowledge, 
doesn't matter much -  don't you think that his attempt to do without 
algorithms IS v. important? And don't you think any such attempt would be 
better off  referring explicitly to the literature on wicked, ill-structured 
problems?

I don't think that pointing all this out is silly - this (a non-algorithmic 
approach to CPS/wicked/whatever) is by far the most important thing currently 
being discussed here - and potentially, if properly developed, revolutionary.. 
Worth getting excited about, no?

(It would also be helpful BTW to discuss the wicked literature because it 
actually has abundant examples of wicked problems - and those, you must admit, 
are rather hard to come by here ).


Ben: TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang



   


But you seem to be reinventing the term for "wheel". There is an 
extensive literature, including AI stuff, on "wicked", ill-structured problems 
(and even "nonprogrammed decisionmaking") which won't, I suggest, be replaced 
by case-by-case PS. These are well-established terms.  You similarly seemed 
to be unaware of the v. common distinction between convergent & divergent 
problem-solving.


  Mike, I have to say I find this mode of discussion fairly silly..

  Pei has a rather comprehensive knowledge of AI and a strong knowledge of 
cog-sci as well.   It is obviously not the case that he is unaware of these 
terms and ideas you are referring to.

  Obviously, what he means by case-by-case problem solving is NOT the 
same as nonprogrammed decisionmaking nor divergent problem-solving.

  In his paper, he is presenting a point of view, not seeking to compare 
this point of view to the whole corpus of literature and ideas that he has 
absorbed during his lifetime.

  I happen not to fully agree with Pei's thinking on these topics (though I 
like much of it), but I know Pei well enough to know that those places where 
his thinking diverges from mine, are *not* due to ignorance of the literature 
on his part...








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome  - Dr Samuel Johnson









Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Ben Goertzel
On Thu, Sep 18, 2008 at 9:17 PM, Ben Goertzel [EMAIL PROTECTED] wrote:


 Your language is unclear

 Could you define precisely what you mean by an algorithm

 Also, could you give an example of a computer program, that can be run on a
 digital computer, that is not does not embody an algorithm according to
 your definition?


that  does not embody an algorithm according to your definition?

sorry: cut and paste error





Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Ben Goertzel
Your language is unclear

Could you define precisely what you mean by an algorithm

Also, could you give an example of a computer program, that can be run on a
digital computer, that is not does not embody an algorithm according to
your definition?

thx
ben


On Thu, Sep 18, 2008 at 9:15 PM, Mike Tintner [EMAIL PROTECTED] wrote:

  Ben,

 Ah well, then I'm confused. And you may be right - I would just like
 clarification.

 You see,  what you have just said is consistent with my understanding of
 Pei up till now. He explicitly called his approach in the past
 nonalgorithmic while acknowledging that others wouldn't consider it so. It
 was only nonalgorithmic in the sense that the algorithm or problem-solving
 procedure had the potential to keep changing every time - but there was
 still (as I think we'd both agree) a definite procedure/algorithm each time.

 This current paper seems to represent a significant departure from that.
 There doesn't seem to be an algorithm or procedure to start with, and it
 does seem to represent a challenge to your conception of AGI design. But I
 may have misunderstood (which is easy if there are no examples :) ) - and
 perhaps you or, better still, Pei, would care to clarify.


 Ben:

 A key point IMO is that: problem-solving that is non-algorithmic (in Pei's
 sense) at one level (the level of the particular problem being solved) may
 still be algorithmic at a different level (for instance, NARS itself is a
 set of algorithms).

 So, to me, calling NARS problem-solving non-algorithmic is a bit odd...
 though not incorrect according to the definitions Pei lays out...

 AGI design then **is** about designing algorithms (such as the NARS
 algorithms) that enable an AI system to solve problems in both algorithmic
 and non-algorithmic ways...

 ben

 On Thu, Sep 18, 2008 at 8:51 PM, Mike Tintner [EMAIL PROTECTED] wrote:

  Ben,

 I'm only saying that CPS seems to be loosely equivalent to wicked,
 ill-structured problem-solving, (the reference to convergent/divergent (or
 crystallised vs fluid) etc is merely to point out a common distinction in
 psychology between two kinds of intelligence that Pei wasn't aware of in the
 past - which is actually loosely equivalent to the distinction between
 narrow AI and general AI problemsolving).

 In the end, what Pei is/isn't aware of in terms of general knowledge,
 doesn't matter much -  don't you think that his attempt to do without
 algorithms IS v. important? And don't you think any such attempt would be
 better off  referring explicitly to the literature on wicked, ill-structured
 problems?

 I don't think that pointing all this out is silly - this (a
 non-algorithmic approach to CPS/wicked/whatever) is by far the most
 important thing currently being discussed here - and potentially, if
 properly developed, revolutionary.. Worth getting excited about, no?

 (It would also be helpful BTW to discuss the wicked literature because
 it actually has abundant examples of wicked problems - and those, you must
 admit, are rather hard to come by here ).


 Ben: TITLE: Case-by-case Problem Solving (draft)

 AUTHOR: Pei Wang


 


 But you seem to be reinventing the term for "wheel". There is an extensive
 literature, including AI stuff, on "wicked", ill-structured problems (and
 even "nonprogrammed decisionmaking") which won't, I suggest, be replaced by
 case-by-case PS. These are well-established terms.  You similarly seemed
 to be unaware of the v. common distinction between convergent & divergent
 problem-solving.



 Mike, I have to say I find this mode of discussion fairly silly..

 Pei has a rather comprehensive knowledge of AI and a strong knowledge of
 cog-sci as well.   It is obviously not the case that he is unaware of these
 terms and ideas you are referring to.

 Obviously, what he means by case-by-case problem solving is NOT the same
 as nonprogrammed decisionmaking nor divergent problem-solving.

 In his paper, he is presenting a point of view, not seeking to compare
 this point of view to the whole corpus of literature and ideas that he has
 absorbed during his lifetime.

 I happen not to fully agree with Pei's thinking on these topics (though I
 like much of it), but I know Pei well enough to know that those places
 where his thinking diverges from mine, are *not* due to ignorance of the
 literature on his part...





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome  - Dr Samuel Johnson



Re: [agi] uncertain logic criteria

2008-09-18 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 4:21 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:

 Small question... aren't Bayesian network nodes just _conditionally_
 independent: so that set A is only independent from set B when
 d-separated by some set Z? So please clarify, if possible, what kind
 of independence you assume in your model.

Sorry, I made a mistake.  You're right that X and Y can be dependent
even if there is no direct link between them in a Bayesian network.

I am currently trying to develop an approximate algorithm for Bayesian
network inference.  Exact BN inference takes care of dependencies as
specified in the BN, but I suspect that an approximate algorithm may
be faster.  I have not worked out the details of this algorithm yet...
and the talk about independence was misleading.
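
To make that concrete, here is a tiny numeric check in Python over a chain
X -> Z -> Y (all binary; the probability tables are invented for
illustration): X and Y share no direct link, yet are marginally dependent,
while conditioning on Z, which d-separates them, restores independence.

import itertools

p_x = {0: 0.6, 1: 0.4}
p_z_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_y_given_z = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}

def joint(x, z, y):
    return p_x[x] * p_z_given_x[x][z] * p_y_given_z[z][y]

def p(**fixed):
    # Marginal probability of any partial assignment over {x, z, y}.
    total = 0.0
    for x, z, y in itertools.product((0, 1), repeat=3):
        v = {"x": x, "z": z, "y": y}
        if all(v[k] == val for k, val in fixed.items()):
            total += joint(x, z, y)
    return total

# No direct X-Y edge, yet marginally dependent: P(x,y) != P(x)P(y)
print(p(x=1, y=1), p(x=1) * p(y=1))            # 0.312 vs 0.2112
# Conditioned on Z (d-separation): P(x,y|z) == P(x|z) P(y|z)
print(p(x=1, y=1, z=1) / p(z=1),
      (p(x=1, z=1) / p(z=1)) * (p(y=1, z=1) / p(z=1)))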

YKY




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:

   Perhaps there are some applications I haven't
 thought of?
 
 Bahahaha.. Gee, ya think?

So perhaps you could name some applications of AGI that don't fall into the 
categories of (1) doing work or (2) augmenting your brain?

A third one occurred to me: launching a self improving or evolving AGI to 
consume all available resources, i.e. an intelligent worm or self replicating 
nanobots. This really isn't a useful application, but I'm sure somebody, 
somewhere, might think it would be really cool to see if it would launch a 
singularity and/or wipe out all DNA based life.

Oh, I'm sure the first person to try it would take precautions like inserting a 
self destruct mechanism that activates after some number of replications. (The 
1988 Morris worm had software intended to slow its spread, but it had a bug). 
Or maybe they will be like the scientists who believed that the idea of a chain 
reaction in U-235 was preposterous...
(Thankfully, the scientists who actually built the first atomic pile took some 
precautions, such as standing by with an axe to cut a rope suspending a cadmium 
control rod in case things got out of hand. They got lucky because of an 
unanticipated phenomenon in which a small number of nuclei had delayed fission, 
which made the chain reaction much easier to control).


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Case-by-case Problem Solving PS

2008-09-18 Thread Mike Tintner
Ben,

It's hard to resist my interpretation here - that Pei does sound as if he is 
being truly non-algorithmic. Just look at the opening abstract sentences. 
(However, I have no wish to be pedantic - I'll accept whatever you guys say you 
mean).

  Case-by-case Problem Solving is an approach in which the system solves the
  current occurrence of a problem instance by taking the available knowledge
  into consideration, under the restriction of available resources. It is
  different from the traditional Algorithmic Problem Solving in which the
  system applies a given algorithm to each problem instance. Case-by-case
  Problem Solving is suitable for situations where the system has no
  applicable algorithm for a problem.





Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Matt Mahoney
Actually, CPS doesn't mean solving problems without algorithms. CPS is itself 
an algorithm, as described on pages 7-8 of Pei's paper. However, as I 
mentioned, I would be more convinced if there were some experimental results 
showing that it actually worked.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Thu, 9/18/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] Case-by-case Problem Solving (draft)
To: agi@v2.listbox.com
Date: Thursday, September 18, 2008, 8:51 PM



 
 

Ben,
 
I'm only saying that CPS seems to be loosely 
equivalent to wicked, ill-structured problem-solving, (the reference to 
convergent/divergent (or crystallised vs fluid) etc is merely to point out a 
common distinction in psychology between two kinds of intelligence that Pei 
wasn't aware of in the past - which is actually loosely equivalent to the 
distinction between narrow AI and general AI problemsolving).
 
In the end, what Pei is/isn't aware of in terms of 
general knowledge, doesn't matter much -  don't you think that his 
attempt to do without algorithms IS v. important? And don't you think any 
such attempt would be better off  referring explicitly to the 
literature on wicked, ill-structured problems?
 
I don't think that pointing all this out is silly 
- this (a non-algorithmic approach to CPS/wicked/whatever) is by far 
the most important thing currently being discussed here - and potentially, if 
properly developed, revolutionary.. Worth getting excited about, 
no?
 
(It would also be helpful BTW to discuss the 
wicked literature because it actually has abundant examples of wicked 
problems 
- and those, you must admit, are rather hard to come by here ).
 
 
Ben: TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang


But you seem to be reinventing the term for "wheel". There is an extensive 
literature, including AI stuff, on "wicked", ill-structured problems (and 
even "nonprogrammed decisionmaking") which won't, I suggest, be replaced by 
case-by-case PS. These are well-established terms. You similarly seemed to 
be unaware of the v. common distinction between convergent & divergent 
problem-solving.


Mike, I have to say I find this mode of discussion fairly silly..

Pei has a rather comprehensive knowledge of AI and a strong knowledge of 
cog-sci as well.   It is obviously not the case that he is unaware of these 
terms and ideas you are referring to.

Obviously, what he means by "case-by-case problem solving" is NOT the same as 
"nonprogrammed decisionmaking" nor "divergent problem-solving".

In his paper, he is presenting a point of view, not seeking to compare this 
point of view to the whole corpus of literature and ideas that he has 
absorbed during his lifetime.

I happen not to fully agree with Pei's thinking on these topics (though I 
like much of it), but I know Pei well enough to know that those places where 
his thinking diverges from mine, are *not* due to ignorance of the literature 
on his part...






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread John LaMuth

You have completely left out the human element, or friendly-type appeal.

How about an AGI personal assistant / tutor / PR interface?

Everyone should have one.

The market would be virtually unlimited ...

John L

www.ethicalvalues.com

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, September 18, 2008 6:34 PM
Subject: Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: 
Proprietary_Open_Source)




--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:


On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney
[EMAIL PROTECTED] wrote:



  Perhaps there are some applications I haven't
thought of?

Bahahaha.. Gee, ya think?


So perhaps you could name some applications of AGI that don't fall into 
the categories of (1) doing work or (2) augmenting your brain?


A third one occurred to me: launching a self improving or evolving AGI to 
consume all available resources, i.e. an intelligent worm or self 
replicating nanobots. This really isn't a useful application, but I'm sure 
somebody, somewhere, might think it would be really cool to see if it 
would launch a singularity and/or wipe out all DNA based life.


Oh, I'm sure the first person to try it would take precautions like 
inserting a self destruct mechanism that activates after some number of 
replications. (The 1988 Morris worm had software intended to slow its 
spread, but it had a bug). Or maybe they will be like the scientists who 
believed that the idea of a chain reaction in U-235 was preposterous...
(Thankfully, the scientists who actually built the first atomic pile took 
some precautions, such as standing by with an axe to cut a rope suspending 
a cadmium control rod in case things got out of hand. They got lucky 
because of an unanticipated phenomenon in which a small number of nuclei 
had delayed fission, which made the chain reaction much easier to 
control).



-- Matt Mahoney, [EMAIL PROTECTED]









Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Trent Waddington
On Fri, Sep 19, 2008 at 11:34 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 So perhaps you could name some applications of AGI that don't fall into the 
 categories of (1) doing work or (2) augmenting your brain?

Perhaps you could list some uses of a computer that don't fall into
the category of (1) computation or (2) communication.  Do you see how
pointless reasoning at this level of abstraction is?

In the few short decades we've had personal computers, the wealth of
different uses for *general* computation has been enchanting.  Lumping
them together and claiming you understand their effect on the world as
a result is ridiculous.  What commercial applications people will
apply AGI to is just as hard to predict as what applications people
would apply the personal computer to.

My comment was meant to indicate that your hubris in assuming you have
*any* idea what applications people will come up with for readily
available AGI is about on par with predictions for the use of digital
computers.. if not more so, as general intelligence is orders of
magnitude more disruptive than general computation.

And to get back to the original topic of conversation, putting
restrictions on the use of supposedly open source code, the effects of
those restrictions can no more be predicted than the potential
applications of the technology.  Which, I think, is a rational pillar
of the need for freedom.. you don't know better, so who are you to put
these restrictions on others?

Trent




Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Ben,

Well then, so is S. Kauffman's language unclear. I'll go with his definition in 
Chap. 12 of "Reinventing the Sacred" [all about algorithms and their 
impossibility for solving a whole string of human problems]:

"What is an algorithm? The quick definition is an 'effective procedure to 
calculate a result.' A computer program is an algorithm, and so is long 
division."
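
(A one-screen Python rendering of his second example, just to pin down
"effective procedure": schoolbook long division is a fixed sequence of steps
guaranteed to terminate with the result.)

def long_division(dividend, divisor):
    # Digit-by-digit schoolbook long division of non-negative integers.
    quotient, remainder = "", 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        quotient += str(remainder // divisor)
        remainder %= divisor
    return int(quotient), remainder

print(long_division(1234, 7))   # (176, 2): 7*176 + 2 == 1234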

See his explanation of how he solved the wicked problem of how to hide a 
computer cable - "Is there an algorithmic way to bound the frame of the 
features of my table, computer, cord, plug and the rest of the universe, such 
that I could algorithmically find a solution to my problem? No. But solve it I 
did!"

Ben, please listen carefully to the following :).  I really suspect that all 
the stuff I'm saying and others are writing about wicked problems is going in 
one ear and out the other. You hear it and know it, perhaps, but you really 
don't register it.

If you did register it, you would know that anyone who deals in psychology with 
wicked problems OBJECTS to the IQ test as a test of intelligence - as only 
dealing with convergent problem-solving, and not 
divergent/wicked/ill-structured problem-solving. It's a major issue. Pei clearly 
in the past didn't know much about this area of psychology, and I wonder 
whether you really do. (You don't have to know everything - it's not a crime if 
you don't - it's just that you would be well advised to familiarise yourself 
with it all...). 

There is no effective procedure, period, for dealing successfully with wicked, 
ill-structured, one-off (case-by-case) problems. There is for IQ tests and 
other examples of narrow AI.

(And what do you think Pei *does* mean?)


  Ben:
  Your language is unclear

  Could you define precisely what you mean by an algorithm

  Also, could you give an example of a computer program, that can be run on a 
digital computer, that is not does not embody an algorithm according to your 
definition?

  thx
  ben



  On Thu, Sep 18, 2008 at 9:15 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Ben,

Ah well, then I'm confused. And you may be right - I would just like 
clarification.

You see,  what you have just said is consistent with my understanding of 
Pei up till now. He explicitly called his approach in the past nonalgorithmic 
while acknowledging that others wouldn't consider it so. It was only 
nonalgorithmic in the sense that the algortihm or problemsolving procedure 
had the potential to keep changing every time - but there was still (as I think 
we'd both agree) a definite procedure/algorithm each time.

This current paper seems to represent a significant departure from that. 
There doesn't seem to be an algorithm or procedure to start with, and it does 
seem to represent a challenge to your conception of AGI design. But I may have 
misunderstood (which is easy if there are no examples :) ) - and perhaps you 
or, better still, Pei, would care to clarify.

  Ben:

  A key point IMO is that: problem-solving that is non-algorithmic (in 
Pei's sense) at one level (the level of the particular problem being solved) 
may still be algorithmic at a different level (for instance, NARS itself is a 
set of algorithms).  

  So, to me, calling NARS problem-solving non-algorithmic is a bit odd... 
though not incorrect according to the definitions Pei lays out...

  AGI design then **is** about designing algorithms (such as the NARS 
algorithms) that enable an AI system to solve problems in both algorithmic and 
non-algorithmic ways...

  ben


  On Thu, Sep 18, 2008 at 8:51 PM, Mike Tintner [EMAIL PROTECTED] wrote:

Ben,

I'm only saying that CPS seems to be loosely equivalent to wicked, 
ill-structured problem-solving, (the reference to convergent/divergent (or 
crystallised vs fluid) etc is merely to point out a common distinction in 
psychology between two kinds of intelligence that Pei wasn't aware of in the 
past - which is actually loosely equivalent to the distinction between narrow 
AI and general AI problemsolving).

In the end, what Pei is/isn't aware of in terms of general knowledge, 
doesn't matter much -  don't you think that his attempt to do without 
algorithms IS v. important? And don't you think any such attempt would be 
better off  referring explicitly to the literature on wicked, ill-structured 
problems?

I don't think that pointing all this out is silly - this (a 
non-algorithmic approach to CPS/wicked/whatever) is by far the most important 
thing currently being discussed here - and potentially, if properly developed, 
revolutionary.. Worth getting excited about, no?

(It would also be helpful BTW to discuss the wicked literature 
because it actually has abundant examples of wicked problems - and those, you 
must admit, are rather hard to come by here ).


Ben: TITLE: Case-by-case Problem Solving (draft)

AUTHOR: Pei Wang



   



Re: [agi] Case-by-case Problem Solving (draft)

2008-09-18 Thread Mike Tintner
Matt,

Thanks for the reference. But it's still somewhat ambiguous. I could somewhat 
similarly outline a non-procedure "procedure" which might include steps like 
"Think about the problem", then "Do something, anything - whatever first comes 
to mind" and "If that doesn't work, try something else."

But as I said, I'm only seeking clarification and a distinction between CPS and 
explicitly *Algorithmic* PS surely does require clarification.

  Matt:
Actually, CPS doesn't mean solving problems without algorithms. CPS is 
itself an algorithm, as described on pages 7-8 of Pei's paper. However, as I 
mentioned, I would be more convinced if there were some experimental results 
showing that it actually worked.






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

Well, yes, and that difference is a distributed index, which has yet to be 
built.

I extremely strongly disagree with the prior sentence ... I do not think that 
a distributed index is a sufficient architecture for powerful AGI at the human 
level, beyond, or anywhere near...

Well, keep in mind that I am not trying to build a human-like AGI with its own 
goals. I am designing a distributed system with billions of owners, each of 
whom has their own interests and (conflicting) goals. To the user, the AGI is 
like a smarter internet. It would differ from Google in that any message you 
post is instantly available to anyone who cares (human or machine). There is no 
distinction between queries and documents. Posting a message could initiate an 
interactive conversation, or result in related messages posted later being sent 
to you.

A peer needs two types of knowledge. It knows about some specialized topic, and 
it also knows which other peers are experts on related topics. For simple 
peers, "related" just means they share the same words, and a peer is simply a 
cache of messages posted and received recently by its owner. In my CMR 
proposal, messages are stamped with the ID and time of origin as well as any 
peers they were routed through. This cached header information constitutes 
knowledge about related peers. When a peer receives a message, it compares the 
words in it to cached messages and routes a copy to the peers listed in the 
headers of those messages. Peers have their own policies regarding their areas 
of specialization, which can be as simple as giving the cache priority to 
messages originating from its owner. There is no provision to delete messages 
from the network once they are posted. Each peer would have its own deletion 
policy.
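
A minimal sketch of that routing loop in Python (the class and field names
here are invented for illustration and are not part of the CMR proposal
itself): each peer caches recent messages and, on receiving a new one,
forwards copies toward the peers named in the headers of cached messages
that share words with it.

from dataclasses import dataclass, field

@dataclass
class Message:
    origin: str
    text: str
    route: list = field(default_factory=list)   # IDs of peers it passed through

class Peer:
    def __init__(self, pid, cache_size=100):
        self.pid = pid
        self.cache = []                          # recently seen messages
        self.cache_size = cache_size

    def receive(self, msg, network):
        words = set(msg.text.lower().split())
        related = set()                          # peers named in matching headers
        for old in self.cache:
            if words & set(old.text.lower().split()):
                related.add(old.origin)
                related.update(old.route)
        msg.route.append(self.pid)               # stamp the routing header
        self.cache = (self.cache + [msg])[-self.cache_size:]
        for pid in related - set(msg.route) - {msg.origin}:
            network[pid].receive(msg, network)   # route a copy onward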

The environment is competitive and hostile. Peers compete for reputation and 
attention by providing quality information, which allows them to charge more 
for routing targeted ads. Peers are responsible for authenticating their 
sources, and risk blacklisting if they route too much spam. Peers thus have an 
incentive to be intelligent, for example, using better language models such as 
a stemmer, thesaurus, and parser to better identify related messages, or 
providing specialized services that understand a narrow subset of natural 
language, the way Google calculator understands questions like "how many 
gallons in 50 cubic feet?"

So yeah, it is a little different than narrow AI.

As to why I'm not building it, it's because I estimate it will cost $1 
quadrillion. Google controls about 1/1000 of the computing power of the 
internet. I am talking about building something 1000 times bigger.

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, John LaMuth [EMAIL PROTECTED] wrote:

 You have completely left out the human element or
 friendly-type appeal
 
 How about an AGI personal assistant / tutor / PR interface
 
 Everyone should have one
 
 The market would be virtually unlimited ...

That falls under the category of (1) doing work.



-- Matt Mahoney, [EMAIL PROTECTED]


 - Original Message - 
 From: Matt Mahoney [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, September 18, 2008 6:34 PM
 Subject: Re: Two goals of AGI (was Re: [agi] Re: [OpenCog]
 Re: 
 Proprietary_Open_Source)
 
 
  --- On Thu, 9/18/08, Trent Waddington
 [EMAIL PROTECTED] wrote:
 
  On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney
  [EMAIL PROTECTED] wrote:
 
Perhaps there are some applications I
 haven't
  thought of?
 
  Bahahaha.. Gee, ya think?
 
  So perhaps you could name some applications of AGI
 that don't fall into 
  the categories of (1) doing work or (2) augmenting
 your brain?





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel

 So perhaps you could name some applications of AGI that don't fall into the
 categories of (1) doing work or (2) augmenting your brain?


3) learning as much as possible

4) proving as many theorems as possible

5) figuring out how to improve human life as much as possible

Of course, if you wish to put these under the category of doing work
that's fine ... in a physics sense I guess every classical physical process
does work ...

ben





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Fri, Sep 19, 2008 at 11:34 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  So perhaps you could name some applications of AGI
 that don't fall into the categories of (1) doing work or
 (2) augmenting your brain?
 
 Perhaps you could list some uses of a computer that
 don't fall into
 the category of (1) computation (2) communication.  Do you
 see how
 pointless reasoning at this level of abstraction is?

No, it is not (and besides, there is (3) storage). We can usefully think of the 
primary uses of computers going through different phases, e.g.

1950-1970 - computation (numerical calculation)
1970-1990 - storage (databases)
1990-2010 - communication (internet)
2010-2030 - profit-oriented AI (automating the economy)
2030-2050 - brain augmentation and uploading

 And to get back to the original topic of conversation,
 putting
 restrictions on the use of supposedly open source code, the
 effects of
 those restrictions can no more be predicted than the
 potential
 applications of the technology.  Which, I think, is a
 rational piler
 of the need for freedom.. you don't know better, so who
 are you to put
 these restrictions on others?

I don't advocate any such thing, even if it were practical.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread John LaMuth


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, September 18, 2008 7:45 PM
Subject: Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: 
Proprietary_Open_Source)




--- On Thu, 9/18/08, John LaMuth [EMAIL PROTECTED] wrote:


You have completely left out the human element or
friendly-type appeal

How about an AGI personal assistant / tutor / PR interface

Everyone should have one

The market would be virtually unlimited ...


That falls under the category of (1) doing work.



-- Matt Mahoney, [EMAIL PROTECTED]




I always advocated a clear separation between work and PLAY

Here the appeal would be amusement / entertainment - not any specified work 
goal


Have my PR - AI call your PR - AI !!

and Show Me the $$$ !!

JLM

www.emotionchip.net






Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-18 Thread Steve Richfield
Mike,

On 9/18/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve:View #2 (mine, stated from your approximate viewpoint) is that
 simple programs (like Dr. Eliza) have in the past and will in the future do
 things that people aren't good at. This includes tasks that encroach on
 intelligence, e.g. modeling complex phenomena and refining designs.

 Steve,

 In principle, I'm all for the idea that I think you (and perhaps Bryan)
 have expressed of a GI Assistant - some program that could be of general
 assistance to humans dealing with similar problems across many domains. A
 diagnostics expert, perhaps, that could help analyse breakdowns in say, the
 human body, a car or any of many other machines, a building or civil
 structure, etc. etc. And it's certainly an idea worth exploring.

 But I have yet to see any evidence that it is any more viable than a proper
 AGI - because, I suspect, it will run up against the same problems of
 generalizing -  e.g. though breakdowns may be v. similar in many different
 kinds of machines, technological and natural, they will also each have their
 own special character.


Certainly true. That is why it must incorporate lots of domain-specific
knowledge rather than being a completed work at the get-go. Every domain has
its own, as you put it, special character.


 If you are serious about any such project, it might be better to develop it
 first as an intellectual discipline rather than a program to test its
 viability - perhaps what it really comes down to is a form of systems
 thinking or science.


This has been done over and over again by many people in various disciplines
(e.g. *Zen and the Art of Motorcycle Maintenance*). Common rules/heuristics
have emerged, e.g.:
1.  Fixing your biggest problem will fix 80% of its manifestations. Then, to
work on the remaining 20%, loop back to the beginning of this rule...
2.  Complex systems usually only suffer from dozens, not thousands, of
potential problems. The knowledge base needed to fix the vast majority of
problems in any particular domain is surprisingly short.
3.  Symptoms are usually expressed simply, e.g. shallow parsing would
recognize most of them.
4.  Chronic problems are evidence of a lack of knowledge/understanding.
5.  Repair is a process and not an act. We must design that process to lead
to a successful repair.
6.  Often the best repair process is to simply presume that the failure is
the cheapest thing that could possibly fail, and proceed on that assumption.
This often leads to the real problem, and with a minimum of wasted effort.
7.  Etc. I could go on like this for quite a while.
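
(To pin down heuristics 1 and 6 above, a toy sketch in Python; the fault
table and cost figures are invented for illustration:)

def repair(symptoms, causes, test, fix):
    # Heuristic 6: try the cheapest plausible cause first.
    # Heuristic 1: loop, re-checking symptoms after each repair.
    while symptoms:
        for name, cost in sorted(causes, key=lambda c: c[1]):
            if test(name, symptoms):
                symptoms = fix(name, symptoms)
                break
        else:
            return symptoms        # stuck: missing knowledge (heuristic 4)
    return []

causes = [("blown fuse", 5), ("bad pump", 80), ("cracked board", 200)]
known = {"no power": "blown fuse", "no flow": "bad pump"}
test = lambda name, symptoms: any(known.get(s) == name for s in symptoms)
fix = lambda name, symptoms: [s for s in symptoms if known.get(s) != name]
print(repair(["no power", "no flow"], causes, test, fix))   # -> []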

I have considered writing a book, something like Introduction to Repair
Theory that outlines how to successfully tackle hypercomplex systems like
our own bodies, even where millions of dollars in failed research has
preceded us. The same general methods can be applied to repairing large
(e.g. VME) circuit boards with no documentation, addressing social and
political problems, etc.

My question: Why bother writing a book, when a program is a comparable
effort that is worth MUCH more?

From what I have seen, some disciplines like auto mechanics are open to (and
indeed are the source of much of) this sort of technology. Other disciplines
like medicine are completely closed-minded and actively uninterested.
Hence, neither of these disciplines would benefit much if any at all. Only
disciplines that are somewhere in between could benefit, and I don't at the
moment know of any such disciplines. Do you?

However, a COMPUTER removes the human ego from the equation, so that people
would simply presume that it runs on PFM (Pure Frigging Magic) and accept
advice that they would summarily reject if it came from a human.

Anyway, those are my thoughts for your continuing comment.

Steve Richfield


