[agi] Digital incremental transmissions

2010-08-14 Thread Steve Richfield
Long ago I figured out how to build digital incremental transmissions. What
are they? Imagine a sausage-shaped structure with the outside being many
narrow reels of piano wire, with electrical and computer connections on the
end. Under computer control, each of the reels can be independently
commanded to rotate a specific distance, playing one strand out while
reeling another strand in, to pull a specific amount, or to execute a long
coordinated sequence of moves. Further, this is a true infinitely-variable
transmission, so that if you command a reel to turn REALLY slowly, you can
exert nearly limitless force, or at least enough to destroy the structure.
Hence, obvious software safeguards are needed. Lowering a weight recovers
the energy to use elsewhere, or returns it out the supply lines. In short, a
complete android musculature could be built this way, and take only a tiny
amount of space - MUCH less than in our bodies, or with motors as is now the
case. Little heat would be generated because this system is fundamentally
efficient.

Nearly all of the components are cut from flat metal stock, akin to
mechanical clock parts, only with much beefier shapes. Hence, it is both
cheap and strong. Think horsepower, available from any strand. The strand
pairs would be hooked up to be flexor and extensor muscles for the many
joints, etc.
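
To make the control model concrete, here is a minimal sketch, in Python, of
what a per-reel command interface might look like. Everything in it - the
class names, command fields, and force limits - is a hypothetical
illustration of the description above, not an existing design:

    # Hypothetical control sketch for the transmission described above; the
    # class and command names are invented for illustration, not an existing API.
    from dataclasses import dataclass

    @dataclass
    class ReelCommand:
        reel_id: int
        delta_mm: float     # positive pays strand out, negative reels it in
        max_force_n: float  # force ceiling a software safeguard would enforce

    class TransmissionController:
        def __init__(self, num_reels):
            self.positions = [0.0] * num_reels  # commanded strand positions

        def move(self, cmd):
            # Real hardware would step the reel while monitoring force against
            # cmd.max_force_n; here we only track the commanded position.
            self.positions[cmd.reel_id] += cmd.delta_mm

        def coordinated(self, cmds):
            # A flexor/extensor pair: one reel pays out while its partner reels in.
            for cmd in cmds:
                self.move(cmd)

    ctl = TransmissionController(num_reels=16)
    ctl.coordinated([ReelCommand(0, +5.0, 200.0), ReelCommand(1, -5.0, 200.0)])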

I haven't actually built it because I haven't (yet) found a customer who
wanted it badly enough to pay the development costs and then wait a year
for it. However, this would sure be an enabling system for people who
want to build REAL robots.

Does anyone here have ANY idea what to do with this, other than putting it
back on the shelf and waiting another decade?

Steve





Re: [agi] Anyone going to the Singularity Summit?

2010-08-12 Thread Steve Richfield
Ben,

There is obvious confusion here. MOST mutations harm, but occasionally one
helps. By selecting for a particular difficult-to-achieve thing, like long
lifespan, we can discard the harmful mutations while selecting for the
helpful ones. However, when selecting for something harmful and easy to achieve,
like the presence of genes that shorten lifespan, the selection process is
SO non-specific that it can't tell us much of anything. There are countless
mutations that kill WITHOUT conferring compensatory advantages. I could see
stressing the flies in various ways without controlling for lifespan, but
controlling for short lifespan in the absence of such stresses would seem to
be completely worthless. Of course, once stressed, you would also be seeing
genes to combat those (irrelevant) stresses.

In short, I still haven't heard words that suggest that this can go
anywhere, though it sure would be wonderful (like you and I might live twice
as long) if some workable path could be found.

I still suspect that the best path is in analyzing the DNA of long-living
people, rather than that of fruit flies. Perhaps there is some way to
combine the two approaches?

Steve

On Wed, Aug 11, 2010 at 8:37 PM, Ben Goertzel b...@goertzel.org wrote:



 On Wed, Aug 11, 2010 at 11:34 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben,

 It seems COMPLETELY obvious (to me) that almost any mutation would shorten
 lifespan, so we shouldn't expect to learn much from it.



 Why then do the Methuselah flies live 5x as long as normal flies?  You're
 conjecturing this is unrelated to the dramatically large number of SNPs with
 very different frequencies in the two classes of populations???

 ben









Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Bryan,

*I'm interested!*

Continuing...

On Tue, Aug 10, 2010 at 11:27 AM, Bryan Bishop kanz...@gmail.com wrote:

 On Tue, Aug 10, 2010 at 6:25 AM, Steve Richfield wrote:

 Note my prior posting explaining my inability even to find a source of
 used mice for kids to use in high-school anti-aging experiments, all while
 university labs are now killing their vast numbers of such mice. So long as
 things remain THIS broken, anything that isn't part of the solution simply
 becomes a part of the very big problem, AIs included.


 You might be interested in this - I've been putting together an
 adopt-a-lab-rat program that is actually an adoption program for lab mice.


... then it is an adopt-a-mouse program?

I don't know if you are a *Pinky and the Brain* fan, but calling your
project something like *The Pinky Project* would be catchy.

In some cases mice that are used as a control group in experiments are then
 discarded at the end of the program because, honestly, their lifetime is
 over more or less, so the idea is that some people might be interested in
 adopting these mice.


I had several discussions with the folks at the U of W whose job it was to
euthanize those mice. Their worries seemed to center on two areas:
1.  Financial liability, e.g. a mouse bites a kid, whose finger becomes
infected and...
2.  Social liability, e.g. some kids who are torturing them put their videos
on the Internet.

Of course, you can also just pony up the $15 and get one from Jackson Labs.


Not the last time I checked. They are very careful NOT to sell them to
exactly the same population that I intend to supply them to - high-school
kids. I expect that if I became a middleman, that they would simply stop
selling to me. Even I would have a hard time purchasing them, because they
only sell to genuine LABS.

I haven't fully launched adopt-a-lab-rat yet because I am still trying to
 figure out how to avoid ending up in a situation where I have hundreds of
 rats and rodents running around my apartment and I get the short end of the
 stick (oops).


*What is your present situation and projections? How big a volume could you
supply? What are their approximate ages? Do they have really good
documentation? Were they used in any way that might compromise anti-aging
experiments, e.g. raised in a nicer-than-usual-laboratory environment? Do
you have any liability concerns as discussed above?
*

Mice in the wild live ~4 years. Lab mice live ~2 years. If you take a young
lab mouse and do everything you can to extend its life, you can approach 4
years. If you take an older lab mouse and do everything you can, you double
the REMAINDER of its life, e.g. starting with a one-year-old mouse, you
could get it to live ~3 years. How much better (or worse) than this you do
is the basis the Methuselah Mouse people use for judging.
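
As a sanity check on that arithmetic, here is a tiny Python sketch of the
remainder-doubling rule of thumb (the ~2-year and ~4-year figures are just
the round numbers quoted above):

    # Rule of thumb from above: the best case roughly doubles the REMAINDER
    # of a lab mouse's expected ~2-year life.
    LAB_LIFESPAN_YEARS = 2.0

    def best_case_lifespan(current_age_years):
        remainder = max(LAB_LIFESPAN_YEARS - current_age_years, 0.0)
        return current_age_years + 2.0 * remainder

    print(best_case_lifespan(0.0))  # ~4 years for a young lab mouse
    print(best_case_lifespan(1.0))  # ~3 years for a one-year-old mouse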

Hence, really good documentation is needed to establish when they were born,
and when they left a laboratory environment. Tattoos or tags link the mouse
to the paperwork. If I/you/we are to get kids to compete to develop better
anti-aging methods, the mice need to be documented well enough to PROVE
beyond a shadow of a doubt that they did what they claimed they did.

Steve





Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Ben,

Genescient has NOT paralleled human mating habits that would predictably
shorten life. They have only started from a point well beyond anything
achievable in the human population, and gone on from there. Hence, while
their approach may find some interesting things, it is unlikely to find the
things that are now killing our elderly population.

Continuing...

On Tue, Aug 10, 2010 at 11:59 AM, Ben Goertzel b...@goertzel.org wrote:




 I should dredge up and forward past threads with them. There are some
 flaws in their chain of reasoning, so that it won't be all that simple to
 sort the few relevant from the many irrelevant mutations. There is both a
 huge amount of noise, and irrelevant adaptations to their environment and
 their treatment.


 They have evolved many different populations in parallel, using the same
 fitness criterion.  This provides powerful noise filtering


Multiple measurements improve the S/N ratio by the square root of the number
of measurements. Hence, if they were to develop 100 parallel populations,
they could expect to improve their S/N ratio by 10:1. They haven't done 100
parallel populations, and they need much better than a 10:1 improvement in the
S/N ratio.
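
A quick numerical illustration of that square-root averaging law (just the
standard statistics, nothing specific to Genescient's data):

    import math

    # Averaging N independent populations improves S/N by roughly sqrt(N).
    def snr_improvement(num_populations):
        return math.sqrt(num_populations)

    print(snr_improvement(100))  # 10.0  - 100 parallel populations buy about a 10:1 gain
    print(snr_improvement(10))   # ~3.16 - far less than 10:1 with only 10 populations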

Of course, this is all aside from the fact that their signal is wrong
because of the different mating habits.


 Even when the relevant mutations are eventually identified, it isn't clear
 how that will map to usable therapies for the existing population.


 yes, that's a complex matter



 Further, most of the things that kill us operate WAY too slowly to affect
 fruit flies, though there are some interesting dual-affecting problems.


 Fruit flies get all the  major ailments that kill people frequently, except
 cancer.  heart disease, neurodegenerative disease, respiratory problems,
 immune problems, etc.


Curiously, the list of conditions that they DO exhibit appears to be the
SAME list as people with reduced body temperatures exhibit. This suggests
simply correcting elderly people's body temperatures as they crash. Then,
where do we go from there?

Note that as you get older, your risk of contracting cancer rises
dramatically - SO dramatically that the odds of you eventually contracting
it are ~100%. Meanwhile, the risks of the other diseases DECREASE as you get
older past a certain age, so if you haven't contracted them by ~80, then you
probably never will contract them.

Scientific American had an article a while back about people in Israel who
are 100 years old. At ~100, your risk of dying during each following year
DECREASES with further advancing age!!! This strongly suggests some
early-killers, that if you somehow escape them, you can live for quite a
while. Our breeding practices would certainly invite early-killers. Of
course, only a very tiny segment of the population lives to be 100.


 As I have posted in the past, what we have here in the present human
 population is about the equivalent of a fruit fly population that was bred
 for the shortest possible lifespan.


 Certainly not.


??? Not what?


 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


Where? References? The last I looked, all they had in addition to their
long-lived groups were uncontrolled control groups, and no groups bred only
from young flies.

In any case, the sociology of humans is SO much different from that of
fruit flies, and breeding practices interact so much with sociology, e.g.
the bright colorings of birds, beards (which I have commented on before),
etc. In short, I would expect LOTS of mutations in young-bred groups, but
entirely different mutations in people than in fruit flies.

I suspect that there is LOTS more information in the DNA of healthy people
over 100 than there is in any population of fruit flies. Perhaps data from
fruit flies could then be used to reduce the noise from the limited human
population who lives to be 100? Anyway, if someone has thought this whole
thing out, I sure haven't seen it. Sure, there is probably lots to be learned
from genetic approaches, but Genescient's approach seems flawed by its
simplicity.

The challenge here is as always. The value of such research to us is VERY
high, yet there is no meaningful funding. If/when an early AI becomes
available to help in such efforts, there simply won't be any money available
to divert it away from defense (read that: offense) work.

Steve





Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Ben,

It seems COMPLETELY obvious (to me) that almost any mutation would shorten
lifespan, so we shouldn't expect to learn much from it. What particular
lifespan-shortening mutations are in the human genome wouldn't be expected
to be the same as in fruit flies, or even the same across separated human
populations. Hmmm, an interesting thought: I wonder if certain racially
mixed people have shorter lifespans because they have several disjoint sets
of such mutations?!!! Any idea where to find such data?

It has long been noticed that some racial subgroups do NOT have certain
age-related illnesses, e.g. Japanese don't have clogged arteries, but they
DO have lots of cancer. So far everyone has been blindly presuming diet, but
seeking a particular level of genetic disaster could also explain it.

Any thoughts?

Steve

On Wed, Aug 11, 2010 at 8:06 AM, Ben Goertzel b...@goertzel.org wrote:


 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


 Where? References? The last I looked, all they had in addition to their
 long-lived groups were uncontrolled control groups, and no groups bred only
 from young flies.



 Michael Rose's UCI lab has evolved flies specifically for short lifespan,
 but the results may not be published yet...







Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Steve Richfield
Ben,

On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel b...@goertzel.org wrote:


 I'm speaking there, on Ai applied to life extension; and participating in a
 panel discussion on narrow vs. general AI...

 Having some interest, expertise, and experience in both areas, I find it
hard to imagine much interplay at all.

The present challenge is wrapped up in a lack of basic information,
resulting from insufficient funds to do the needed experiments.
Extrapolations have already gone WAY beyond the data, and new methods to
push extrapolations even further wouldn't be worth nearly as much as just a
little more hard data.

Just look at Aubrey's long list of aging mechanisms. We don't now even know
which predominate, or which cause others. Further, there are new candidates
arising every year, e.g. Burzynski's theory that most aging is secondary to
methylation of DNA receptor sites, or my theory that Aubrey's entire list
could be explained by people dropping their body temperatures later in life.
There are LOTS of other theories, and without experimental results, there is
absolutely no way, AI or not, to sort the wheat from the chaff.

Note that one of the front runners, the cosmic ray theory, could easily be
tested by simply raising some mice in deep tunnels. This is high-school
level stuff, yet with NO significant funding for aging research, it remains
undone.

Note my prior posting explaining my inability even to find a source of
used mice for kids to use in high-school anti-aging experiments, all while
university labs are now killing their vast numbers of such mice. So long as
things remain THIS broken, anything that isn't part of the solution simply
becomes a part of the very big problem, AIs included.

The best that an AI could seemingly do is to pronounce "Fund and facilitate
basic aging research" and then suspend execution pending an interrupt
indicating that the needed experiments have been done.

Could you provide some hint as to where you are going with this?

Steve





Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Steve Richfield
Ben,

On Tue, Aug 10, 2010 at 8:44 AM, Ben Goertzel b...@goertzel.org wrote:


 I'm writing an article on the topic for H+ Magazine, which will appear in
 the next couple weeks ... I'll post a link to it when it appears

 I'm not advocating applying AI in the absence of new experiments of
 course.  I've been working closely with Genescient, applying AI tech to
 analyze the genomics of their long-lived superflies, so part of my message
 is about the virtuous cycle achievable via synergizing AI data analysis with
 carefully-designed experimental evolution of model organisms...


I should dredge up and forward past threads with them. There are some flaws
in their chain of reasoning, so that it won't be all that simple to sort the
few relevant from the many irrelevant mutations. There is both a huge amount
of noise, and irrelevant adaptations to their environment and their
treatment. Even when the relevant mutations are eventually identified, it
isn't clear how that will map to usable therapies for the existing
population.

Perhaps you remember the old Star Trek episode about the long-lived
population that was still locked in a war after hundreds of years? The
episode devolved into a dispute over the potential value of this discovery -
was there something valuable in the environment, or did they just evolve to
live longer? Here, the long-lived population isn't even human.

Further, most of the things that kill us operate WAY too slowly to affect
fruit flies, though there are some interesting dual-affecting problems.
Unfortunately, it isn't as practical to autopsy fruit flies as it is to
autopsy people to see what killed them.

As I have posted in the past, what we have here in the present human
population is about the equivalent of a fruit fly population that was bred
for the shortest possible lifespan. Our social practices could hardly do
worse. Our present challenge is to get to where fruit flies were before Rose
first bred them for long life.

I strongly suspect that we have some early-killer mutations, e.g. to kill
people off as quickly as possible after they pass child-bearing age, which
itself is probably being shortened through our bizarre social habits of
mating like-aged people. Genescient's approach holds no promise of
identifying THOSE genes, and identifying the other genes won't help at all
until those killer genes are first silenced.

In short, there are some really serious challenges to Genescient's approach.
I expect success from several other quarters long before Genescient bears
real-world usable fruit. I suspect that these challenges, along with the
ubiquitous shortage of funding, will keep Genescient from producing
real-world usable results pretty much forever.

Future AGI output: Fund aging research.

Update on studying more of Burzynski's papers: His is not a cancer cure at
all. What he is doing is removing gene-silencing methylation from the DNA,
and letting nature take its course, e.g. having the immune system kill the
cancer via apoptosis. In short, it is a real-world anti-aging approach
that has snuck in under the radar. OF COURSE any real-world working
anti-aging approach would kill cancer! How good is his present product? Who
knows? It sure looks to me like this is a valid approach, and I suspect that
any bugs will get worked out in time. WATCH THIS. This looks to me like it
will work in the real world long before any other of the present popular
approaches stand a chance of working. After all, it sure seems to be working
on some people with really extreme gene silencing - called cancer.

Steve





Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread Steve Richfield
Ben

On Sat, Aug 7, 2010 at 6:10 PM, Ben Goertzel b...@goertzel.org wrote:

 I need to substantiate the case for such AGI
 technology by making an argument for high-value apps.


There is interesting hidden value in some stuff. In the case of Dr. Eliza,
it provides a communication pathway to sick people, which is EXACTLY what a
research institution needs to support itself.

I think you may be on to something here - looking for high-value.

Steve





Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Steve Richfield
John,

You brought up some interesting points...

On Fri, Aug 6, 2010 at 10:54 PM, John G. Rose johnr...@polyplexic.comwrote:

  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
  On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com
  wrote:
  statements of stupidity - some of these are examples of cramming
  sophisticated thoughts into simplistic compressed text.
 
  Definitely, as even the thoughts of stupid people transcend our (present)
  ability to state what is happening behind their eyeballs. Most stupidity is
  probably beyond simple recognition. For the initial moment, I was just
  looking at the linguistic low hanging fruit.

 You are talking about, those phrases, some are clichés,


There seems to be no clear boundary between clichés and other stupid
statements, except maybe that clichés are exactly quoted, like "that's just
your opinion", while other statements are grammatically adapted to fit the
sentences and paragraphs that they inhabit.

Dr. Eliza already translates idioms before processing. I could add clichés
without changing a line of code, e.g. "that's just your opinion" might
translate into something like "I am too stupid to understand your
explanation."

Dr. Eliza has an extensive wildcard handler, so it should be able to handle
the majority of grammatically adapted statements in the same way, by simply
including appropriate wildcards in the pattern.
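
To make that concrete, here is a minimal sketch of cliché translation with
wildcard patterns. It only illustrates the mechanism described above - the
pattern table and translations are invented, and this is not Dr. Eliza's
actual code:

    import re

    # Hypothetical cliché table: pattern (with wildcards) -> translated meaning.
    CLICHES = [
        (r"that'?s just your opinion", "I am too stupid to understand your explanation"),
        (r"i had no choice but to (.+)", r"I saw no apparent rational choice but to \1"),
    ]

    def translate_cliches(text):
        for pattern, replacement in CLICHES:
            text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
        return text

    print(translate_cliches("that's just your opinion"))
    print(translate_cliches("I had no choice but to sell the house"))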

are like local K
 complexity minima, in a knowledge graph of partial linguistic structure,
 where neural computational energy is preserved, and the statements are
 patterns with isomorphisms to other experiential knowledge intra and inter
 agent.


That is, other illogical misunderstandings of the real world, which are
probably NOT shared with more intelligent agents. This presents a serious
problem with understanding by more intelligent agents.

More intelligent agents have ways of working more optimally with the
 neural computational energy, perhaps by using other more efficient patterns
 thus avoiding those particular detrimental pattern/statements.


... and this presents a communications problem with agents with radically
different intelligences, both greater and lesser.


 But the
 statements are catchy because they are common and allow some minimization
 of
 computational energy as well as they are like objects in a higher level
 communication protocol. To store them is less bits and transfer is less
 bits
 per second.


However, they have negative information content - if that is possible -
because they require a false model of the world to process, and they produce
completely erroneous results. Of course, despite these problems, they DO
somewhat accurately communicate the erroneous nature of the thinking, so
there IS some value there.


 Their impact is maximal since they are isomorphic across
 knowledge and experience.


... the ultimate being: Do, or do not. There is no try.


 At some point they may just become symbols due to
 their pre-calculated commonness.


Egad, symbols to display stupidity. Could linguistics have anything that is
WORSE?!


  Language is both intelligence enhancing and limiting. Human language is a
  protocol between agents. So there is minimalist data transfer, I had no
  choice but to ... is a compressed summary of potentially vastly complex
  issues.
 
  My point is that they could have left the country, killed their
 adversaries,
  taken on a new ID, or done any number of radical things that they
 probably
  never considered, other than taking whatever action they chose to take. A
  more accurate statement might be I had no apparent rational choice but
 to
  

 The other low probability choices are lossily compressed out of the
 expressed statement pattern. It's assumed that there were other choices,
 usually factored in during the communicational complexity related
 decompression, being situational. The onus at times is on the person
 listening to the stupid statement.


I see. This example was in reality a gap, or ellipsis, where reasonably
presumed words were omitted. These are always a challenge, except in common
places like clichés, where the missing words can be automatically inserted.

Thanks again for your thoughts.

Steve
=


  The mind gets hung-up sometimes on this language of ours. Better off at
  times to think less using English language and express oneself with a
 wider
  spectrum communiqué. Doing a dance and throwing paint in the air for
  example, as some *primitive* cultures actually do, conveys information
 also
  and is medium of expression rather than using a restrictive human chat
  protocol.
 
  You are saying that the problem is that our present communication permits
  statements of stupidity, so we shouldn't have our present system of
  communication? Scrap English?!!! I consider statements of stupidity as a
 sort
  of communications checksum, to see if real interchange of ideas is even
  possible. Often

Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Steve Richfield
Ben,

Dr. Eliza with the Gracie interface to Dragon NaturallySpeaking makes a
really spectacular speech I/O demo - when it works, which is ~50% of the
time. The other 50% of the time, it fails to recognize enough to run with,
misses something critical, etc., and just sounds stupid, kinda like most
doctors I know. Even when it fails, it still babbles on with domain-specific
comments.

Results are MUCH better when a person with speech I/O and chronic illness
experience operates it.

Note that Gracie handles interruptions and other violations of
conversational structure. Further, it speaks in 3 voices, one for the
expert, one for the assistant, and one for the environment and OS.

Note that the Microsoft standard speech I/O has a mouth control that moves
simultaneously with the sound and is pasted on an egghead face, so you can
watch it speak.

Note that the speech recognition works AMAZINGLY well, because the ONLY
things it is interested in are long technical words and relevant phrases, and
NOT the short connecting words that usually get messed up. When you watch
what was recognized during casual conversation, what you typically see is
gobbledygook between the important stuff, which comes shining through.
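
A minimal sketch of that filtering idea - keep the long technical words and
known domain phrases from a noisy transcript, and ignore the short connecting
words. The phrase list and length threshold are invented for illustration;
this is not the actual Gracie/Dr. Eliza code:

    # Hypothetical filter: ignore short connecting words, keep long/technical terms.
    DOMAIN_PHRASES = ["body temperature", "chronic fatigue"]
    MIN_WORD_LEN = 8  # long words are far more likely to be technical terms

    def extract_keywords(transcript):
        text = transcript.lower()
        hits = [p for p in DOMAIN_PHRASES if p in text]
        hits += [w for w in text.split() if len(w) >= MIN_WORD_LEN]
        return hits

    print(extract_keywords("uh so my body temperature drops and the hypothyroidism gets worse"))
    # -> ['body temperature', 'temperature', 'hypothyroidism']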

There are plans to greatly enhance all this, but like everything else on
this forum, it suffers from inadequate resources. If someone is looking for
something that is demonstrable right now to throw even modest resources
into...

That program was then adapted to a web server by adding logic to sense when
it was on a server, whereupon some additional buttons appear to operate and
debug it in a server environment. That adapted program is now up and
running, without any of the speech I/O stuff, on http://www.DrEliza.com.

I know, it isn't AGI, but neither is anything else these days.

Any interest?

Steve

On Sat, Aug 7, 2010 at 6:10 PM, Ben Goertzel b...@goertzel.org wrote:

 Hi,

 A fellow AGI researcher sent me this request, so I figured I'd throw it
 out to you guys

 
 I'm putting together an AGI pitch for investors and thinking of low
 hanging fruit applications to argue for. I'm intentionally not
 involving any mechanics (robots, moving parts, etc.). I'm focusing on
 voice (i.e. conversational agents) and perhaps vision-based systems.
 Hellen Keller AGI, if you will :)

 Along those lines, I'd like any ideas you may have that would fall
 under this description. I need to substantiate the case for such AGI
 technology by making an argument for high-value apps. All ideas are
 welcome.
 

 All serious responses will be appreciated!!

 Also, I would be grateful if we
 could keep this thread closely focused on direct answers to this
 question, rather than
 digressive discussions on Helen Keller, the nature of AGI, the definition
 of AGI
 versus narrow AI, the achievability or unachievability of AGI, etc.
 etc.  If you think
 the question is bad or meaningless or unclear or whatever, that's
 fine, but please
 start a new thread with a different subject line to make your point.

 If the discussion is useful, my intention is to mine the answers into a
 compact
 list to convey to him

 Thanks!
 Ben G








Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Steve Richfield
Ian,

I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites, and whenever a
meeting was set up, it would issue a message to its human owners to alert
the police to go and arrest the pedophiles at the arranged time and place.
No human interaction was needed between arrests.

I can imagine an adaptation, wherein a program claims to be manufacturing
explosives, and is looking for other people to deliver those explosives.
With such a story line, there should be no problem arranging deliveries, at
which time you would arrest the would-be bombers.

I wish I could tell you more about the British project, but they were VERY
secretive. I suspect that some serious Googling would yield much more.

Hopefully you will find this helpful.

Steve
=
On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker ianpark...@gmail.com wrote:

 I wanted to see what other people's views were.My own view of the risks is
 as follows. If the Turing Machine is built to be as isomorphic with humans
 as possible, it would be incredibly dangerous. Indeed I feel that the
 biological model is far more dangerous than the mathematical.

 If on the other hand the TM was *not* isomorphic and made no attempt to
 be, the dangers would be a lot less. Most Turing/Löbner entries are
 chatterboxes that work on databases. The database being filled as you chat.
 Clearly the system cannot go outside its database and is safe.

 There is in fact some use for such a chatterbox. Clearly a Turing machine
 would be able to infiltrate militant groups however it was constructed. As
 for it pretending to be stupid, it would have to know in what direction it
 had to be stupid. Hence it would have to be a good psychologist.

 Suppose it logged onto a jihadist website, as well as being able to pass
 itself off as a true adherent, it could also look at the other members and
 assess their level of commitment and knowledge. I think that the
 true Turing/Löbner test is not working in a laboratory environment but they
 should log onto jihadist sites and see how well they can pass themselves
 off. If it could do that it really would have arrived. Eventually it could
 pass itself off as a *pentito*, to use the Mafia term, and produce
 arguments from the Qur'an against the militant position.

 There would be quite a lot of contracts to be had if there were a realistic
 prospect of doing this.


   - Ian Parker

 On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

  Philosophical question 2 - Would passing the TT assume human stupidity
 and
  if so would a Turing machine be dangerous? Not necessarily, the Turing
  machine could talk about things like jihad without
 ultimately identifying with
  it.
 

 Humans without augmentation are only so intelligent. A Turing machine
 would
 be potentially dangerous, a really well built one. At some point we'd need
 to see some DNA as ID of another extended TT.

  Philosophical question 3 :- Would a TM be a psychologist? I think it
 would
  have to be. Could a TM become part of a population simulation that would
  give us political insights.
 

 You can have a relatively stupid TM or a sophisticated one just like
 humans.
 It might be easier to pass the TT by not exposing too much intelligence.

 John

  These 3 questions seem to me to be the really interesting ones.
 
 
- Ian Parker












[agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
To All,

I have posted plenty about statements of ignorance, our probable inability
to comprehend what an advanced intelligence might be thinking, heidenbugs,
etc. I am now wrestling with a new (to me) concept that hopefully others
here can shed some light on.

People often say things that indicate their limited mental capacity, or at
least their inability to comprehend specific situations.

1)  One of my favorites is people who say I had no choice but to ...,
which of course indicates that they are clearly intellectually challenged
because there are ALWAYS other choices, though it may be difficult to find
one that is in all respects superior. While theoretically this statement
could possibly be correct, in practice I have never found this to be the
case.

2)  Another one recently from this very forum was If it sounds too good to
be true, it probably is. This may be theoretically true, but in fact was,
as usual, made as a statement as to why the author was summarily dismissing
an apparent opportunity of GREAT value. This dismissal of something BECAUSE
of its great value would seem to severely limit the author's prospects for
success in life, which probably explains why he spends so much time here
challenging others who ARE doing something with their lives.

3)  I used to evaluate inventions for some venture capitalists. Sometimes I
would find that some basic law of physics, e.g. conservation of energy,
would have to be violated for the thing to work. When I explained this to
the inventors, their inevitable reply was Yea, and they also said that the
Wright Brothers' plane would never fly. To this, I explained that the
Wright Brothers had invested ~200 hours of effort working with their crude
homemade wind tunnel, and asked what the inventors had done to prove that
their own invention would work.

4)  One old stupid standby, spoken when you have made a clear point that
shows that their argument is full of holes: That is just your opinion. No,
it is a proven fact for you to accept or refute.

5)  Perhaps you have your own pet statements of stupidity? I suspect that
there may be enough of these to dismiss some significant fraction of
prospective users of beyond-human-capability (I just hate the word
intelligence) programs.

In short, semantic analysis of these statements typically would NOT find
them to be conspicuously false, and hence even an AGI would be tempted to
accept them. However, their use almost universally indicates some
short-circuit in thinking. The present Dr. Eliza program could easily
recognize such statements.
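
As a rough illustration of how mechanical such recognition could be, here is
a sketch that flags a few of the statements listed above with plain patterns.
The pattern list is invented for illustration and is not part of the present
Dr. Eliza program:

    import re

    # Hypothetical patterns for a few of the statements of stupidity listed above.
    STUPIDITY_PATTERNS = [
        r"\bi had no choice but to\b",
        r"\bif it sounds too good to be true\b",
        r"\bthat('s| is) just your opinion\b",
        r"\bthey also said .* would never fly\b",
    ]

    def flag_statements(text):
        text = text.lower()
        return [p for p in STUPIDITY_PATTERNS if re.search(p, text)]

    print(flag_statements("Sorry, but I had no choice but to fire him."))
    # -> ['\\bi had no choice but to\\b']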

OK, so what? What should an AI program do when it encounters a stupid user?
Should some attempt be made to explain stupidity to someone who is almost
certainly incapable of comprehending their own stupidity? "Stupidity is
forever" is probably true, especially when expressed by an adult.

Note my own dismissal of some past posters for insufficient mental ability
to understand certain subjects, whereupon they invariably came back
repeating the SAME flawed logic, after I carefully explained the breaks in
their logic. Clearly, I was just wasting my effort by continuing to interact
with these people.

Note that providing a stupid user with ANY output is probably a mistake,
because they will almost certainly misconstrue it in some way. Perhaps it
might be possible to dumb down the output to preschool-level, at least
that (small) part of the output that can be accurately stated in preschool
terms.

Eventually as computers continue to self-evolve, we will ALL be categorized
as some sort of stupid, and receive stupid-adapted output.

I wonder whether, ultimately, computers will have ANYTHING to say to us,
like any more than we now say to our dogs.

Perhaps the final winner of the Reverse Turing Test will remain completely
silent?!

You don't explain to your dog why you can't pay the rent - from *The Fall
of Colossus*.

Any thoughts?

Steve





Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
Mike,

Your reply flies in the face of two obvious facts:
1.  I have little interest in what is called AGI here. My interests lie
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several
reasons, as it is directly applicable to Dr. Eliza, and because it casts a
shadow on future dreams of AGI. I was hoping that those people who have
thought things through regarding AGIs might have some thoughts here. Maybe
these people don't (yet) exist?!
2.  You seem to think that a walk before you run approach, basically a
bottom-up approach to AGI, is the obvious one. It sure isn't obvious to me.
Besides, if my statements of stupidity theory is true, then why even
bother building AGIs, because we won't even be able to meaningfully discuss
things with them.

Steve
==
On Fri, Aug 6, 2010 at 2:57 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  sTEVE:I have posted plenty about statements of ignorance, our probable
 inability to comprehend what an advanced intelligence might be thinking,

 What will be the SIMPLEST thing that will mark the first sign of AGI ? -
 Given that there are zero but zero examples of AGI.

 Don't you think it would be a good idea to begin at the beginning? With
 initial AGI? Rather than advanced AGI?






Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
John,

Congratulations, as your response was the only one that was on topic!!!

On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.comwrote:

 statements of stupidity - some of these are examples of cramming
 sophisticated thoughts into simplistic compressed text.


Definitely, as even the thoughts of stupid people transcend our (present)
ability to state what is happening behind their eyeballs. Most stupidity is
probably beyond simple recognition. For the initial moment, I was just
looking at the linguistic low hanging fruit.

Language is both intelligence enhancing and limiting. Human language is a
 protocol between agents. So there is minimalist data transfer, I had no
 choice but to ... is a compressed summary of potentially vastly complex
 issues.


My point is that they could have left the country, killed their adversaries,
taken on a new ID, or done any number of radical things that they probably
never considered, other than taking whatever action they chose to take. A
more accurate statement might be I had no apparent rational choice but to


The mind gets hung-up sometimes on this language of ours. Better off at
 times to think less using English language and express oneself with a wider
 spectrum communiqué. Doing a dance and throwing paint in the air for
 example, as some **primitive** cultures actually do, conveys information
 also and is medium of expression rather than using a restrictive human chat
 protocol.


You are saying that the problem is that our present communication permits
statements of stupidity, so we shouldn't have our present system of
communication? Scrap English?!!! I consider statements of stupidity as a
sort of communications checksum, to see if real interchange of ideas is even
possible. Often, it is quite impossible to communicate new ideas to
inflexible-minded people.



 BTW the rules of etiquette of the human language protocol are even more
 potentially restricting though necessary for efficient and standardized data
 transfer to occur. Like, TCP/IP for example. The Etiquette in TCP/IP is
 like an OSI layer, akin to human language etiquette.


I'm not sure how this relates, other than possibly identifying people who
don't honor linguistic etiquette as being (potentially) stupid. Was that
your point?

Steve
==


 *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]

 To All,

 I have posted plenty about statements of ignorance, our probable
 inability to comprehend what an advanced intelligence might be thinking,
 heidenbugs, etc. I am now wrestling with a new (to me) concept that
 hopefully others here can shed some light on.

 People often say things that indicate their limited mental capacity, or at
 least their inability to comprehend specific situations.

 1)  One of my favorites are people who say I had no choice but to ...,
 which of course indicates that they are clearly intellectually challenged
 because there are ALWAYS other choices, though it may be difficult to find
 one that is in all respects superior. While theoretically this statement
 could possibly be correct, in practice I have never found this to be the
 case.

 2)  Another one recently from this very forum was If it sounds too good to
 be true, it probably is. This may be theoretically true, but in fact was,
 as usual, made as a statement as to why the author was summarily dismissing
 an apparent opportunity of GREAT value. This dismissal of something BECAUSE
 of its great value would seem to severely limit the authors prospects for
 success in life, which probably explains why he spends so much time here
 challenging others who ARE doing something with their lives.

 3)  I used to evaluate inventions for some venture capitalists. Sometimes I
 would find that some basic law of physics, e.g. conservation of energy,
 would have to be violated for the thing to work. When I explained this to
 the inventors, their inevitable reply was Yea, and they also said that the
 Wright Brothers' plane would never fly. To this, I explained that the
 Wright Brothers had invested ~200 hours of effort working with their crude
 homemade wind tunnel, and ask what the inventors have done to prove that
 their own invention would work.

 4)  One old stupid standby, spoken when you have make a clear point that
 shows that their argument is full of holes That is just your opinion. No,
 it is a proven fact for you to accept or refute.

 5)  Perhaps you have your own pet statements of stupidity? I suspect that
 there may be enough of these to dismiss some significant fraction of
 prospective users of beyond-human-capability (I just hate the word
 intelligence) programs.

 In short, semantic analysis of these statements typically would NOT find
 them to be conspicuously false, and hence even an AGI would be tempted to
 accept them. However, their use almost universally indicates some
 short-circuit in thinking. The present Dr. Eliza program could easily
 recognize such statements.

 OK

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Steve Richfield
David,

You are correct in that I keep bad company. My approach to NNs is VERY
different than other people's approaches. I insist on reasonable math being
performed on quantities that I understand, which sets me apart from just
about everyone else.

Your neat approach isn't all that neat, and is arguably scruffier than
mine. At least I have SOME math to back up my approach. Further, note that
we are self-organizing systems, and that this process is poorly understood.
I am NOT particularly interested in people-programmed systems because of their
very fundamental limitations. Yes, self-organization is messy, but it fits
the neat definition better than it fits the scruffy definition. Scruffy
has more to do with people-programmed ad hoc approaches (like most of AGI),
which I agree are a waste of time.

Steve

On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I wrote
 was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached is
 a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a scruffies approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield steve.richfi...@gmail.com
  wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.comwrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images to
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I thought
 is that I found a way to describe why existing solutions work, how they work
 and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several frames of a
 video and over a statistically significant distance. They use thresholds to
 ensure that the observed changes are likely transformations of the original
 property observed or to ensure the statistical significance of an
 observation. These are highly likely true observations and not coincidences
 or noise.

 *Second*, they make sure that the other possible explanations of the
 observations are very unlikely. This is usually done using a threshold, and
 a second difference threshold from the first match to the second match. This
 makes sure that second best matches are much farther away than the best
 match. This is important because it's not enough to find a very likely match
 if there are 1000 very likely matches. You have to be able to show that the
 other matches are very unlikely, otherwise the specific match you pick may
 be just a tiny bit better than the others, and the confidence of that match
 would be very low.
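
(A minimal sketch of that best-vs-second-best idea, using made-up descriptor
vectors rather than any particular feature library; the threshold value is
illustrative only:)

    # Ratio test: accept a match only if the best candidate is much closer than
    # the second best, so the match is unambiguous rather than merely good.
    def best_match(query, candidates, ratio=0.7):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(query, c)) ** 0.5, i)
                       for i, c in enumerate(candidates))
        (d1, i1), (d2, _) = dists[0], dists[1]
        return i1 if d1 < ratio * d2 else None  # None = ambiguous, reject

    print(best_match([1.0, 0.0], [[0.9, 0.1], [5.0, 5.0]]))  # 0 (clear winner)
    print(best_match([1.0, 0.0], [[0.9, 0.1], [1.1, 0.0]]))  # None (two near-equal matches)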


 So, my initial design plans are as follows. Note: I will probably not
 actually implement the system because the engineering part dominates the
 time. I'd rather convert real

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Steve Richfield
David

On Wed, Aug 4, 2010 at 1:16 PM, David Jones davidher...@gmail.com wrote:

 3) requires manually created training data, which is a major problem.


Where did this come from? Certainly, people are ill-equipped to create dP/dt
type data. These would have to come from sensors.


 4) is designed with biological hardware in mind, not necessarily existing
 hardware and software.


The biology is just good to help the math over some humps. So far, I have
not been able to identify ANY neuronal characteristic that hasn't been
refined to near-perfection, once the true functionality was fully
understood.

Anyway, with the math, you can build a system anyway you want. Without the
math, you are just wasting your time and electricity. The math comes first,
and all other things follow.

Steve
===


 These are my main reasons, at least that I can remember, that I avoid
 biologically inspired methods. It's not to say that they are wrong. But they
 don't meet my requirements. It is also very unclear how to implement the
 system and make it work. My approach is very deliberate, so the steps
 required to make it work are pretty clear to me.

 It is not that your approach is bad. It is just different and I really
 prefer methods that are not biologically inspired, but are designed
 specifically with goals and requirements in mind as the most important
 design motivator.

 Dave

 On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield steve.richfi...@gmail.com
  wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your neat approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interest in people-programmed systems because of their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the neat definition better than it meets the scruffy definition. Scruffy
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
 On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.comwrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What a
 wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached
 is a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a scruffies approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.comwrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot 
 computer
 vision that you can in real computer vision. This makes experience 
 probably
 necessary to effectively learn from screenshots. Objects in real images to
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I
 thought is that I found a way to describe why existing solutions work, how
 they work and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread Steve Richfield
David,

On Wed, Aug 4, 2010 at 1:45 PM, David Jones davidher...@gmail.com wrote:


 Understanding what you are trying to accomplish and how you want the system
 to work comes first, not math.


It's all the same. First comes the qualitative, then comes the quantitative.



 If your neural net doesn't require training data,


Sure it needs training data - real-world interactive sensory input training
data, rather than static, manually prepared training data.

I don't understand how it works or why you expect it to do what you want it
 to do if it is self organized. How do you tell it how to process inputs
 correctly? What guides the processing and analysis?


Bingo - you have just hit on THE great challenge in AI/AGI, and the source
of much past debate. Some believe in maximizing the information content of
the output. Some believe in other figures of merit, e.g. success in
interacting with a test environment, success in forming a layered structure,
etc. This particular sub-field is still WIDE open and waiting for some good
answers.

Note that this same problem presents itself, regardless of approach, e.g.
AGI.

Steve
===


 On Wed, Aug 4, 2010 at 4:33 PM, Steve Richfield steve.richfi...@gmail.com
  wrote:

 David

  On Wed, Aug 4, 2010 at 1:16 PM, David Jones davidher...@gmail.com wrote:

 3) requires manually created training data, which is a major problem.


  Where did this come from? Certainly, people are ill-equipped to create
 dP/dt type data. These would have to come from sensors.
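
 Purely as an illustration (Python, with invented array names; not a description
 of anyone's actual system), here is roughly what deriving dP/dt-type data from
 raw sensor frames might look like, contrasted with matching on absolute values:

    # Hypothetical sketch: compare raw pixel values directly vs. work with the
    # rate of change (dP/dt) approximated by a finite difference between frames.
    import numpy as np

    def absolute_match(frame_a, frame_b, tol=10):
        # Matching on absolute values; brittle under, e.g., a lighting change.
        return np.mean(np.abs(frame_a.astype(int) - frame_b.astype(int))) < tol

    def rate_of_change(prev_frame, curr_frame, dt=1.0):
        # dP/dt from two consecutive sensor frames, no manual labeling needed.
        return (curr_frame.astype(float) - prev_frame.astype(float)) / dt

    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, (4, 4))
    f1 = f0 + 20                             # same scene, uniformly brighter
    print(absolute_match(f0, f1))            # False: absolute values disagree
    print(np.std(rate_of_change(f0, f1)))    # 0.0: the change itself is uniform

 The point of the toy example is only that the rate signal stays structured when
 absolute levels shift, which is the sense in which such data would have to come
 from sensors rather than from hand-prepared training sets.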



 4) is designed with biological hardware in mind, not necessarily existing
 hardware and software.


 The biology is just good to help the math over some humps. So far, I have
 not been able to identify ANY neuronal characteristic that hasn't been
 refined to near-perfection, once the true functionality was fully
 understood.

 Anyway, with the math, you can build a system anyway you want. Without the
 math, you are just wasting your time and electricity. The math comes first,
 and all other things follow.

 Steve
 ===


 These are my main reasons, at least that I can remember, that I avoid
 biologically inspired methods. It's not to say that they are wrong. But they
 don't meet my requirements. It is also very unclear how to implement the
 system and make it work. My approach is very deliberate, so the steps
 required to make it work are pretty clear to me.

 It is not that your approach is bad. It is just different and I really
 prefer methods that are not biologically inspired, but are designed
 specifically with goals and requirements in mind as the most important
 design motivator.

 Dave

 On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your "neat" approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interested in people-programmed systems because of their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the "neat" definition better than it meets the "scruffy" definition. "Scruffy"
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
  On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

  I wouldn't say that's an accurate description of what I wrote. What I
  wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you
 attached is a Scruffy approach. Neat approaches are characterized by
 deliberate algorithms that are analogous to the problem and can sometimes 
 be
 shown to be provably correct. An example of a Neat approach is the use of
 features in the paper I mentioned. One can describe why the features are
 calculated and manipulated the way they are. An example of a scruffies
 approach would be neural nets, where you don't know the rules by which it
 comes up with an answer and such approaches are not very scalable. Neural
 nets require manually created training data and the knowledge generated is
 not in a form that can be used for other tasks. The knowledge isn't
 portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work

Re: [agi] Walker Lake

2010-08-03 Thread Steve Richfield
Matt,

On Tue, Aug 3, 2010 at 4:56 AM, tintner michael tint...@blueyonder.co.uk wrote:

 I totally agree that surveillance will become ever more massive - because
 it has v. positive as well as negative benefits. But people will find ways
 of resisting and evading it - they always do. And it's interesting to
 speculate how - perhaps ever more detailed public identities - (more and
 more facts about you becoming public knowledge) - will be matched by
 proliferating personas,  (people taking on false identities on the net) -
 or by black spots (times when you're allowed to switch off from the net
 and surveillance) - or no doubt by other means.


The government is now resorting to borderline insane methods to close in on
some of this. To illustrate:

I now live in a large home with a secure fence and remotely controlled gate,
which sits atop a high bluff which is on the same property. To provide some
separation of mail by subject and recipient, I decided to plant another
mailbox with a made-up address. I put a paper in it advising the mailman
(actually a woman) to activate the box, and put the flag up. The next day,
junk mail started to arrive, and it was clearly working.

Fast forward a year to the Census. No Census forms arrived in my new
mailbox. However, after the last investigator asking for information about
the main address was sent packing without any information, one evening yet
another Census investigator arrived asking about my made-up address. He said
that Google showed it as being on the steep part of the bluff. I simply said
that it didn't exist, and he left. The next morning there was a helicopter
hovering over the bluff examining it very carefully.

Apparently, they have given up on tracking personas, but NOT on tracking
properties. They must be going absolutely insane over the ~100K families
living in RVs.

Having lived on wheels for ~16 years in the past, I have observed the
continuous ratcheting up of regulations to control this population.
Dealing with this required a day or two of legal research every year or so.
My officially issued WA driver's license still says NOT FOR IDENTIFICATION
and Not a resident or citizen of WA state on it. I can't imagine someone
just starting out figuring out all that is necessary to navigate the legal
labyrinth.

Imagine the following which happens often to those who are unprepared: You
are driving along on a nice sunny day and a policeman pulls you over. He
asks for your driver's license and asks where you live. You give him your
license and indicate that you live in the RV that you are now driving. He
points out that your license was issued in a different state, and since you
now live in an RV that is distant from that state, your driver's license is
no longer valid. Also, your vehicle license is no longer valid, unless it is
from a state like Nevada that doesn't require residency as a precondition
for registration. He then VERY CAREFULLY inspects your vehicle and finds
that a tail light has burnt out. Oops, we'll have to red-tag this vehicle as
being unsafe! If you were unlucky enough to be stopped in Nevada, you would
probably be arrested for some minor traffic offense, as I once was. Then, a
tow truck arrives and tows your unsafe (because of the bad tail light)
and/or abandoned (because you are now in jail) vehicle away. When you go
to recover it, you discover that they want more money than you have, because
they charged thousands of dollars in towing and storage fees, plus there is
no way to correct its legal shortcomings to get it out of the lockup, and
they won't release it until it is 100% legal by THEIR standards. Storage
fees quickly mount up to a hopeless fortune, and they sell your home. There
are some small towns that support themselves partly in this manner. If you
live in an RV, you absolutely MUST take precautions against such things, because
variants of this are quite common: e.g. have a driver's license that isn't
automatically invalid anywhere, never drive your RV anywhere alone, have the
title SO messed up (e.g. with unsatisfied liens) that it is nearly
impossible to navigate the paperwork to seize it, don't own an RV that is
worth enough to employ lawyers to overcome the challenges that you have
placed in their way, etc.

Akin to Richard's proposal of having a hyper-complex network of constraints
to control an AGI, various governments have already developed hyper-complex
constraint networks to control people. After all, that is how our supposedly
free society works. Just take a week or so and read the motor vehicle code
for your state. Ain't freedom just wonderful?!

Having been through this, I hereby soundly reject your assertion that people
can overcome an AGI-controlled society. Sure, a few might manage, but to
overcome something like a central controlling government, it would take a
massive coordinated effort, and there is NO WAY that, with future technology
in the hands of a central controlling government, this would ever be
even 

[agi] Walker Lake

2010-08-02 Thread Steve Richfield
Sometime when you are flying between the northwest US to/from Las Vegas,
look out your window as you fly over Walker Lake in eastern Nevada. At the
south end you will see a system of roads leading to tiny buildings, all
surrounded by military security. From what I have been able to figure out,
you will find the U.S. arsenal of chemical and biological weapons housed
there. No, we are not now making these weapons, but neither are we disposing
of them.

Similarly, there has been discussion of developing advanced military
technology using AGI and other computer-related methods. I believe that
these efforts are fundamentally anti-democratic, as they allow a small
number of people to control a large number of people. Gone are the days when
people voted with their swords. We now have the best government that money
can buy monitoring our every email, including this one, to identify anyone
resisting such efforts. 1984 has truly arrived. This can only lead to a
horrible end to freedom, with AGIs doing their part and more.

Like chemical and biological weapons, unmanned and automated weapons should
be BANNED. Unfortunately, doing so would provide a window of opportunity for
others to deploy them. However, if we make these and stick them in yet
another building at the south end of Walker Lake, we would be ready in case
other nations deploy such weapons.

How about an international ban on the deployment of all unmanned and
automated weapons? The U.S. won't now even agree to ban land mines. At least
this would restore SOME relationship between popular support and military
might. Doesn't it sound ethical to insist that a human being decide when
to end another human being's life? Doesn't it sound fair to require the
decision maker to be in harm's way, especially when the person being killed
is in or around their own home? Doesn't it sound unethical to add to the
present situation? When deployed on a large scale, aren't these WMDs?

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Walker Lake

2010-08-02 Thread Steve Richfield
Matt,

I grant you your points, but they miss my point. Where is this
ultimately leading - to a superpower with the ability to kill its opponents
without any risk to itself. This may be GREAT so long as you agree with and
live under that superpower, but how about when things change for the worse?
What if we get another Bush who lies to congress and wages unprovoked war
with other nations, only next time with vast armies of robots ala *The Clone
Wars*? Sure the kill rate will be almost perfect. Sure we can more
accurately kill their heads of government without killing so many civilians
along the way.

How about when you flee future U.S. tyranny, and your new destination
becomes valued by the U.S. enough to send a bunch of robots in to seize it.
Your last thought could be of the U.S. robot that is killing YOU. Oops, too
late to reconsider where this is all going.

Note in passing that our standard of living has been gradually declining as
the wealth of the world is concentrated into fewer and fewer hands. Note in
passing that the unemployment situation is looking bleaker and bleaker, with
no prospect for improvement in sight. Do you REALLY want to concentrate SO
much power in the hands of SUCH a dysfunctional government? If this doesn't
work out well, what would be the options for improvement? This appears to be
a one-way street with no exit.

Steve
=
On Mon, Aug 2, 2010 at 7:55 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Steve: How about an international ban on the deployment of all unmanned
 and automated weapons?

 You might as well ask for a ban on war (or, perhaps, aggression). I
 strongly recommend reading the SciAm July 2010 issue on robotic warfare. The
 US already operates (from memory) somewhere between 13,000 and 20,000 unmanned
 weapons. Unmanned war (obviously with some, but ever less, human
 supervision) IS the future of war.

 If you used a little lateral thinking, you'd realise that this may well be
 a v.g. thing - let robots kill each other rather than humans - whoever's
 robots win, wins the war. It would be interesting to compare Afghan./Vietnam
 - I imagine the kill count is considerably down (but correct me) - *because*
 of superior, more automated technology.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Walker Lake

2010-08-02 Thread Steve Richfield
Matt,

On Mon, Aug 2, 2010 at 1:10 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Steve Richfield wrote:
  How about an international ban on the deployment of all unmanned and
 automated weapons?

 How about a ban on suicide bombers to level the playing field?


Of course we already have that. Unfortunately, one begets the other. Hence,
we seem to have a choice, neither or both. I vote for neither.


  1984 has truly arrived.

 No it hasn't. People want public surveillance.


I'm not sure what you mean by public surveillance. Monitoring private
phone calls? Monitoring otherwise unused web cams? Monitoring your output
when you use the toilet? Where, if anywhere, do YOU draw the line?


 It is also necessary for AGI. In order for machines to do what you want,
 they have to know what you know.


Unfortunately, once everything is known, any use of this information will
either be to my benefit or to my detriment. Do you foresee any way to limit use to
only beneficial use?

BTW, decades ago I adopted the plan that, whenever my kids got into some sort of
trouble in school or elsewhere, I would represent their interests as well as
possible, regardless of whether I agreed with them or not. This worked
EXTREMELY well for me, and for several other families who have tried it.
The point is that to successfully represent their interests, I had to know
what was happening. Potential embarrassment and explainability limited the
kids' actions. I wonder if the same would work for AGIs?


 In order for a global brain to use that knowledge, it has to be public.


Again, where do you draw the line between public and private?


 AGI has to be a global brain because it is too expensive to build any other
 way, and because it would be too dangerous if the whole world didn't control
 it.


I'm not sure what you mean by control.

Here is the BIG question in my own mind, that I have asked in various ways,
so far without any recognizable answer:

There are plainly lots of things wrong with our society. We got here by
doing what we wanted, and by having our representatives do what we wanted
them to do. Clearly some social re-engineering is in our future, if we are
to thrive in the foreseeable future. All changes are resisted by some, and I
suspect that some needed changes will be resisted by most, and perhaps
nearly everyone. Disaster scenarios aside, what would YOU have YOUR AGI do
to navigate this future?

To help guide your answer, I see that the various proposed systems of
ethics would prevent breaking the eggs needed to make a good futuristic
omelet. I suspect that completely democratic systems have run their course.
Against this, letting AGI loose has its own unfathomable hazards. I've
been hanging around here for quite a while, and I don't yet see any success
path to work toward.

I'm on your side in that any successful AGI would have to have the
information and the POWER to succeed, akin to *Colossus, the Forbin Project*,
which I personally see as more of a success story than a horror scenario.
Absent that, AGIs will only add to our present problems.

What is the success path that you see?

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI Int'l Relations

2010-08-02 Thread Steve Richfield
Matt,

On Mon, Aug 2, 2010 at 1:05 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Steve Richfield wrote:
  I would feel a **LOT** better if someone explained SOME scenario to
 eventually emerge from our current economic mess.

 What economic mess?

 http://www.google.com/publicdata?ds=wb-wdictype=lstrail=falsenselm=hmet_y=ny_gdp_mktp_cdscale_y=linind_y=falserdim=countryidim=country:USAtdim=truetstart=-31561920tunit=Ytlen=48hl=endl=en

 Perhaps you failed to note the great disparity between the US and the
World's performance since 2003, or that with each year, greater percentages
of the GDP are going into fewer and fewer pockets. Kids starting out now
don't really have a chance.



 http://www.google.com/publicdata?ds=wb-wdimet=ny_gdp_mktp_cdtdim=truedl=enhl=enq=world+gdp#met=ny_gdp_mktp_cdidim=country:USAtdim=true
  Unemployment
 appears to be permanent and getting worse,

 When you pay people not to work, they are less inclined to work.


That does NOT explain that there are MANY unemployed for every available
job, and that many are falling off the end of their benefits with nothing to
help them. This view may have been true long ago, but it is now dated and
wrong.

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI Int'l Relations

2010-08-01 Thread Steve Richfield
Jan, Ian, et al,

On Sun, Aug 1, 2010 at 1:18 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:

  It seems that *getting things right* is not a priority
  for politicians.

 Keeping things running is the priority.


... and there it is in crystal clarity - how things get SO screwed up in
small increments, sometimes over centuries of time.

If nothing else, the Prisoner's Dilemma and Reverse Reductio ad Absurdum
teach us that advanced logical methods can NOT be applied to solving
real-world problems UNLESS the participants first understand the basics of
the methods. In short, an AGI in a world of idiots would fare far worse,
than a would an effective teacher who is familiar with advanced logical
methods. Hence, the expectation of some sort of millennial effect when AGIs
arrive is probably misplaced.

Note the parallels between Buddhism and the Prisoner's Dilemma - as both
teach to presume overall intelligence from the other side.

*Idea:* Suppose the appropriate people got together and created the IR
Certification Guide that explains both the basics and the various advanced
logical methods. A simple on-line test could be created that, when passed,
produces a suitable-for-framing certificate of competence.

I suspect that this tool could work better than any AGI in the absence of
such a tool.

On another note:

 How can you, the participants
 on this forum, hope to ever bring stability

 That depends on your definition of stability.

 Progress is often triggered by instability and leads to new forms
of instability. There shouldn't be too much instability in the same
sense that too much stability is also bad.

I agree with these statements, but we may disagree with where they are
leading. With too much stability, it is possible to drive systems into
the ground SO badly that they can't recover, or take insanely long times to
recover. Some past days-long power failures and our present economy are two
example. Indeed, short of something really radical, there seems to be NO
HOPE of ever curing the present unemployment situation. Stability seems to
have destroyed future generations' expectation of life-long gainful
employment.

My simple (and completely unacceptable) cure for this is to tax savings, to
force the money back into the economy. It would be trivial to administer, as
banks could easily collect the tax, and just 1% would probably fix things.
Note that the Koran has Zakat, which is a 5% tax on savings to provide for
the poor. In short, it is socialism! It has worked (depending on your
definition of worked) for ~1,400 years.

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI Int'l Relations

2010-07-31 Thread Steve Richfield
Jan,

On Fri, Jul 30, 2010 at 4:47 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:

  This brings me to where I came in. How do you deal with irrational
  decision
  making. I was hoping that social simulation would be seeking to provide
  answers. This does not seem to be the case.


Have you ever taken a dispute, completely deconstructed it to determine its
structure, engineered a prospective solution, and attempted to implement it?
I have. Sometimes successfully, and sometimes not so successfully.

First, take a look at Nova's *Mind Over Money* episode:
http://video.pbs.org/video/1479100777/

The message here isn't so much that people create unstable systems around
themselves, but rather, that the (present) systems sciences predictably lead
to unstable systems.

Getting people to act in what may seem at the moment to be ways that are
contrary to their interests is a MAJOR challenge. Indeed, much of the AGI
discussion on this and other forums concerns ways of *stopping* AGIs from
effectively intervening in such instabilities. How can you, the participants
on this forum, hope to ever bring stability to our world when one of your
own goals is to preserve the very sources of those instabilities?

IMHO the underlying problem is mostly too limited intelligence in most
people. They are simply unable to comprehend the paths to the very things
that they are seeking, and hence have absolutely no hope of success.

You can't write a good Chess playing program unless you have first been a
serious chess player. Similarly, I suspect that demonstrated skill in IR is
a prerequisite to creating any sort of effective IR program. Hence, I would
welcome an opportunity to play on that field, as I suspect others on this
forum would as well. This should be facilitated, and then we should watch to see which
approaches seem to at least sometimes work, and which seem to predictably
fail. Once past this, I suspect that the route to an effective IR program
will become more obvious.


 Models of limited rationality (like bounded rationality) are already
 used, e.g., in resource management and land use studies, peace and conflict
 studies, and some more.


These all seem to incorporate the very presumptions that underlie the
problems at hand. For example, the apparently obvious cure for global
warming is to return the upwind coastal strips to forests and move human
development inland past the first mountain range. This approach should turn
the great deserts green (again), provide an order of magnitude more food,
and consume the CO2 from all of the air and oil still in the ground, plus
lots of coal in addition. Of course no one seriously considers this, because
it involves bulldozing, for example, most of the human development in
America between the Pacific Ocean and the top of the Cascade Mountains.
While the rewards almost certainly exceed the cost, the problem is that the
corporations who own these developments would commit limitless resources to
influence the best government that money can buy to stop any such project.


 The problem with those models is to say _how_much_ irrationality there is.


YES. Some say that my proposal for bulldozing the upwind strips of the
continents is irrational, not because it won't work, but because it hasn't
been experimentally proven. Once past computer simulations, the only way to
prove it is to try it. Judge for yourself which side of this argument is
irrational.


 We can assume (and model) perfect rationality


I don't think so! You may also question this after viewing the NOVA episode
above.


 and then measure the
 gap. Empirically most actors aren't fully irrational and don't behave randomly,
 so they approach the rational assumptions. What is more often missing is
 that actors lack information or the means to utilize it.


In short, they lack a lot of everything needed to make rational decisions,
not the least of which are rational questions to decide. Most questions in
our world contain significant content of irrational presumptions, yet people
feel compelled to participate in the irrationality and decide those
questions. Any (competent) AGI would REFUSE TO ANSWER, and would first
redirect attention to the irrational content of the questions.

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Seeking Is-a Functionality

2010-07-20 Thread Steve Richfield
Arthur,

Your call for an AGI roadmap is well targeted. I suspect that others here
have their own, somewhat different roadmaps. These should all be merged,
like decks of cards being shuffled together, maybe with percentages
attached, so that people could announce that, say, I am 31% of the way to
having an AGI. At least this would provide SOME metric for progress.

This would apparently place Ben in an awkward position, because on the one
hand he is somewhat resistant to precisely defining his efforts, while on
the other hand he desperately needs to be able to demonstrate some progress
as he works toward something that is useful/salable.

"Is a" is too vague, e.g. in "A robot is a machine", it is unclear whether
robots and machines are simply two different words for the same thing, or
whether robots are a member of the class known as machines. There are also
other more perverse potential meanings, e.g. that a single robot is a
machine, but that multiple robots are something different, e.g. a junk pile.
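
As a purely hypothetical sketch (Python, invented names; not Dr. Eliza's or
MindForth's actual representation), one way to keep those readings apart is to
store the relation type explicitly alongside each is-a link:

    # Store "is a" links with an explicit relation type, so class membership,
    # subclassing, and mere synonymy cannot be confused with one another.
    SUBCLASS_OF, INSTANCE_OF, SYNONYM_OF = "subclass_of", "instance_of", "synonym_of"

    facts = [
        ("robot", SUBCLASS_OF, "machine"),    # robots are a kind of machine
        ("Andru", INSTANCE_OF, "robot"),      # Andru is one particular robot
        ("car",   SYNONYM_OF,  "automobile"), # two words for the same class
    ]

    def kinds_of(term, kb):
        # Follow subclass links upward, e.g. robot -> machine.
        parents = [o for s, r, o in kb if s == term and r == SUBCLASS_OF]
        found = list(parents)
        for p in parents:
            found.extend(kinds_of(p, kb))
        return found

    print(kinds_of("robot", facts))   # ['machine']

The ambiguity in the sentence then becomes a question of which relation type the
parser should emit, rather than something hidden inside the words themselves.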

In Dr. Eliza, I (attempt to) deal with ambiguous statements by having the
final parser demand an unambiguous statement, and utilize my idiom
resolver to recognize common ambiguous statements and fill in the gaps
with clearer words. Hence, simple unambiguous statements and common gapping
work, but less common gapping fails, as do complex statements that can't be
split into 2 or more simple statements.

I suspect that you may be heading toward the common brick wall of paradigm
limitation, where you initially adopt an oversimplified paradigm to get
something to work, and then run into the limitations of that oversimplified
paradigm. For example, Dr. Eliza is up against its own paradigm limitations
that we have discussed here. Hence, it may be time for some paradigm
overhaul if your efforts are to continue smoothly ahead.

I hope this helps.

Steve
=
On Tue, Jul 20, 2010 at 7:20 AM, A. T. Murray menti...@scn.org wrote:

 Tues.20.JUL.2010 -- Seeking Is-a Functionality

 Recently our overall goal in coding MindForth
 has been to build up an ability for the AI to
 engage in self-referential thought. In fact,
 SelfReferentialThought is the Milestone
 next to be achieved on the RoadMap of the
 Google Code MindForth project. However, we are
 jumping ahead a little when we allow ourselves
 to take up the enticing challenge of coding
 Is-a functionality when we have work left over
 to perform on fleshing out question-word queries
 and pronominal gender assignments. Such tasks
 are the loathsome scutwork of coding an AI Mind,
 so we reinvigorate our sense of AI ambition by
 breaking new ground and by leaving old ground to
 be conquered more thoroughly as time goes by.

 We simply want our budding AI mind to think
 thoughts like the following.

 A robin is a bird.
 Birds have wings.

 Andru is a robot.
 A robot is a machine.

 We are not aiming directly at inference or
 logical thinking here. We want rather to
 increase the scope of self-referential AI
 conversations, so that the AI can discuss
 classes and categories of entities in the
 world. If people ask the AI what it is,
 and it responds that it is a robot and
 that a robot is a machine, we want the
 conversation to flow unimpeded and
 naturally in any direction that occurs
 to man or machine.

 We have already built in the underlying
  capabilities such as the usage of articles
  like "a" or "the", and the usage of verbs
  of being. Teaching the AI how to use "am"
  or "is" or "are" was a major problem that
 we worried about solving during quite a
 few years of anticipation of encountering
 an impassable or at least difficult roadblock
 on our AGI Roadmap. Now we regard introducing
 Is-a functionality not so much as an
 insurmountable ordeal as an enjoyable
 challenge that will vastly expand the
 self-referential wherewithal of the
 incipient AI.

 Arthur
 --
 http://robots.net/person/AI4U/diary/22.html


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread Steve Richfield
Deepak,

An intermediate step is the reverse Turing test (RTT), wherein people or
teams of people attempt to emulate an AGI. I suspect that from such a
competition would come a better idea as to what to expect from an AGI.

I have attempted in the past to drum up interest in a RTT, but so far, no
one seems interested.

Do you want to play a game?!

Steve

On Sun, Jul 18, 2010 at 5:15 AM, deepakjnath deepakjn...@gmail.com wrote:

  I wanted to know if there is any benchmark test that can really convince the
  majority of today's AGIers that a system is a true AGI?

 Is there some real prize like the XPrize for AGI or AI in general?

 thanks,
 Deepak




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Mechanical Analogy for Neural Operation!

2010-07-12 Thread Steve Richfield
Everyone has heard about the water analogy for electrical operation. I have
a mechanical analogy for neural operation that just might be solid enough
to compute at least some characteristics optimally.

No, I am NOT proposing building mechanical contraptions, just using the
concept to compute neuronal characteristics (or AGI formulas for learning).

Suppose neurons were mechanical contraptions that receive inputs and
communicate outputs via mechanical movements. If one or more of the neurons
connected to an output of a neuron can't make sense of a given input given
its other inputs, then its mechanism would physically resist the several
inputs that didn't make mutual sense because its mechanism would jam, with
the resistance possibly coming from some downstream neuron.

This would utilize position to resolve opposing forces, e.g. one force
being the observed inputs, and the other force being that they don't make
sense, suggest some painful outcome, etc. In short, this would enforce the
sort of equation over the present formulaic view of neurons (and AGI coding)
that I have suggested in past postings may be present, and show that the
math may not be all that challenging.

Uncertainty would be expressed in stiffness/flexibility, computed
limitations would be handled with over-running clutches, etc.

Propagation of forces would come close (perfect?) to being able to identify
just where in a complex network something should change to learn as
efficiently as possible.

Once the force concentrates at some point, it then gives: something slips
or bends to unjam the mechanism. Thus, learning is effected.
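
A minimal sketch of that intuition (Python, invented names; an analogy only, not
a claim about real neurons or any existing AGI code) treats each connection as a
spring whose stiffness encodes certainty, so a conflict mostly relieves itself
at the least-certain end:

    # Two conflicting estimates are pulled toward agreement; the stiffer
    # (more certain) one moves less, the softer one gives - that "give" is
    # where the learning happens in this analogy.
    def relax(value_a, value_b, stiffness_a, stiffness_b):
        force = value_b - value_a                    # the disagreement
        total = stiffness_a + stiffness_b
        value_a += force * (stiffness_b / total)     # stiff partner barely moves
        value_b -= force * (stiffness_a / total)     # soft partner absorbs most
        return value_a, value_b

    # A confident estimate (stiffness 10) meets an uncertain one (stiffness 1):
    print(relax(5.0, 9.0, stiffness_a=10.0, stiffness_b=1.0))
    # -> (5.36..., 5.36...): nearly all of the give happens at the soft end.

Iterating such local relaxations over a network is one way the described force
propagation could pick out where a change does the least violence to what is
already known.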

Note that this suggests little difference between forward propagation and
backwards propagation, though real-world wet design considerations would
clearly prefer fast mechanisms for forward propagation, and compact
mechanisms for backwards propagation.

Epiphany or mania?

Any thoughts?

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Reward function vs utility

2010-07-02 Thread Steve Richfield
To all,

There may be a fundamental misdirection here on this thread, for your
consideration...

There have been some very rare cases where people have lost the use of one
hemisphere of their brains, and then subsequently recovered, usually with
the help of recently-developed clot-removal surgery. What they report seems
to be completely at odds with the present discussion. I will summarize and
probably overgeneralize, because there aren't many such survivors. One was a
brain researcher who subsequently wrote a book, about which I heard a review
on the radio, but I don't remember the details like title or name.
Hopefully, one of you has found and read this book.

It appears that one hemisphere is a *completely* passive observer, that does
*not* even bother to distinguish you and not-you, other than noting a
probable boundary. The other hemisphere concerns itself with manipulating
the world, regardless of whether particular pieces of it are you or not-you.
It seems unlikely that reward could have any effect at all on the passive
observer hemisphere.

In the case of the author of the book, apparently the manipulating
hemisphere was knocked out of commission for a while, and then slowly
recovered. This allowed her to see the passively observed world, without the
overlay of the manipulating hemisphere. Obviously, this involved severe
physical impairment until she recovered.

Note that AFAIK all of the AGI efforts are egocentric, while half of our
brains are concerned with passively filtering/understanding the world enough
to apply egocentric logic. Note further that since the two hemispheres are
built from the same types of neurons, that the computations needed to do
these two very different tasks are performed by the same wet-stuff. There is
apparently some sort of advanced Turing machine sort of concept going on
in wetware.

This sounds to me like a must-read for any AGIer, and I certainly would have
read it, had I been one.

Hence, I see goal direction, reward, etc., as potentially useful only in
some tiny part of our brains.

Any thoughts?

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Questions for an AGI

2010-06-28 Thread Steve Richfield
Ian, Travis, etc.

On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote:


 On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:

  I think crime has to be made impossible even for enhanced humans first.


 If our enhancement was Internet based it could be turned off if we were
 about to commit a crime. You really should have said unenhanced humans. If
 my conversation (see above) was about jihad and terrorism AI would provide a
 route for the security services. I think you are muddled here.


Anyone who could suggest making crime impossible, anyone who could respond
to such nonsense other than pointing out that it is nonsense, is SO far
removed from reality that it is hard to imagine that they function in
society. Here are some points for those who don't see this as obvious:
1.  Much/most crime is committed by people who see little/no other
rational choice.
2.  Crime is a state of mind. Almost any act would be reasonable under SOME
bizarre circumstances perceived by the perpetrator. It isn't the actions,
but rather the THOUGHT that makes it a crime.
3.  Courts are there to decide complex issues like necessity (e.g. self
defense or defense of others), understanding (e.g. mental competence), and
the myriad other issues needed to establish a particular act as a crime.
4.  Crimes are defined through a legislative process, by the best
government that money can buy. This would simply consign everything (and
everyone) to the wealthy people who have bought the government. Prepare for
slavery.
5.  Our world is already so over-constrained that it is IMPOSSIBLE to live
without violating any laws.

Is the proposal to make impossible anything that could conceivably be
construed as a crime, or to make impossible anything that couldn't be
construed as anything but a crime? Even these two extremes would have
significant implementation problems.

Anyway, I am sending you two back to kindergarten.

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Hutter - A fundamental misdirection?

2010-06-28 Thread Steve Richfield
Rob,

I just LOVE opaque postings, because they identify people who see things
differently than I do. I'm not sure what you are saying here, so I'll make
some random responses to exhibit my ignorance and elicit more explanation.

On Mon, Jun 28, 2010 at 9:53 AM, rob levy r.p.l...@gmail.com wrote:

 In order to have perceptual/conceptual similarity, it might make sense that
  there is a distance metric over conceptual spaces mapping


 It sounds like this is a finer measure than the dimensionality that I was
referencing. However, I don't see how to reduce anything as quantized as
dimensionality into finer measures. Can you say some more about this?

(ala Gardenfors or something like this theory)  underlying how the
 experience of reasoning through is carried out.

This has the advantage of being motivated by neuroscience findings (which
 are seldom convincing, but in this case it is basic solid neuroscience
 research) that there are topographic maps in the brain.


However, different people's brains, even the brains of identical twins, have
DIFFERENT mappings. This would seem to mandate experience-formed topology.


 Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


I agree.


 though I wonder what it actually would be in terms of neurons, (and if that
 matters).


I don't see any route to the answer except via neurons.


 But that is different from what would be considered quantitative reasoning,
 because from the phenomenological perspective the person is training
 sensorimotor expectations by perceiving and doing.  And creative conceptual
 shifts (or recognition of novel perceptual categories) can also be explained
 by this feedback between trained topographic maps and embodied interaction
 with environment (experienced at the ecological level as sensorimotor
 expectations (driven by neural maps). Sensorimotor expectation is the basis
 of dynamics of perception and coceptualization).


All of which is computation of various sorts, the basics of which need to be
understood.

Steve
=

 On Sun, Jun 27, 2010 at 7:24 PM, Ben Goertzel b...@goertzel.org wrote:



 On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben,

 On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  I know what dimensional analysis is, but it would be great if you could
 give an example of how it's useful for everyday commonsense reasoning such
 as, say, a service robot might need to do to figure out how to clean a
 house...


 How much detergent will it need to clean the floors? Hmmm, we need to
 know ounces. We have the length and width of the floor, and the bottle says
 to use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
 oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
 multiply all three numbers together to get ounces. This WITHOUT
 understanding things like surface area, utilization, etc.



 I think that the El Salvadorean maids who come to clean my house
 occasionally, solve this problem without any dimensional analysis or any
 quantitative reasoning at all...

 Probably they solve it based on nearest-neighbor matching against past
 experiences cleaning other dirty floors with water in similarly sized and
 shaped buckets...

 -- ben g






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben,

What I saw as my central thesis is that propagating carefully conceived
dimensionality information along with classical information could greatly
improve the cognitive process, by FORCING reasonable physics WITHOUT having
to understand (by present concepts of what understanding means) physics.
Hutter was just a foil to explain my thought. Note again my comments
regarding how physicists and astronomers understand some processes though
dimensional analysis that involves NONE of the sorts of understanding
that you might think necessary, yet can predictably come up with the right
answers.

Are you up on the basics of dimensional analysis? The reality is that it is
quite imperfect, but is often able to yield a short list of answers, with
the correct one being somewhere in the list. Usually, the wrong answers are
wildly wrong (they are probably computing something, but NOT what you might
be interested in), and are hence easily eliminated. I suspect that neurons
might be doing much the same, as could formulaic implementations like (most)
present AGI efforts. This might explain natural architecture and guide
human architectural efforts.

In short, instead of a pot of neurons, we might instead have a pot of
dozens of types of neurons that each have their own complex rules regarding
what other types of neurons they can connect to, and how they process
information. Architecture might involve deciding how many of each type to
provide, and what types to put adjacent to what other types, rather than the
more detailed concept now usually thought to exist.

Thanks for helping me wring my thought out here.

Steve
=
On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel b...@goertzel.org wrote:


 Hi Steve,

 A few comments...

 1)
 Nobody is trying to implement Hutter's AIXI design, it's a mathematical
 design intended as a proof of principle

 2)
 Within Hutter's framework, one calculates the shortest program that
 explains the data, where shortest is measured on Turing  machine M.
 Given a sufficient number of observations, the choice of M doesn't matter
 and AIXI will eventually learn any computable reward pattern.  However,
 choosing the right M can greatly accelerate learning.  In the case of a
 physical AGI system, choosing M to incorporate the correct laws of physics
 would obviously accelerate learning considerably.

 3)
 Many AGI designs try to incorporate prior understanding of the structure and
 properties of the physical world, in various ways. I have a whole chapter
 on this in my forthcoming book on OpenCog. E.g. OpenCog's design
 includes a physics-engine, which is used directly and to aid with
 inferential extrapolations...

 So I agree with most of your points, but I don't find them original except
 in phrasing ;)

 ... ben


 On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current AGI
 thinking has taken!

 *This is a bit subtle, and hence subject to misunderstanding. Therefore I
 will first attempt to explain what I see, WITHOUT so much trying to convince
 you (or anyone) that it is necessarily correct. Once I convey my vision,
 then let the chips fall where they may.

 On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote:

 Hutter's AIXI for instance works [very roughly speaking] by choosing the
 most compact program that, based on historical data, would have yielded
 maximum reward


 ... and there it is! What did I see?

 Example applicable to the lengthy following discussion:
 1 - 2
 2 - 2
 3 - 2
 4 - 2
 5 - ?
 What is ?.

 Now, I'll tell you that the left column represents the distance along a
 4.5 unit long table, and the right column represents the distance above the
  floor that you will be as you walk the length of the table. Knowing this,
 without ANY supporting physical experience, I would guess ? to be zero, or
 maybe a little more if I were to step off of the table and land onto
 something lower, like the shoes that I left there.

 In an imaginary world where a GI boots up with a complete understanding of
 physics, etc., we wouldn't prefer the simplest program at all, but rather
 the simplest representation of the real world that is not physics/math *
 in*consistent with our observations. All observations would be presumed
 to be consistent with the response curves of our sensors, showing a world in
 which Newton's laws prevail, etc. Armed with these presumptions, our
 physics-complete AGI would look for the simplest set of *UN*observed
 phenomena that explained the observed phenomena. This theory of a
 physics-complete AGI seems undeniable, but of course, we are NOT born
 physics-complete - or are we?!

 This all comes down to the limits of representational math. At great risk
 of hand-waving on a keyboard, I'll try to explain by pseudo-translating the
 concepts into NN/AGI terms.

 We all know about layering and columns in neural systems

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben,

On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  I know what dimensional analysis is, but it would be great if you could give
 an example of how it's useful for everyday commonsense reasoning such as,
 say, a service robot might need to do to figure out how to clean a house...


How much detergent will it need to clean the floors? Hmmm, we need to know
ounces. We have the length and width of the floor, and the bottle says to
use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
multiply all three numbers together to get ounces. This WITHOUT
understanding things like surface area, utilization, etc.

Of course, throw in a few other available measures and it becomes REALLY easy
to come up with several wrong answers. This method does NOT avoid wrong
answers, it only provides a mechanism to have the right answer among them.
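
A small sketch of that filter (Python, made-up quantity names; just an
illustration of dimensional analysis, not code from any AGI project): enumerate
simple combinations of the available measures and keep only those whose units
reduce to ounces.

    # Each quantity carries its value and its dimensions as exponents.
    from itertools import product

    quantities = {
        "length":   (5.0,    {"m": 1}),            # floor length
        "width":    (4.0,    {"m": 1}),            # floor width
        "dose":     (1.0,    {"oz": 1, "m": -2}),  # detergent per square meter
        "sun_dist": (1.5e11, {"m": 1}),            # an irrelevant extra measure
    }
    target = {"oz": 1}                             # we want plain ounces

    def combine(exps):
        value, dims = 1.0, {}
        for (val, unit), e in zip(quantities.values(), exps):
            value *= val ** e
            for d, p in unit.items():
                dims[d] = dims.get(d, 0) + p * e
        return value, {d: p for d, p in dims.items() if p != 0}

    for exps in product((-1, 0, 1), repeat=len(quantities)):
        value, dims = combine(exps)
        if dims == target:
            terms = [f"{name}^{e}" for name, e in zip(quantities, exps) if e]
            print(" * ".join(terms), "=", value, "oz")

This prints length^1 * width^1 * dose^1 = 20.0 oz, but also dimensionally valid
wrong answers that drag in sun_dist, which is exactly the behavior described:
the units narrow the candidate list, they do not pick the winner.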

While this may be a challenge for dispensing detergent (especially if you
include the distance from the earth to the sun as one of your available
measures), it is little problem for learning.

I was more concerned with learning than with solving. I believe that
dimensional analysis could help learning a LOT, by maximally constraining
what is used as a basis for learning, without throwing the baby out with
the bathwater, i.e. applying so much constraint that a good solution can't
climb out of the process.

Steve


On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben,

 What I saw as my central thesis is that propagating carefully conceived
 dimensionality information along with classical information could greatly
 improve the cognitive process, by FORCING reasonable physics WITHOUT having
 to understand (by present concepts of what understanding means) physics.
 Hutter was just a foil to explain my thought. Note again my comments
 regarding how physicists and astronomers understand some processes though
 dimensional analysis that involves NONE of the sorts of understanding
 that you might think necessary, yet can predictably come up with the right
 answers.

 Are you up on the basics of dimensional analysis? The reality is that it
 is quite imperfect, but is often able to yield a short list of answers,
 with the correct one being somewhere in the list. Usually, the wrong answers
 are wildly wrong (they are probably computing something, but NOT what you
 might be interested in), and are hence easily eliminated. I suspect that
 neurons might be doing much the same, as could formulaic implementations
 like (most) present AGI efforts. This might explain natural architecture
 and guide human architectural efforts.

 In short, instead of a pot of neurons, we might instead have a pot of
 dozens of types of neurons that each have their own complex rules regarding
 what other types of neurons they can connect to, and how they process
 information. Architecture might involve deciding how many of each type to
 provide, and what types to put adjacent to what other types, rather than the
 more detailed concept now usually thought to exist.

 Thanks for helping me wring my thought out here.

 Steve
 =
 On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel b...@goertzel.org wrote:


 Hi Steve,

 A few comments...

 1)
 Nobody is trying to implement Hutter's AIXI design, it's a mathematical
 design intended as a proof of principle

 2)
 Within Hutter's framework, one calculates the shortest program that
 explains the data, where shortest is measured on Turing  machine M.
 Given a sufficient number of observations, the choice of M doesn't matter
 and AIXI will eventually learn any computable reward pattern.  However,
 choosing the right M can greatly accelerate learning.  In the case of a
 physical AGI system, choosing M to incorporate the correct laws of physics
 would obviously accelerate learning considerably.

 3)
 Many AGI designs try to incorporate prior understanding of the structure and
 properties of the physical world, in various ways. I have a whole chapter
 on this in my forthcoming book on OpenCog. E.g. OpenCog's design
 includes a physics-engine, which is used directly and to aid with
 inferential extrapolations...

 So I agree with most of your points, but I don't find them original
 except in phrasing ;)

 ... ben


 On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current
 AGI thinking has taken!

 *This is a bit subtle, and hence subject to misunderstanding. Therefore
 I will first attempt to explain what I see, WITHOUT so much trying to
 convince you (or anyone) that it is necessarily correct. Once I convey my
 vision, then let the chips fall where they may.

 On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote:

 Hutter's AIXI for instance works [very roughly speaking] by choosing
 the most

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Travis,

The AGI world seems to be cleanly divided into two groups:

1.  People (like Ben) who feel as you do, and aren't at all interested or
willing to look at the really serious lapses in logic that underlie this
approach. Note that there is a similar belief in Buddhism, akin to the
prisoners dilemma, that if everyone just decides to respect everyone else,
that the world will be a really nice place. The problem is, it doesn't work,
and it can't work for some sound logical reasons that were unknown thousands
of years ago when those beliefs were first advanced, and are STILL unknown
to most of the present-day population, and...

2.  People (like me) who see that this is a really insane, dangerous, and
delusional belief system, as it encourages activities that are every bit as
dangerous as DIY thermonuclear weapons. Sure, you aren't likely to build a
successful H-bomb in your basement using heavy water that you separated
using old automobile batteries, but should we encourage you to even try?

Unfortunately, there is ~zero useful communication between these two groups.
For example, Ben explains that he has heard all of the horror scenarios for
AGIs, and I believe that he has, yet he continues in this direction for
reasons that he is too busy to explain in detail. I have viewed some of
his presentations, e.g. at the 2009 Singularity conference. There, he
provides no glimmer of any reason why his approach isn't predictably
suicidal if/when an AGI ever comes into existence, beyond what you outlined,
e.g. imperfect protective mechanisms that would only serve to become their
own points of contention between future AGIs. What if some accident disables
an AGI's protective mechanisms? Would there be some major contention between
Ben's AGI and Osama bin Laden's AGI? How about those nasty little areas
where our present social rules enforce species-destroying dysgenic activity?
Ultimately and eventually, why should AGIs give a damn about us?

Steve
=
On Fri, Jun 25, 2010 at 1:25 PM, Travis Lenting travlent...@gmail.com wrote:

  I hope I don't misrepresent him, but I agree with Ben (at
 least my interpretation) when he said, We can ask it questions like, 'how
 can we make a better A(G)I that can serve us in more different ways without
 becoming dangerous'...It can help guide us along the path to a
 positive singularity. I'm pretty sure he was also saying at first it
 should just be a question answering machine with a reliable goal system and
  stop the development if it has an unstable one before it gets too smart. I
 like the idea that we should create an automated
 cross disciplinary scientist and engineer (if you even separate the two) and
 that NLP not modeled after the human brain is the best proposal for
 a benevolent and resourceful super intelligence that enables a positive
 singularity and all its unforeseen perks.
  On Wed, Jun 23, 2010 at 11:04 PM, The Wizard key.unive...@gmail.com wrote:


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Fellow Cylons,

I sure hope SOMEONE is assembling a list from these responses, because this
is exactly the sort of stuff that I (or someone) would need to run a Reverse
Turing Test (RTT) competition.

Steve





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Abram,

On Mon, Jun 21, 2010 at 8:38 AM, Abram Demski abramdem...@gmail.com wrote:

 Steve,

 You didn't mention this, so I guess I will: larger animals do generally
 have larger brains, coming close to a fixed brain/body ratio. Smarter
 animals appear to be the ones with a higher brain/body ratio rather than
 simply a larger brain. This to me suggests that the amount of sensory
 information and muscle coordination necessary is the most important
 determiner of the amount of processing power needed. There could be other
 interpretations, however.


It is REALLY hard to compare the intelligence of various animals, because
their innate behavior is overlaid on it. For example, based on the ability to
follow instructions, cats must be REALLY stupid.


 It's also pretty important to say that brains are expensive to fuel. It's
 probably the case that other animals didn't get as smart as us because the
 additional food they could get per ounce brain was less than the additional
 food needed to support an ounce of brain. Humans were in a situation in
 which it was more. So, I don't think your argument from other animals
 supports your hypothesis terribly well.


Presuming for a moment that you are right, there will be no
singularity! No, this is NOT a reductio ad absurdum proof either way. Why
no singularity?

If there really is a limit to the value of intelligence, then why should we
think that there will be anything special about super-intelligence? Perhaps
we have been deluding ourselves because we want to think that the reason we
aren't all rich is because we just aren't smart enough, when in reality some
entirely different phenomenon may be key? Have YOU observed that success in
life is highly correlated to intelligence?


 One way around your instability if it exists would be (similar to your
 hemisphere suggestion) split the network into a number of individuals which
 cooperate through very low-bandwidth connections.


While helping breadth of analysis, this would seem to absolutely limit
analysis depth to that of one individual.

This would be like an organization of humans working together. Hence,
 multiagent systems would have a higher stability limit.


Providing they don't get into a war of some sort.


 However, it is still the case that we hit a serious diminishing-returns
 scenario once we needed to start doing this (since the low-bandwidth
 connections convey so much less info, we need waaay more processing power
 for every IQ point or whatever).


I see more problems with analysis depth than with bandwidth limitations.


 And, once these organizations got really big, it's quite plausible that
 they'd have their own stability issues.


Yes.

Steve


 On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 There has been an ongoing presumption that more brain (or computer)
 means more intelligence. I would like to question that underlying
 presumption.

 That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 There are all sorts of network-destroying phenomena that arise from complex
 networks, e.g. phase-shift oscillators where circular analysis paths reinforce
 themselves, computational noise is endlessly analyzed, etc. We know that our
 own brains are just barely stable, as flashing lights throw some people into
 epileptic attacks, etc. Perhaps network stability is the intelligence
 limiter? If so, then we aren't going to get anywhere without first fully
 understanding it.

 Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect,
 without having yet (or ever) reaching perfection. Hence, evolution may have
 struck a balance, where less intelligence directly impairs survivability,
 and greater intelligence impairs network stability, and hence indirectly
 impairs survivability.

 If the above is indeed the case, then AGI and related efforts don't stand
 a snowball's chance in hell of ever outperforming humans, UNTIL the
 underlying network stability theory is well enough understood to perform
 perfectly to digital precision. This wouldn't necessarily have to address
 all aspects of intelligence, but would at minimum have to address
 large-scale network stability.

 One possibility is chopping large networks into pieces, e.g. the
 hemispheres of our own brains. However, like multi-core CPUs, there is work
 for only so many CPUs/hemispheres.

 There are some medium-scale network similes in the world, e.g. the power
 grid. However, there they have high-level central control and lots of
 crashes, so there may not be much to learn from them.

 Note in passing that I am working with some non-AGIers on power grid
 stability

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

Your comments appear to be addressing reliability, rather than stability...

On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.comwrote:

  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
  My underlying thought here is that we may all be working on the wrong
  problems. Instead of working on the particular analysis methods (AGI) or
  self-organization theory (NN), perhaps if someone found a solution to
 large-
  network stability, then THAT would show everyone the ways to their
  respective goals.
 

 For a distributed AGI this is a fundamental problem. Difference is that a
 power grid is such a fixed network.


Not really. Switches may connect or disconnect Canada, equipment is
constantly failing and being repaired, etc. In any case, this doesn't seem
to be related to stability, other than it being a lot easier to analyze a
fixed network rather than a variable network.


 A distributed AGI need not be that
 fixed, it could lose chunks of itself but grow them out somewhere else.
 Though a distributed AGI could be required to run as a fixed network.

 Some traditional telecommunications networks are power grid like. They have
 a drastic amount of stability and healing functions built-in as have been
 added over time.


However, there is no feedback, so stability isn't even a potential issue.


 Solutions for large-scale network stabilities would vary per network
 topology, function, etc..


However, there ARE some universal rules, like the 12db/octave requirement.


 Virtual networks play a large part, this would be
 related to the network's ability to reconstruct itself meaning knowing how
 to heal, reroute, optimize and grow..


Again, this doesn't seem to relate to millisecond-by-millisecond stability.

Steve





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Jim,

Yours is the prevailing view in the industry. However, it doesn't seem to
work. Even given months of time to analyze past failures, they are often
unable to divine rules that would have reliably avoided the problems. In
short, until you adequately understand the system that your sensors are
sensing, all the readings in the world won't help. Further, when a system is
fundamentally unstable, you must have a control system that completely deals
with the instability, or it absolutely will fail. The present system meets
neither of these criteria.

There is another MAJOR issue. Presuming a power control center in the middle
of the U.S., the round-trip time at the speed of light to each coast is
~16ms, or two half-cycles at 60Hz. In control terms, that is an eternity.
Distributed control requires fundamental stability to function reliably.
Times can be improved by having separate control systems for each coast, but
the interface would still have to meet fundamental stability criteria (like
limiting the rates of change), and our long coasts would still require a
full half-cycle of time to respond.
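As a quick sanity check of those numbers (the coast distance here is my own
rough assumption for illustration):

```python
# Rough check of the round-trip delay figure above (assumed distances).
c_km_per_s = 299_792                       # speed of light, km/s
one_way_km = 2_400                         # assumed mid-continent to either coast
round_trip_s = 2 * one_way_km / c_km_per_s
half_cycle_s = 1.0 / (2 * 60)              # one half-cycle at 60 Hz

print(f"{round_trip_s * 1e3:.1f} ms round trip "
      f"= {round_trip_s / half_cycle_s:.1f} half-cycles at 60 Hz")
```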

Note that faults must be responded to QUICKLY to save the equipment, and so
cannot be left to central control systems to operate.

So, we end up with the system we now have, that does NOT meet reasonable
stability criteria. Hence, we may forever have occasional outages until the
system is radically re-conceived.

Steve
==
On Mon, Jun 21, 2010 at 9:17 AM, Jim Bromer jimbro...@gmail.com wrote:

 I think a real world solution to grid stability would require greater use
 of sensory devices (and a some sensory-feedback devices). I really don't
 know for sure, but my assumption is that electrical grid management has
 relied mostly on the electrical reactions of the grid itself, and here you
 are saying that is just not good enough for critical fluctuations in 2010.
 So while software is also necessary of course, the first change in how grid
 management should be done is through greater reliance on off-the-grid (or at
 minimal backup on-grid) sensory devices.  I am quite confident, without
 knowing anything about the subject, that that is what needs to be done
 because I understand a little about how different groups of people work and
 I have seen how sensory devices like gps and lidar have fundamentally
 changed AI projects because they allowed time-sensitive critical analysis
 that had been too slow for contemporary AI to handle.  100 years from now,
 electrical grid management won't require another layer of sensors because
 the software analysis of grid fluctuations will be sufficient. On the other
 hand, grid managers will not remove these additional layers of sensors from
 the grid a hundred years from now any more than we telephone engineers would
 suggest that maybe they should stop using fiber optics because they could
 get back to 1990 fiber optic capacity and reliability using copper wire with
 today's switching and software devices.
 Jim Bromer
 On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 There has been an ongoing presumption that more brain (or computer)
 means more intelligence. I would like to question that underlying
 presumption.

 That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 There are all sorts of network-destroying phenomena that arise from complex
 networks, e.g. phase-shift oscillators where circular analysis paths reinforce
 themselves, computational noise is endlessly analyzed, etc. We know that our
 own brains are just barely stable, as flashing lights throw some people into
 epileptic attacks, etc. Perhaps network stability is the intelligence
 limiter? If so, then we aren't going to get anywhere without first fully
 understanding it.

 Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect,
 without having yet (or ever) reaching perfection. Hence, evolution may have
 struck a balance, where less intelligence directly impairs survivability,
 and greater intelligence impairs network stability, and hence indirectly
 impairs survivability.

 If the above is indeed the case, then AGI and related efforts don't stand
 a snowball's chance in hell of ever outperforming humans, UNTIL the
 underlying network stability theory is well enough understood to perform
 perfectly to digital precision. This wouldn't necessarily have to address
 all aspects of intelligence, but would at minimum have to address
 large-scale network stability.

 One possibility is chopping large networks into pieces, e.g. the
 hemispheres of our own brains. However, like multi-core CPUs, there is work
 for only so many CPUs/hemispheres

[agi] Formulaic vs. Equation AGI

2010-06-21 Thread Steve Richfield
One constant in ALL proposed methods leading to computational intelligence
is formulaic operation, where agents, elements, neurons, etc., process
inputs to produce outputs. There is scant biological evidence for this,
and plenty of evidence for a balanced equation operation. Note that
unbalancing one side, e.g. by injecting current, would result in a
responding imbalance on the other side, so that synapses might (erroneously)
appear to be one-way. However, there is plenty of evidence that information
flows both ways, e.g. retrograde flow of information to support learning.

Even looking at seemingly one-way things like the olfactory nerve, there are
axons going both ways.

No, I don't have any sort of comprehensive balanced-equation theory of
intelligent operation, but I can see the interesting possibility.

Suppose that the key to life is not competition, but rather is fitting into
the world. Perhaps we don't so much observe things as orchestrate them to
our needs. Hence, we and our world are in a gigantic loop, adjusting our
outputs to achieve balancing characteristics in our inputs. Imbalances
precipitate changes in action to achieve balance. The only difference
between us and our world is implementation detail. We do our part, and it
does its part. I'm sure that there are Zen Buddhists out there who would
just LOVE this yin-yang view of things.

Any thoughts?

Steve





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

On Mon, Jun 21, 2010 at 10:06 AM, John G. Rose johnr...@polyplexic.comwrote:


  Solutions for large-scale network stabilities would vary per network
  topology, function, etc..
 
  However, there ARE some universal rules, like the 12db/octave
 requirement.
 

 Really? Do networks such as botnets really care about this? Or does it
 apply?


Anytime negative feedback can become positive feedback because of delays or
phase shifts, this becomes an issue. Many competent EE people fail to see
the phase shifting that many decision processes can introduce: even when
responding as quickly as possible, finite speed produces finite delays and
sharp frequency cutoffs, resulting in instabilities at those frequency
cutoff points because the 12db/octave rule is violated. Of course, this
ONLY applies in feedback systems and NOT in forward-only systems, except at
the real-world point of feedback, e.g. the bots themselves.

Of course, there is the big question of just what it is that is being
attenuated in the bowels of an intelligent system. Usually, it is
computational delays making sharp frequency-limited attenuation at their
response speeds.

Every gamer is well aware of the oscillations that long ping times can
introduce in people's (and intelligent bot's) behavior. Again, this is
basically the same 12db/octave phenomenon.

Steve





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Russell,

On Mon, Jun 21, 2010 at 1:29 PM, Russell Wallace
russell.wall...@gmail.comwrote:

 On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
 steve.richfi...@gmail.com wrote:
  That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 Personally I've always wondered how elephants managed to evolve brains
 as large as they currently have. How much intelligence does it take to
 sneak up on a leaf? (Granted, intraspecies social interactions seem to
 provide at least part of the answer.)


I suspect that intra-species social behavior will expand to utilize all
available intelligence.


  There are all sorts of network-destroying phenomena that arise from
 complex networks, e.g. phase-shift oscillators where circular analysis paths
 reinforce themselves, computational noise is endlessly analyzed, etc. We know
 that our own brains are just barely stable, as flashing lights throw some
 people into epileptic attacks, etc. Perhaps network stability is the
 intelligence limiter?

 Empirically, it isn't.


I see what you are saying, but I don't think you have made your case...


  Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect

 Actually it's the other way around. Brains compensate for
 imperfections (both transient error and permanent failure) in neurons
 by using more of them.


William Calvin, the author who is most credited with making and spreading
this view, and I had a discussion on his Seattle rooftop, while throwing pea
gravel at a target planter. His assertion was that we utilize many parallel
circuits to achieve accuracy, and mine was that it was something else, e.g.
successive approximation. I pointed out that if one person tossed the pea
gravel by putting it on their open hand and pushing it at a target, and the
other person blocked their arm, then the relationship between how much of
the stroke was truncated and how great the error was would disclose the
method of calculation. The question boils down to whether
the error grows drastically even with small truncation of movement (because
a prototypical throw is used, as might be expected from a parallel
approach), or grows exponentially because error-correcting steps have been
lost. We observed apparent exponential growth, much smaller than would be
expected from parallel computation, though no one was keeping score.

In summary, having performed the above experiment, I reject this common
view.

Note that, as the number of transistors on a
 silicon chip increases, the extent to which our chip designs do the
 same thing also increases.


Another pet peeve of mine. They could/should do MUCH more fault tolerance
than they now do. Present puny efforts are completely ignorant of past
developments, e.g. Tandem Nonstop computers.


  There are some medium-scale network similes in the world, e.g. the power
 grid. However, there they have high-level central control and lots of
 crashes

 The power in my neighborhood fails once every few years (and that's
 from all causes, including 'the cable guys working up the street put a
 JCB through the line', not just network crashes). If you're getting
 lots of power failures in your neighborhood, your electricity supply
 company is doing something wrong.


If you look at the failures/bandwidth, it is pretty high. The point is that
the information bandwidth of the power grid is EXTREMELY low, so it
shouldn't fail at all, at least not more than maybe once per century.
However, just like the May 6 problem, it sometimes gets itself into trouble
of its own making. Any overload SHOULD simply result in shutting down some
low-priority load, like the heaters in steel plants, and this usually works
as planned. However, it sometimes fails for VERY complex reasons - so
complex that PhD engineers are unable to put it into words, despite having
millisecond-by-millisecond histories to work from.


  I wonder, does the very-large-scale network problem even have a
 prospective solution? Is there any sort of existence proof of this?

 Yes, our repeated successes in simultaneously improving both the size
 and stability of very large scale networks (trade,


NOT stable at all. Just look at the condition of the world's economy.


 postage, telegraph,
 electricity, road, telephone, Internet)


None of these involve feedback, the fundamental requirement to be a
network rather than a simple tree structure. This despite common misuse of
the term network to cover everything with lots of interconnections.


 serve as very nice existence
 proofs.


I'm still looking.

Steve




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

Hmmm, I thought that with your EE background, the 12db/octave reference would
bring back old sophomore-level coursework. OK, so you were sick that day.
I'll try to fill in the blanks here...

On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.comwrote:


  Of course, there is the big question of just what it is that is being
  attenuated in the bowels of an intelligent system. Usually, it is
  computational delays making sharp frequency-limited attenuation at their
  response speeds.
 
  Every gamer is well aware of the oscillations that long ping times can
  introduce in people's (and intelligent bot's) behavior. Again, this is
 basically
  the same 12db/octave phenomenon.
 

 OK, excuse my ignorance on this - a design issue in distributed
 intelligence
 is how to split up things amongst the agents. I see it as a hierarchy of
 virtual networks, with the lowest level being the substrate like IP sockets
 or something else but most commonly TCP/UDP.

 The protocols above that need to break up the work, and the knowledge
 distribution, so the 12db/octave phenomenon must apply there too.


RC low-pass circuits exhibit 6db/octave rolloff and 90 degree phase shifts.
12db/octave corresponds to a 180 degree phase shift. More than 180 degrees
and you are into positive feedback. At 24db/octave, you are at maximum *
positive* feedback, which makes great oscillators.

The 12 db/octave limit applies to entire loops of components, and not to the
individual components. This means that you can put a lot of 1db/octave
components together in a big loop and get into trouble. This is commonly
encountered in complex analog filter circuits that incorporate 2 or more
op-amps in a single feedback loop. Op amps are commonly compensated to
have 6db/octave rolloff. Put 2 of them together and you right at the
precipice of 12db/octave. Add some passive components that have their own
rolloffs, and you are over the edge of stability, and the circuit sits there
and oscillates on its own. The usual cure is to replace one of the op-amps
with an *un*compensated op-amp with ~0db/octave rolloff, until it gets to
its maximum frequency, whereupon it has an astronomical rolloff. However,
that astronomical rolloff works BECAUSE the loop gain at that frequency is
less than 1, so the circuit cannot self-regenerate and oscillate at that
frequency.
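A minimal sketch of the phase argument, in Python (my own illustration; the
corner frequency is an arbitrary assumption): cascading two first-order
low-pass stages, each rolling off at 6db/octave, drives the loop phase toward
the 180 degrees at which negative feedback turns positive.

```python
import numpy as np

fc = 1_000.0                                   # assumed corner frequency, Hz
freqs = np.logspace(1, 6, 6)                   # 10 Hz .. 1 MHz

one_stage = 1.0 / (1.0 + 1j * freqs / fc)      # single RC low-pass response
two_stage = one_stage ** 2                     # two identical stages in one loop

for f, h1, h2 in zip(freqs, one_stage, two_stage):
    print(f"{f:>9.0f} Hz   one stage: {np.degrees(np.angle(h1)):7.1f} deg"
          f"   two stages: {np.degrees(np.angle(h2)):7.1f} deg")

# Far above fc the two-stage phase approaches -180 degrees; if the loop gain
# there is still >= 1, the loop can sustain oscillation -- the 12db/octave concern.
```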

Considering the above and the complexity of neural circuits, it would seem
that neural circuits would have to have absolutely flat responses and some
central rolloff mechanism, maybe one of the ~200 different types of neurons,
or alternatively, would have to be able to custom-tailor their responses to
work in concert to roll off at a reasonable rate. A third alternative is
discussed below, where you let them go unstable, and actually utilize the
instability to achieve some incredible results.


 I assume any intelligence processing engine must include a harmonic
 mathematical component


I'm not sure I understand what you are saying here. Perhaps you have
discovered the recipe for the secret sauce?


 since ALL things are basically network, especially
 intelligence.


Most of the things we call networks really just pass information along and
do NOT have feedback mechanisms. Power control is an interesting exception,
but most of those guys are unable to even carry on an intelligent
conversation about the subject. No wonder the power networks have problems.


 This might be an overly aggressive assumption but it seems from observance
 that intelligence/consciousness exhibits some sort of harmonic property, or
 levels.


You apparently grok something about harmonics that I don't (yet) grok.
Please enlighten me.

Are you familiar with regenerative receiver operation where operation is on
the knife-edge of instability, or super-regenerative receiver operation,
wherein an intentionally UNstable circuit is operated to achieve phenomenal
gain and specifically narrow bandwidth? These were common designs back in
the early vacuum tube era, when active components cost a day's wages. Given
all of the observed frequency components coming from neural circuits,
perhaps neurons do something similar to actually USE instability to their
benefit?! Is this related to your harmonic thoughts?

Thanks.

Steve





[agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Steve Richfield
No, I haven't been smokin' any wacky tobacy. Instead, I was having a long
talk with my son Eddie about self-organization theory. This is *his* proposal:

He suggested that I construct a simple NN that couldn't work without self
organizing, and make dozens/hundreds of different neuron and synapse
operational characteristics selectable ala genetic programming, put it on
the fastest computer I could get my hands on, turn it loose trying arbitrary
combinations of characteristics, and see what the winning combination
turns out to be. Then, armed with that knowledge, refine the genetic
characteristics and do it again, and iterate until it *efficiently* self
organizes. This might go on for months, but self-organization theory might
just emerge from such an effort. I had a bunch of objections to his
approach, e.g.

Q.  What if it needs something REALLY strange to work?
A.  Who better than you to come up with a long list of really strange
functionality?

Q.  There are at least hundreds of bits in the genome.
A.  Try combinations in pseudo-random order, with each bit getting asserted
in ~half of the tests. If/when you stumble onto a combination that sort of
works, switch to varying the bits one-at-a-time, and iterate in this way
until the best combination is found.

Q.  Where are we if this just burns electricity for a few months and finds
nothing?
A.  Print out the best combination, break out the wacky tobacy, and come up
with even better/crazier parameters to test.

I have never written a line of genetic programming, but I know that others
here have. Perhaps you could bring some rationality to this discussion?

What would be a simple NN that needs self-organization? Maybe a small
pot of neurons that could only work if they were organized into layers,
e.g. a simple 64-neuron system that would work as a 4x4x4-layer visual
recognition system, given the input that I fed it?

Any thoughts on how to score partial successes?

Has anyone tried anything like this in the past?

Is anyone here crazy enough to want to help with such an effort?

This Monte Carlo approach might just be simple enough to work, and simple
enough that it just HAS to be tried.
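A rough Python sketch of the two-phase search described above (the fitness
function is only a placeholder assumption, standing in for a real test of how
well the candidate NN self-organizes):

```python
import random

GENOME_BITS = 200                              # "hundreds of bits in the genome"

def fitness(genome):
    return sum(genome) / len(genome)           # placeholder score only

def random_phase(trials=1000):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        g = [random.random() < 0.5 for _ in range(GENOME_BITS)]  # each bit asserted ~half the time
        s = fitness(g)
        if s > best_score:
            best, best_score = g, s
    return best, best_score

def refine_phase(genome, score):
    improved = True
    while improved:                            # vary bits one at a time until no further gain
        improved = False
        for i in range(GENOME_BITS):
            genome[i] = not genome[i]
            s = fitness(genome)
            if s > score:
                score, improved = s, True      # keep the flip
            else:
                genome[i] = not genome[i]      # revert the flip
    return genome, score

best, score = random_phase()
best, score = refine_phase(best, score)
print("best score found:", score)
```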

All thoughts, stones, and rotten fruit will be gratefully appreciated.

Thanks in advance.

Steve





Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Steve Richfield
 ideas - and deal with the real, only
 roughly definable world -  and you'll never address AGI..


  *From:* Steve Richfield steve.richfi...@gmail.com
 *Sent:* Sunday, June 20, 2010 7:06 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* [agi] An alternative plan to discover self-organization theory








Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Steve Richfield
Jim,

I'm trying to get my arms around what you are saying here. I'll make some
probably off-the-mark comments in the hope that you will clarify your
statement...

On Sun, Jun 20, 2010 at 2:38 AM, Jim Bromer jimbro...@gmail.com wrote:

 On Sun, Jun 20, 2010 at 2:06 AM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 No, I haven't been smokin' any wacky tobacy. Instead, I was having a long
 talk with my son Eddie, about self-organization theory. This is 
  *his* proposal:

 He suggested that I construct a simple NN that couldn't work without
 self organizing, and make dozens/hundreds of different neuron and synapse
 operational characteristics selectable ala genetic programming, put it on
 the fastest computer I could get my hands on, turn it loose trying arbitrary
 combinations of characteristics, and see what the winning combination
 turns out to be.


 That's a pretty interesting idea, but it won't work...I am joking, what I
 mean is that it is not very interesting if you are only interested
 in substantial success, it is much more interesting if you are interested in
 finding out what happens.  Genetic Programming has a flaw in that it is not
 designed to recall outputs that might be used in a constructive combination.


The program could take the winning genome, try inverting each bit
one-by-one, and observe the relative deterioration in performance. Then, the
important characteristics could be listed in order of deterioration when
their respective bits were inverted, with the most deteriorated ones listed
first. That should give me a pretty good idea of what is important and what
is not, which should be a BIG clue as to how it works.
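A small Python sketch of that bit-sensitivity ranking (the names here are my
own assumptions): flip each bit of the winning genome, measure the drop in
score, and list the most important characteristics first.

```python
def rank_bits_by_importance(genome, fitness):
    base = fitness(genome)
    losses = []
    for i in range(len(genome)):
        genome[i] = not genome[i]                   # invert one bit
        losses.append((base - fitness(genome), i))  # deterioration when bit i is flipped
        genome[i] = not genome[i]                   # restore it
    return sorted(losses, reverse=True)             # biggest deterioration first

# e.g. rank_bits_by_importance(winning_genome, fitness)[:10] would list the ten
# characteristics whose loss hurts the most.
```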


 If the algorithm was designed to do this, the candidate outputs (probably)
 would have to be organized (indexed) by parts and associated with the
 combinations that created them.


A mix of neurons with a winning genome varied slightly among subgroup(s)
might potentially discover an important combination of characteristics
needed for better operation, e.g. different operation for different emergent
layers.

Furthermore, since the output of a genetic algorithm is evaluated by a
 precise method,


THIS seems to be the BIG challenge - evaluating crap. Like having a Nobel
Laureate judging primary school science projects, only worse. Not only must
figures of merit be evaluated and combined akin to end-branch position
evaluation in a chess playing program, but there is added the sad fact that
the programmer's (my) own ignorance is built into those figures of merit.
This is why chess playing programs written by chess masters work better than
chess playing programs written by really good programmers.

I once wrote such a program for a commercial time sharing service and many
customers played it. It never lost! It also never won!!! It played such an
incredibly boring defensive game that everyone simply walked away from it,
rather than spending the hours needed to ever so carefully beat it. I
learned a lot from that program, NOT including how to play better chess.
Hopefully I can avoid the same fate with this effort.

My hope here is that the programmer (me) will become a master (understand
self-organization) in the process, which is really the goal of this program,
to train the programmer.

the sense of self organization might be voided or at least made more
 elusive and problematic.  You'd have to redesign how genetic algorithms
 evaluate their candidate outputs and before you did that you would have to
 put some thought into how a programmer can design a test for
 self-organization.  It is a subtle question.


I agree. Do you have any thoughts about how to go about this?

Steve





Re: [agi] Encouraging?

2009-01-14 Thread Steve Richfield
Mike,

On 1/14/09, Mike Tintner tint...@blueyonder.co.uk wrote:

 You have talked about past recessions being real opportunities for
 business. But in past recessions, wasn't business able to get lending? And
 doesn't the tightness of the credit market today inhibit some opportunities?


It definitely changes things. This could all change in a heartbeat. Suppose
for a moment that Saudi Arabia decided to secure its Riyal (their dollar)
with a liter of oil. The trillions now sitting in Federal Reserve accounts
without interest, and at risk of the dollar collapsing, could simply be
transferred into Riyals instead and be secure. Of course, this would
instantly bankrupt the U.S. government and many of those Trillions would be
lost, but it WOULD instantly restore whatever survived of the worldwide
monetary system.

Typically not. Most new innovations are started without access to credit in
 good times or bad.


Only because business can't recognize a good thing when it sees it, e.g.
Xerox not seeing the value of its early GUI work that presaged Windows.

Microsoft (MSFT) was started without any access to credit.


Unless you count the millions his parents had available to help him over the
rough spots.

It's only in crazy times that people lend money to people who are
 experimenting with innovations.


In ordinary times, they want stock instead, with its MUCH greater upside
potential.

Most of the great businesses today were started with neither a lot of
 venture capital nor with any bank lending until five or six years after they
 were [up and running].


This really gets down to what up and running means. For HP, they were making
light-bulb-stabilized audio oscillators in their garage.

Because of numerous possibilities like a secured Riyal mentioned above, I
suspect that things will instantly change one way or another as quickly as
they came down from Credit Default Swaps, a completely hidden boondoggle
until it went off.

Note how Zaire solved their monetary problems years ago. They closed their
borders over the Christmas holidays and exchanged new dollars for old. Then
they re-opened their borders, leaving the worthless old dollars held by
foreigners as someone ELSE's problem.

Mexico went through a sudden 1000:1 devaluation to solve their problems. In
one stroke this wiped out their foreign debt.

Expect something REALLY dramatic in the relatively near future. I suspect
that our bribe-o-cratic form of government will prohibit our taking
preemptive action, and thereby leave us at the (nonexistent) mercy of other
powers that aren't so inhibited.

I have NEVER seen a desperate action based on a simple lack of alternatives,
like the various proposed stimulus plans, ever work. Never expect a problem
to be solved by the same mindset that created it (Einstein). The lack of
investment money will soon be seen as the least of our problems.

On a side note, there hasn't been $20 spent on real genuine industrial
research in the last decade. This means that you can own the field of your
choice by simply investing a low level of research effort, and waiting for
things to change. I have selected 3 narrow disjoint areas and now appear to
be a/the leader in each. I am just waiting for the world to recognize that
it desperately needs one of them.

Any thoughts?

Steve Richfield





Re: [agi] Epineuronal programming

2009-01-08 Thread Steve Richfield
, structure, etc., then you would be on the right
track here. However, it does have edges and corners of countless
variety, and in dp/dt space, sometimes/often interesting objects move while
the background remains stationary, thereby momentarily extracting objects
with all their features separated from their surroundings - something that
can't happen outside of dp/dt space.
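A tiny toy example of that point (my own illustration): differencing two
frames cancels a stationary background and leaves only the moving object.

```python
import numpy as np

rng = np.random.default_rng(1)
background = rng.random((32, 32))                        # assumed static scene

frame1 = background.copy(); frame1[5:10, 5:10] += 1.0    # object at one position
frame2 = background.copy(); frame2[5:10, 8:13] += 1.0    # same object after moving

dpdt = frame2 - frame1                                   # background cancels exactly
changed = np.argwhere(np.abs(dpdt) > 0.5)
print("changed region spans", changed.min(axis=0), "to", changed.max(axis=0))
```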

Perhaps you learned about early perceptron experiments, where they taught
them with prototypical inputs, e.g. visual inputs with black figures on
white cards? Of course this worked great for demos, but was unworkable using
real-world patterns for learning. My hope/expectation is that dp/dt can put
things back into similar simplistic learning, but using real-world inputs.
In short, I see dp/dt as a sort of mathematical trick to return to the
simplicity and instantaneous learning of early perceptrons.

My impression here is that this entire field has become hung up on
using probabilistic methods, knowing full well that they don't work well
enough to utilize in practical AGI/NN systems for very fundamental reasons.
dp/dt methods promise an escape from this entire mess by sometimes
extracting prototypical cases from highly complex real-world input, and
thereby provide instant programming from just a few patterns, one of
which must pass the various tests of being prototypical (only a certain
percentage of inputs are active), containing a principal component (no
active lateral inhibition), and being interesting (downstream neurons later
utilize it). Failing these tests, throw it back, put your hook back in the
water, and wait for something else to bite, all while discarding all input
data until you encounter one that fits.

In short, dp/dt looks like a whole new game, with new opportunities that
would be COMPLETELY unworkable outside of dp/dt space. However, this new
game is really VERY old, some of it coming from the very early days of
perceptrons.

Steve Richfield
===

 On Wed, Jan 7, 2009 at 1:40 PM, Steve Richfield
 steve.richfi...@gmail.com wrote:
  Abram,
 
  On 1/6/09, Abram Demski abramdem...@gmail.com wrote:
 
  Well, I *still* think you are wasting your time with flat
  (propositional) learning.
 
 
  I'm not at all sure that I understand what you are saying here, so some
  elaboration is probably in order.
 
  I'm not saying there isn't still progress to
  be made in this area, but I just don't see it as an area where
  progress is critical.
 
 
  My guess is that the poor performance of non dp/dt methods is depressing,
 so
  everyone wants to look elsewhere. Damn that yellow stuff, I'm looking for
  SILVER. My hope/expectation is that this field can be supercharged with
  dp/dt methods.
 
  The main thing that we can do with propositional
  models when we're dealing with relational data is construct
  markov-models.
 
 
  By Markov you are referring to successive computation processes, e.g.
  layers of neurons, each feeding the next?
 
  Markov models are highly prone to overmatching the
  dataset when they become high-order.
 
 
  Only because the principal components haven't been accurately sorted out
 by
  dp/dt methods?
 
  So far as I am aware,
  improvements to propositional models mainly improve performance for
  large numbers of variables, since there isn't much to gain with only a
  few variables.
 
 
  Again, hoping that enough redundancy can deal with the overlapping
 effects
  of things that occur together, a problem generally eliminated by dp/dt
  methods.
 
  (FYI, I don't have much evidence to back up that
  claim.)
 
 
  When I finally get this all wrung out, I'll move onto using Eddie's NN
  platform, that ties into web cams and other complex software or input.
 Then,
  we should have lots of real-world testing. BTW, with really fast
 learning,
  MUCH larger models can be simulated on the same computers.
 
  So, I don't think progress on the propositional front directly
  translates to progress on the relational front, except in cases where
  we have astronomical amounts of data to prevent overmatching.
 
 
  In a sense, dp/dt provides another dimension to sort things out. I am
  hoping/expecting that LESS dp/dt data is needed this way than with other
  competing methods.
 
  Moreover, we need something more than just markov models!
 
 
  The BIG question is: Can we characterize what is needed?
 
  The transition to hidden-markov-model is not too difficult if we take
  the approach of hierarchical temporal memory; but this is still very
  simplistic.
 
 
  Most, though certainly not all elegant solutions are simple. Is dp/dt
 (and
  corollary methods) it or not? THAT is the question.
 
  Any thoughts about dealing with this?
 
 
  Here, I am hung up on this. Rather than respond in excruciating detail
  with a presumption of this, I'll make the following simplistic
 statement
  to get this process started.
 
  Simple learning methods have not worked well for reasons you mentioned
  above. The question here

Re: [agi] Epineuronal programming

2009-01-07 Thread Steve Richfield
Abram,

On 1/6/09, Abram Demski abramdem...@gmail.com wrote:

 Well, I *still* think you are wasting your time with flat
 (propositional) learning.


I'm not at all sure that I understand what you are saying here, so some
elaboration is probably in order.

I'm not saying there isn't still progress to
 be made in this area, but I just don't see it as an area where
 progress is critical.


My guess is that the poor performance of non dp/dt methods is depressing, so
everyone wants to look elsewhere. Damn that yellow stuff, I'm looking for
SILVER. My hope/expectation is that this field can be supercharged with
dp/dt methods.

The main thing that we can do with propositional
 models when we're dealing with relational data is construct
 markov-models.


By Markov you are referring to successive computation processes, e.g.
layers of neurons, each feeding the next?

Markov models are highly prone to overmatching the
 dataset when they become high-order.


Only because the principal components haven't been accurately sorted out by
dp/dt methods?

So far as I am aware,
 improvements to propositional models mainly improve performance for
 large numbers of variables, since there isn't much to gain with only a
 few variables.


Again, hoping that enough redundancy can deal with the overlapping effects
of things that occur together, a problem generally eliminated by dp/dt
methods.

(FYI, I don't have much evidence to back up that
 claim.)


When I finally get this all wrung out, I'll move on to using Eddie's NN
platform, which ties into web cams and other complex software or input. Then,
we should have lots of real-world testing. BTW, with really fast learning,
MUCH larger models can be simulated on the same computers.

So, I don't think progress on the propositional front directly
 translates to progress on the relational front, except in cases where
 we have astronomical amounts of data to prevent overmatching.


In a sense, dp/dt provides another dimension to sort things out. I am
hoping/expecting that LESS dp/dt data is needed this way than with other
competing methods.

Moreover, we need something more than just markov models!


The BIG question is: Can we characterize what is needed?

The transition to hidden-markov-model is not too difficult if we take
 the approach of hierarchical temporal memory; but this is still very
 simplistic.


Most, though certainly not all elegant solutions are simple. Is dp/dt (and
corollary methods) it or not? THAT is the question.

Any thoughts about dealing with this?


Here, I am hung up on this. Rather than respond in excruciating detail
with a presumption of this, I'll make the following simplistic statement
to get this process started.

Simple learning methods have not worked well for reasons you mentioned
above. The question here is whether dp/dt methods blow past those
limitations in general, and whether epineuronal methods do so best in
particular.

Are we on the same page here?

Steve Richfield

On Mon, Jan 5, 2009 at 12:42 PM, Steve Richfield
 steve.richfi...@gmail.com wrote:
  Thanks everyone for helping me wring out the whole dp/dt thing. Now for
  the next part of Steve's Theory...
 
  If we look at learning as extracting information from a noisy channel, in
  which the S/N ratio is usually 1, but where the S/N ratio is sometimes
  very high, the WRONG thing to do is to engage in some sort of slow
 averaging
  process as present slow-learning processes do. This especially when dp/dt
  based methods can occationally completely separate (in time) the signal
  from the noise.
 
  Instead, it would appear that the best/fastest/cleanest (from an
 information
  theory viewpoint) way to extract the signal would be to wait for a
  nearly-perfect low-noise opportunity and simply latch on to the
 principal
  component therein.
 
  Of course there will still be some noise present, regardless of how good
 the
  opportunity, so some sort of successive refinement process using future
  opportunities could further trim NN synapses, edit AGI terms, etc. In
  short, I see that TWO entirely different learning mechanisms are needed,
 one
  to initially latch onto an approximate principal component, and a second
 to
  refine that component.
 
  Processes like this have their obvious hazards, like initially failing to
  incorporate a critical synapse/term, and in the process dooming their
  functionality regardless of refinement. Neurons, principal components,
  equations, etc., that turn out to be worthless, or which are refined
 into
  nothingness, would simply trigger another epineuronal reprogramming to
 yet
  another principal component, when a lack of lateral inhibition or other
  AGI-equivalent process detects that something is happening that nothing
 else
  recognizes.
 
  In short, I am proposing abandoning the sorts of slow learning processes
  typical of machine learning, except for use in gradual refinement of
  opportunistic instantly-recognized principal components.
 
  Any

[agi] Epineuronal programming

2009-01-05 Thread Steve Richfield
Thanks everyone for helping me wring out the whole dp/dt thing. Now for
the next part of Steve's Theory...

If we look at learning as extracting information from a noisy channel, in
which the S/N ratio is usually <1, but where the S/N ratio is sometimes
very high, the WRONG thing to do is to engage in some sort of slow averaging
process as present slow-learning processes do. This is especially true when dp/dt
based methods can occasionally completely separate (in time) the signal
from the noise.

Instead, it would appear that the best/fastest/cleanest (from an information
theory viewpoint) way to extract the signal would be to wait for a
nearly-perfect low-noise opportunity and simply latch on to the principal
component therein.

Of course there will still be some noise present, regardless of how good the
opportunity, so some sort of successive refinement process using future
opportunities could further trim NN synapses, edit AGI terms, etc. In
short, I see that TWO entirely different learning mechanisms are needed, one
to initially latch onto an approximate principal component, and a second to
refine that component.
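A minimal toy sketch of those two mechanisms (my own illustration; the stream,
the hidden pattern, and the opportunity test are all assumptions): wait for a
low-noise opportunity, latch onto it as the prototype, then refine it slowly
on later opportunities.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(64); signal[10:20] = 1.0           # assumed hidden pattern

def observe():
    # most frames are noise; occasionally the signal appears nearly clean
    if rng.random() < 0.02:
        return signal + 0.05 * rng.normal(size=64)
    return rng.normal(size=64)

def looks_prototypical(x):
    # crude "opportunity" test: few inputs strongly active, the rest nearly quiet
    active = np.abs(x) > 0.5
    return 0.05 < active.mean() < 0.3 and np.abs(x[~active]).mean() < 0.2

prototype = None
for _ in range(5_000):
    x = observe()
    if not looks_prototypical(x):
        continue                                      # throw it back and wait
    if prototype is None:
        prototype = x.copy()                          # mechanism 1: latch instantly
    else:
        prototype += 0.1 * (x - prototype)            # mechanism 2: gradual refinement

if prototype is not None:
    print("recovered pattern at indices:", np.where(prototype > 0.5)[0])
```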

Processes like this have their obvious hazards, like initially failing to
incorporate a critical synapse/term, and in the process dooming their
functionality regardless of refinement. Neurons, principal components,
equations, etc., that turn out to be worthless, or which are refined into
nothingness, would simply trigger another epineuronal reprogramming to yet
another principal component, when a lack of lateral inhibition or other
AGI-equivalent process detects that something is happening that nothing else
recognizes.

In short, I am proposing abandoning the sorts of slow learning processes
typical of machine learning, except for use in gradual refinement of
opportunistic instantly-recognized principal components.

Any thoughts?

Steve Richfield





Re: [agi] Introducing Steve's Theory of Everything in cognition.

2009-01-03 Thread Steve Richfield
Abram,

The SparseDBN article you referenced reminds me that I should contact
Babelfish and propose a math-to-English translation option. Here were some
simple concepts obfuscated by notation.

I think you are saying that these guys have a really good learning
algorithm, and I have figured out how to make such things FAST, so that
together, these methods should about equal natural capabilities.

Continuing with your comments...

On 1/2/09, Abram Demski abramdem...@gmail.com wrote:

 Steve,

 I'm thinking that you are taking understanding to mean something
 like identifying the *actual* hidden variables responsible for the
 pattern, and finding the *actual* state of that variable.
 Probabilistic models instead *invent* hidden variables, that happen to
 help explain the data. Is that about right? If so, then explaining
 what I mean by functionally equivalent will help. Here is an
 example: suppose that we are looking at data concerning a set of
 chemical experiments. Suppose that the experimental conditions are not
 very well-controlled, so that interesting hidden variables are
 present. Suppose that two of these are temperature and air pressure,
 but that the two have the same effect on the experiment. Then the
 unsupervised learning will have no way of distinguishing between the
 two, so it will only find one hidden variable representing them. So,
 they are functionally equivalent.


OK.

This implies that, in the absence of further information, the best
 thing we can do to try to understand the data is to
 probabilistically model it.


OK.

Or perhaps when you say understanding it is short for understanding
 the implications of, ie, in an already-present model. In that case,
 perhaps we could separate the quality of predictions from the speed of
 predictions. A complicated-but-accurate model is useless if we can't
 calculate the information we need quickly enough.


I suspect that when better understandings are had, something will
emerge that is both fast AND accurate. Hence, I am resistant to choosing
unless/until forced to do so.

So, we also want an
 understandable model: one that doesn't take too long to create
 predictions. This would be different than looking for the best
 probabilistic model in terms of prediction accuracy.


Possible, but not shown to be so.

On the other
 hand, it is irrelevant in (practically?) all neural-network style
 approaches today, because the model size is fixed anyway.


I'm not sure I see what you are saying here. Until you run out of memory,
model size is completely variable.

If the output is being fed to humans rather than further along the
 network, as in the conference example, the situation is very
 different. Human-readability becomes an issue. This paper is a good
 example of an approach that creates better human-readability rather
 than better performance:

 http://www.stanford.edu/~hllee/nips07-sparseDBN.pdf

 The altered algorithm also seems to have a performance that matches
 more closely with statistical analysis of the


stray cat's

brain (which was the
 research goal), suggesting a correlation between human-readability and
 actual performance gains (since the brain wouldn't do it if it were a
 bad idea). In a probabilistic framework this is represented best by a
 prior bias for simplicity.


Here, everything boils down to the meaning of simplicity, e.g. does it
mean minimum energy RBM, or something else that is probably fairly similar.

Perhaps we should discuss the a priori knowledge issue from my prior
posting, as I suspect that some of that bears upon simplicity.

Thanks again for staying with me on this. I think we are gradually making
some real progress here.

Steve Richfield
=


 On Fri, Jan 2, 2009 at 1:36 PM, Steve Richfield
 steve.richfi...@gmail.com wrote:
  Abram,
 
  Oh dammitall, I'm going to have to expose the vast extent of my
  profound ignorance to respond. Oh well...
 
  On 1/1/09, Abram Demski abramdem...@gmail.com wrote:
 
  Steve,
 
  Sorry for not responding for a little while. Comments follow:
 
  
   PCA attempts to isolate components that give maximum
   information... so my question to you becomes, do you think that the
   problem you're pointing towards is suboptimal models that don't
   predict the data well enough, or models that predict the data fine
 but
   aren't directly useful for what you expect them to be useful for?
  
  
   Since prediction is NOT the goal, but rather just a useful measure, I
 am
   only interested in recognizing
   that which can be recognized, and NOT in expending resources on
   understanding semi-random noise.
   Further, since compression is NOT my goal, I am not interested in
   combining
   features
   in ways that minimize the number of components. In short, there is a
 lot
   to
   be learned from PCA,
   but a perfect PCA solution is likely a less-than-perfect NN
 solution.
 
  What I am saying is this: a good predictive model will predict
  whatever is desired

Re: [agi] Introducing Steve's Theory of Everything in cognition.

2009-01-02 Thread Steve Richfield
.  The threshold for feature recognition, e.g. the number of active
synapses that must be involved for a feature to be interesting.
3.  The acceptable fuzziness of recognition, e.g. just how accurately must
a feature match its pattern.
4.  ??? What have I missed in this list?
5.  Some or all of the above may be calculable based on ???

Thanks for your help.

Steve Richfield





Re: [agi] Hypercomputation and AGI

2009-01-01 Thread Steve Richfield
J. Andrew,

On 1/1/09, J. Andrew Rogers and...@ceruleansystems.com wrote:


 On Jan 1, 2009, at 2:35 PM, J. Andrew Rogers wrote:

 Since digital and analog are the same thing computationally (digital
 is a subset of analog), and non-digital computers have been generally
 superior for several decades, this is not relevant.



 Gah, that should be *digital* computers have generally been superior for
 several decades  (the last non-digital hold-outs I am aware of were designed
 in the late 1970s).


Ignoring the issues of representation and display, I agree. However,
consider three interesting cases...

1.  I only survived my college differential equations course with the help
of a (now antique) EAI analog computer. Therein, I could simply wire it up
as the equation stated, with about as many wires as symbols in the
equations, without (much) concern for the internal workings of either the
computer or the equation, and get out a parametric plot any way I wanted.
However, with a digital computer, maybe there is suitable software by now,
but I would have to worry about how the computer did things, e.g. how fine
the time slices are, etc. Further, I couldn't just throw the equation at
the machine with a digital computer much as I could do with the analog
computer, though again, maybe software has caught up by now.

2.  Related to the above and mentioned earlier, electrolytic fish-tank analogs
have long been used to characterize electric and magnetic fields. While
these may not be as accurate as digital simulation, they provide a TRUE
walk-around 3-D representation, and changes can be made in seconds with no
need to verify that the change indeed reflects the intended change. This is
another example where, at the loss of a few down-in-the-noise digits, you
can be SURE that the model indeed simulates reality. The same was long true
of wind tunnels, until things got SO valuable (and competitive) that it was
worth the millions of dollars to go after those last few digits.

3.  Conditioning high-speed phenomena. Transistors are now SO fast and have
SO much gain that they have become nearly perfect mathematical components.
Most people don't think of their TV tuners as being analog computers, but...

Steve Richfield





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Steve Richfield
J. Andrew,

On 12/30/08, J. Andrew Rogers and...@ceruleansystems.com wrote:


 On Dec 30, 2008, at 12:51 AM, Steve Richfield wrote:

 On a side note, there is the clean math that people learn on their way
 to a math PhD, and then there is the dirty math that governs physical
 systems. Dirty math is fraught with all sorts of multi-valued functions,
 fundamental uncertainties, etc. To work in the world of dirty math, you
 must escape the notation and figure out what the equation is all about, and
 find some way of representing THAT, which may well not involve simple
 numbers on the real-number line, or even on the complex number plane.



 What does dirty math really mean?  There are engineering disciplines
 essentially *built* on solving equations with gross internal inconsistencies
 and unsolvable systems of differential equations. The modern world gets
 along pretty admirably suffering the very profitable and ubiquitous
 consequences of their quasi-solutions to those problems.  But it is still a
 lot of hairy notational math and equations, just applied in a different
 context that has function uncertainty as an assumption. The unsolvability
 does not lead them to pull numbers out of a hat, they have sound methods for
 brute-forcing fine approximations across a surprisingly wide range of
 situations. When the clean mathematical methods do not apply, there are
 other different (not dirty) mathematical methods that you can use.


The dirty line is rather fuzzy, but you know you've crossed it when,
instead of locations, things have probability spaces, or when you are trying
to numerically solve systems of simultaneous equations and it always seems
that at least one of them produces NaNs, etc. Algebra was designed for the
real world as we experience it, and works for most engineering problems,
but it often runs aground in theoretical physics, at least until you abandon
the idea of a 1:1 correspondence between states and variables.

Indeed, I have sometimes said the only real education I ever got in AI was
 spending years studying an engineering discipline that is nothing but
 reducing very complex systems of pervasively polluted data and nonsense
 equations to precise predictive models where squeezing out an extra 1%
 accuracy meant huge profit.  None of it is directly applicable, the value
 was internalizing that kind of systems perspective and thinking about every
 complex systems problem in those terms, with a lot of experience
 algorithmically producing predictive models from them. It was different but
 it was still ordinary math, just math appropriate for the particular
 problem.


Bingo! You have to tailor the techniques to the problem - it is more than just
solving the equations; often the representation of quantities needs to
be in some sort of multivalued form.

The only thing you could really say about it was that it produced a lot of
 great computer scientists and no mathematicians to speak of (an odd bias,
 that).


Yea, but I'd bet that you got pretty good at numerical analysis  ;-)

  With this as background, as I see it, hypercomputation is just another
 attempt to evade dealing with some hard mathematical problems.



 The definition of hypercomputation captures some very specific
 mathematical concepts that are not captured in other conceptual terms.  I do
 not see what is being evaded,


... which is where the break probably is. If someone is going to claim that
Turing machines are incapable of doing something, then it seems important to
state just what that something is.

since it is more like pointing out the obvious with respect to certain
 limits implied by the conventional Turing model.


I wonder if we aren't really talking about analog computation (i.e.
computing with analogues, e.g. molecules) here? Analog computers have been
handily out-computing digital computers for a long time. One analog computer
that produced tide tables, now in a glass case at the NOAA headquarters,
performed well for ~100 years until it was finally replaced by a large CDC
computer - and probably now with a PC. Some magnetic systems engineers still
resort to fish tank analogs rather than deal with software.

Steve Richfield





Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-28 Thread Steve Richfield
Loosemore, et al,

Just to get this discussion out of esoteric math, here is a REALLY SIMPLE
way of doing unsupervised learning with dp/dt that looks like it ought to
work.

Suppose we record each occurrence of the inputs to a neuron, keeping
counters to identify how many times each combination has happened. For this
discussion, each input will be considered to have either a substantially
positive, substantially negative, or nearly zero dp/dt. When we reach a
threshold of, say, 20 identical occurrences of the same combination of
dp/dt values that is NOT accompanied by lateral inhibition, we will proclaim THAT
to be the principal component function for that neuron to perform for the rest
of its life. Thereafter, the neuron will require the previously observed
positive and negative inputs to be as programmed, but will ignore all inputs
that were nearly zero.
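
To make that concrete, here is a rough Python sketch of the counting scheme.
The threshold of 20 comes from above, but the quantization cutoff and all of
the names are just illustrative, and lateral inhibition is left out entirely:

from collections import Counter

THRESHOLD = 20    # identical occurrences required before locking in
EPS = 0.1         # |dp/dt| below this counts as "nearly zero" (arbitrary)

def quantize(dpdt):
    # Map each input to +1 (substantial positive), -1 (substantial negative), or 0.
    return tuple(0 if abs(x) < EPS else (1 if x > 0 else -1) for x in dpdt)

class CountingNeuron:
    def __init__(self):
        self.counts = Counter()
        self.pattern = None              # the locked-in principal component

    def observe(self, dpdt):
        if self.pattern is not None:
            return
        combo = quantize(dpdt)
        self.counts[combo] += 1
        if self.counts[combo] >= THRESHOLD:
            self.pattern = combo         # this neuron's job for the rest of its life

    def respond(self, dpdt):
        # After lock-in: the remembered positive and negative inputs must match;
        # inputs that were nearly zero when the pattern was learned are ignored.
        if self.pattern is None:
            return False
        combo = quantize(dpdt)
        return all(c == p for c, p in zip(combo, self.pattern) if p != 0)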

Of course, many frames will be corrupted because of overlapping phenomena,
sampling on dp/dt edges, noise, fast phenomena, etc., etc. However, there
will be few if any precise repetitions of corrupted frames, whereas clean
frames should be quite common.

First, the most common frame (all zeros - nothing there) will be
recognized, followed by each of the most common simultaneously occurring
temporal patterns recognized by successive neurons, all identified in order
of decreasing frequency exactly as needed for Huffman or PCA coding.

This process won't start until all inputs are accompanied by an indication
that they have already been programmed by this process, so that programming
will proceed layer by layer without corruption from inputs being only
partially developed (a common problem in multi-layer NNs).

While clever math might make this work a little faster, this should be
guaranteed to work and produce substantially perfect unsupervised learning,
albeit probably slower than better-math methods, but probably faster than wet
neurons, which can't save thousands of prior combinations during early
programming.

Of course, this would be completely unworkable outside of dp/dt space, since
in object space it would probably exhaust a computer's memory before
completing.

Does this get the Loosemore Certificate of No Objection as being an
apparently workable method for substantially optimal unsupervised learning?

Thanks for considering this.

Steve Richfield





Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-25 Thread Steve Richfield
Richard,

On 12/25/08, Richard Loosemore r...@lightlink.com wrote:

 Steve Richfield wrote:

 Ben, et al,
  After ~5 months of delay for theoretical work, here are the basic ideas
 as to how really fast and efficient automatic learning could be made almost
 trivial. I decided NOT to post the paper (yet), but rather, to just discuss
 the some of the underlying ideas in AGI-friendly terms.
  Suppose for a moment that a NN or AGI program (they can be easily mapped
 from one form to the other


 ... this is not obvious, to say the least.  Mapping involves many
 compromises that change the functioning of each type ...


There are doubtless exceptions to my broad statement, but generally, neuron
functionality is WIDE open to be pretty much ANYTHING you choose, including
the functionality of an AGI engine operating on its equations.

In the reverse, any NN could be expressed in a shorthand form that contains
structure, synapse functions, etc., and an AGI engine could be
built/modified to function according to that shorthand.

In short, mapping between NN and AGI forms presumes flexibility in the
functionality of the target form. Where that flexibility is NOT present,
e.g. because of orthogonal structure, etc., then you must ask whether
something is being gained or lost by the difference. Clearly, any transition
that involves a loss should be carefully examined to see if the entire
effort is headed in the wrong direction, which I think was your original
point here.



 ), instead of operating on objects (in an

 object-oriented sense)


 Neither NN nor AGI has any intrinsic relationship to OO.


Clearly I need a better term here. Both NNs and AGIs tend to have neurons or
equations that reflect the presence (or absence) of various objects,
conditions, actions, etc. My fundamental assertion is that if you
differentiate the inputs so that everything in the entire network reflects
dp/dt instead of straight probabilities, then the network works identically,
but learning is GREATLY simplified.



 , instead, operates on the rate-of-changes in the

 probabilities of objects, or dp/dt. Presuming sufficient bandwidth to
 generally avoid superstitious coincidences, fast unsupervised learning then
 becomes completely trivial, as like objects cause simultaneous
 like-patterned changes in the inputs WITHOUT the overlapping effects of the
 many other objects typically present in the input (with numerous minor
 exceptions).


 You have already presumed that something supplies the system with objects
 that are meaningful.  Even before your first mention of dp/dt, there has to
 be a mechanism that is so good that it never invents objects such as:

 Object A:  A person who once watched all of Tuesday Welds movies in the
 space of one week or

 Object B:  Something that is a combination of Julius Caesar's pinky toe
 and a sour grape that Brutus' just spat out or

 Object C:  All of the molecules involved in a swiming gala that happen to
 be 17.36 meters from the last drop of water that splashed from the pool.

 You have supplied no mechanism that is able to do that, but that mechanism
 is 90% of the trouble, if learning is what you are about.


With prior unsupervised learning you are 100% correct. However none of the
examples you gave involved temporal simultaneity. I will discuss B above
because it is close enough to be interesting.

If indeed someone just began to notice something interesting about Caesar's
pinkie toe *as* they just began to notice the taste of a sour grape, then
yes, that probably would be learned via the mechanisms I am talking about.
However, if one was present perfect tense while the other was just
beginning, then it wouldn't be learned with my approach but would be with prior
unsupervised learning methods. For example, if Caesar's pinkie toe had been
noticed and examined, and then, before the condition passed, they tasted a sour
grape, then the temporal simultaneity of the dp/dt edges wouldn't exist to learn
from. Of course, in both cases, the transforms would work identically given
identical prior learning/programming.



 Instead, you waved your hands and said fast unsupervised learning
  then becomes completely trivial  this statement is a declaration
 that a good mechanism is available.

 You then also talk about like objects.  But the whole concept of like
 is extraordinarily troublesome.  Are Julius Caesar and Brutus like each
 other?  Seen from our distance, maybe yes, but from the point of view of
 Julius C., probably not so much.  Is a G-type star like a mirror?  I don't
 know any stellar astrophysicists who would say so, but then again OF COURSE
 they are, because they are almost indistinguishable, because if you hold a
 mirror up in the right way it can reflect the sun and the two visual images
 can be identical.

 These questions can be resolved, sure enough, but it is the whole business
 of resolving these questions (rather than waving a hand over them and
 declaring them to be trivial) that is the point.


I think

Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-25 Thread Steve Richfield
Andrew,

On 12/24/08, J. Andrew Rogers and...@ceruleansystems.com wrote:


 On Dec 24, 2008, at 10:33 PM, Steve Richfield wrote:

 Of course you could simply subtract successive samples from one another -
 at some considerable risk, since you are now sampling at only half the
 Nyquist-required speed to make your AGI/NN run at its intended speed. In
 short, if inputs are not being electronically differentiated, then sampling
 must proceed at least twice as fast as the NN/AGI cycles.



 Or... you could be using something like compressive sampling, which safely
 ignores silly things like the Nyquist limit.


While compressive sampling needn't be performed so frequently, neither does
it (directly) produce the dp/dt values that are needed. Further, while the
samples are less frequent, they must be carefully timed, which may be more
difficult than frequent sampling. As I understand it, compressive sampling
is really great for reducing storage at the cost of greatly increasing the
demodulation effort. However, here we don't have any need for storage,
just the dp/dt values.

In most cases, I suspect that simple electronic differentiation will work
best, eliminating the need for ANY computational logic to compute dp/dt.

Thanks for the comment.

Steve Richfield





Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-25 Thread Steve Richfield
Vladimir,

On 12/24/08, Vladimir Nesov robot...@gmail.com wrote:

 On Thu, Dec 25, 2008 at 9:33 AM, Steve Richfield
 steve.richfi...@gmail.com wrote:
 
  Any thoughts?
 

 I can't tell this note from nonsense. You need to work on
 presentation,


I am having the usual problem that what is obvious to me may not be
obvious to anyone else. It would help me a LOT if you could be more
specific, so I can avoid burying people in paper.

if your idea can actually hold some water. If you think
 you understand the idea enough to express it as math, by all means do
 so, it'll make your own thinking clearer if nothing else.



Σn kn * sn = Σn kn * ∫ (∂sn/∂t) ∂t     Basic neuron with ∂sn/∂t inputs

           = Σn ∫ kn * (∂sn/∂t) ∂t     OK to include efficacy kn

           = ∫ Σn kn * (∂sn/∂t) ∂t     OK to integrate the overall result

∂(Σn kn * sn)/∂t = Σn kn * (∂sn/∂t)    Computing ∂result/∂t for the next neuron

QED: the identical internal functionality of multiplying inputs by efficacies
and summing them produces identical represented results, regardless of whether
the signals directly represent objects or ∂object/∂t.
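
As a quick numerical sanity check, here is a tiny Python sketch (the
efficacies and signals are made up) showing that differencing the summed
output gives the same result as summing the differenced inputs:

import numpy as np

rng = np.random.default_rng(0)
k = np.array([0.5, -1.2, 2.0])                 # efficacies kn
s = rng.normal(size=(3, 100)).cumsum(axis=1)   # made-up signals sn(t)

ds = np.diff(s, axis=1)           # discrete stand-in for the ∂sn/∂t inputs

out_then_diff = np.diff(k @ s)    # ∂(Σ kn*sn)/∂t: sum first, then differentiate
diff_then_out = k @ ds            # Σ kn*(∂sn/∂t): differentiate first, then sum

assert np.allclose(out_then_diff, diff_then_out)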

Does this help?

Steve Richfield





Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-25 Thread Steve Richfield
Richard,

On 12/25/08, Richard Loosemore r...@lightlink.com wrote:

 Steve Richfield wrote:

  There are doubtless exceptions to my broad statement, but generally,
 neuron functionality is WIDE open to be pretty much ANYTHING you choose,
 including that of an AGI engine's functionality on its equations.
  In the reverse, any NN could be expressed in a shorthand form that
 contains structure, synapse functions, etc., and an AGI engine could be
 built/modified to function according to that shorthand.
  In short, mapping between NN and AGI forms presumes flexibility in the
 functionality of the target form. Where that flexibility is NOT present,
 e.g. because of orthogonal structure, etc., then you must ask whether
 something is being gained or lost by the difference. Clearly, any transition
 that involves a loss should be carefully examined to see if the entire
 effort is headed in the wrong direction, which I think was your original
 point here.



 There is a problem here.

 When someone says X and Y can easily be mapped from one form to the other
 there is an implication that they are NOt suggesting that we go right down
 to the basic constituents of both X and Y in order to effect the mapping.

 Thus:  Chalk and Cheese can easily be mapped from one to the other 
 trivially true if we are prepared to go down to the common denominator of
 electrons, protons and neutrons.  But if we stay at a sensible level then,
 no, these do not map onto one another.


The problem here is that you were thinking of present existing NN and AGI
systems, neither of which work (yet) in any really useful way, and that it was
obviously impossible to directly convert from one system with its set of bad
assumptions to another system with a completely different set of bad
assumptions. I completely agree, but I assert that the answer to that
particular question is of no practical interest to anyone.

On the other hand, converting between NN and AGI systems built on the SAME
set of assumptions would be simple. This situation doesn't yet exist. Until
then, converting a program from one dysfunctional platform to another is
uninteresting. When the assumptions get ironed out, then all systems will be
built on the same assumptions, and there will be few problems going between
them, EXCEPT:

Things need to be arranged in arrays for automated learning, which fits the
present NN paradigm much better than the present AGI paradigm.

Similarly, if you claim that NN and regular AGI map onto one another, I
 assume that you are saying something more substantial than that these two
 can both be broken down into their primitive computational parts, and that
 when this is done they seem equivalent.


Even this breakdown isn't required if both systems are built on the same
correct assumptions. HOWEVER, I see no way to transfer fast learning from an
NN-like construction to an AGI-like construction. Do you? If this question
has no answer, then it would seem to redirect AGI efforts toward NN-like
constructions if they are ever to learn like we do.

NN and regular AGI, they way they are understood by people who understand
 them, have very different styles of constructing intelligent systems.


Neither of which work (yet). Of course, we are both trying to fill in the
gaps.

Sure, you can code both in C, or Lisp, or Cobol, but that is to trash the
 real meaning of are easily mapped onto one another.


One of my favorite consulting projects involved coding an AI program to
solve complex problems that were roughly equivalent to solving algebraic
equations. This composed the Yellow Pages for 28 different large phone
directories. The project was for a major phone company and had to be written
entirely in COBOL. Further, it had to run at n log n speed and NOT n^2
speed, which I did by using successive sorts instead of list processing
methods. It would have been rather difficult to achieve the needed
performance in C or Lisp, even though COBOL would seem to be everyone's
first choice as the last choice on the list of prospective platforms.
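
The sort trick is easy to show in miniature. Here is an illustrative Python
sketch (the record layout is invented, not the actual Yellow Pages data) where
grouping listings under headings with one sort replaces an n^2 nested scan:

from itertools import groupby
from operator import itemgetter

# Toy records: (heading, listing) pairs, e.g. ("PLUMBERS", "Acme Pipe Co").

def group_quadratic(records, headings):
    # List-processing style: for every heading, scan every record - O(n^2).
    return {h: [name for key, name in records if key == h] for h in headings}

def group_by_sorting(records):
    # Successive-sorts style: one O(n log n) sort, then a single linear pass.
    records = sorted(records, key=itemgetter(0))
    return {heading: [name for _, name in group]
            for heading, group in groupby(records, key=itemgetter(0))}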

), instead of operating on objects (in an

object-oriented sense)


Neither NN nor AGI has any intrinsic relationship to OO.

  Clearly I need a better term here. Both NNs and AGIs tend to have neurons
 or equations that reflect the presence (or absence) of various objects,
 conditions, actions, etc. My fundamental assertion is that if you
 differentiate the inputs so that everything in the entire network reflects
 dp/dt instead of straight probabilities, then the network works identically,
 but learning is GREATLY simplified.


 Seems like a simple misunderstanding:  you were not aware that object
 oriented does not mean the same as saying that there are fundamental atomic
 constituents of a representation.


A typical semantic overloading problem. Atomic constituent orientation
doesn't really work either, because in later stages, individual
terms/neurons can represent entire

[agi] Levels of Self-Awareness?

2008-12-24 Thread Steve Richfield
This is more of a question than a statement.

There appears to be several levels of self-awareness, e.g.

1.  Knowing that you are an individual in a group, have a name, etc. Even
kittens and puppies quickly learn their names, know to watch others when
their names are called, etc.

2.  Understanding that they have some (limited) ability to modify their own
behavior, reactions, etc., so that you can explain to them how something
they did was inappropriate, and they can then modify their behavior. They can
filter what they say, etc. I know one lady with two Master's degrees who
apparently has NOT reached this level.

3.  Understanding that the process of thinking itself is a skill that no
one has completely mastered, that there are advanced techniques to be
learned, that there are probably as-yet undiscovered techniques for really
advanced capabilities, etc. Further, being capable of internalizing new
thinking techniques. There appear to be several people on this list who
have apparently NOT reached this level.

4.  Any theories as to what the next level might be?

Note that the above relates to soul, especially in that an individual at a
higher level might look upon individuals at a lower level as soulless
creatures. Given that various people span several levels, wouldn't this
consign much of the human race to being soulless creatures?

Clearly, it would seem that no AGI researcher can program a level of
self-awareness that they themselves have not reached, tried and failed to
reach, etc. Hence, this may impose a cap on a future AGI's potential
abilities, especially if the gold is in #4, #5, etc.

Has someone already looked into this?

Steve Richfield





Re: [agi] Levels of Self-Awareness?

2008-12-24 Thread Steve Richfield
Philip,

On 12/24/08, Philip Hunt cabala...@googlemail.com wrote:

 2008/12/24 Steve Richfield steve.richfi...@gmail.com:
 
  Clearly, it would seem that no AGI researcher can program a level of
  self-awareness that they themselves have not reached, tried and failed to
  reach, etc.

 This is not at all clear to me. It is certainly prossible for
 programmers to program computer to do tasks better than they can (e.g.
 play chess)


Yes, but these programmers already know how to play chess. They (probably)
can't program a game in which they themselves don't have any skill at all.

In the case of higher forms of self-awareness, programmers in effect don't
even know the rules of the game to be programmed, yet the game will have
a vast overall effect on everything the AGI thinks.

To illustrate, much human thought goes into dispute resolution - a field
rich with advanced concepts that are generally unknown to the general
population and AGI programmers. Since this has so much to do with the
subtleties of common errors in human thinking, there is no practical way for
an AGI to figure this out for itself short of participating in thousands of
disputes - that humans would simply not tolerate.

Once these concepts are understood, the very act of thinking is changed
forever. Someone who is highly trained and experienced in dispute resolution
thinks quite differently than you probably do, and probably regards your
thinking as immature and generally low-level. In short, their idea of
self-awareness is quite different than yours.

Regardless of tools, I don't see how such a thing could be programmed except
by someone who is already able to think at that level.

Then, how about the NEXT level, whatever that might be?


 and I see no reason why it shouldn't be possible for self
 awareness.


My point is that lower-level self-awareness is MUCH simpler to contemplate
than is higher-level, and further, that different people (and AGI
researchers) function at various levels.

Indeed it would be rather trivial to give an AGI access to
 its source code.


Why should it be any better at modifying its source code than we would be at
writing it? The problem of levels still remains.

 Steve Richfield





[agi] Introducing Steve's Theory of Everything in cognition.

2008-12-24 Thread Steve Richfield
Ben, et al,

After ~5 months of delay for theoretical work, here are the basic ideas as
to how really fast and efficient automatic learning could be made almost
trivial. I decided NOT to post the paper (yet), but rather, to just discuss
some of the underlying ideas in AGI-friendly terms.

Suppose for a moment that a NN or AGI program (they can be easily mapped
from one form to the other), instead of operating on objects (in an
object-oriented sense), operates on the rates of change in the
probabilities of objects, or dp/dt. Presuming sufficient bandwidth to
generally avoid superstitious coincidences, fast unsupervised learning then
becomes completely trivial, as like objects cause simultaneous
like-patterned changes in the inputs WITHOUT the overlapping effects of the
many other objects typically present in the input (with numerous minor
exceptions).

But, what would Bayesian equations or NN neuron functionality look like in
dp/dt space? NO DIFFERENCE (math upon request). You could trivially
differentiate the inputs to a vast and complex existing AGI or NN, integrate
the outputs, and it would perform *identically* (except for some little
details discussed below). Of course, while the transforms would be
identical, unsupervised learning would be quite a different matter, as now
the nearly-impossible becomes trivially simple.
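
To see that round trip concretely, here is a minimal Python sketch for a
single linear neuron (the weights and input signals are made up); the
differentiated-then-integrated result matches the direct result up to a lost
constant of integration:

import numpy as np

rng = np.random.default_rng(1)
w = np.array([0.7, -0.3, 1.5])                  # made-up synaptic weights
p = rng.normal(size=(3, 200)).cumsum(axis=1)    # stand-ins for object probabilities p(t)

direct = w @ p                                  # neuron run on the raw object values

dp_dt = np.diff(p, axis=1, prepend=p[:, :1])    # differentiate the inputs
integrated = np.cumsum(w @ dp_dt)               # integrate the neuron's dp/dt output

residual = direct - integrated
assert np.allclose(residual, residual[0])       # identical up to a constant of integration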

For some things (like short-term memory) you NEED an integrated
object-oriented result. Very simple - just integrate the signal. How about
muscle movements? Note that muscle actuation typically causes acceleration,
which doubly integrates the driving signal, so the already-differentiated
signal would need yet another differentiation in order that, when doubly
integrated by the mechanical system, it produces movement to the desired
location.

Note that once input values are stored in a matrix for processing, the baby
has already been thrown out with the bathwater. You must START with
differentiated input values and NOT static measured values. THIS is what the
PCA folks have been missing in their century-long quest for an efficient
algorithm to identify principal components, as their arrays had already
discarded exactly what they needed. Of course you could simply subtract
successive samples from one another - at some considerable risk, since you
are now sampling at only half the Nyquist-required speed to make your AGI/NN
run at its intended speed. In short, if inputs are not being electronically
differentiated, then sampling must proceed at least twice as fast as the
NN/AGI cycles.

But - how about the countless lost constants of integration? They all come
out in the wash - except for where actual integration at the outputs is
needed. Then, clippers and leaky integrators, techniques common to
electrical engineering, will work fine and produce many of the same
artifacts (like visual extinction) seen in natural systems.
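
A leaky integrator is only a few lines of code. Here is a minimal Python
sketch (the leak factor is arbitrary) of rebuilding a usable output from a
dp/dt signal while old offsets fade away instead of persisting:

def leaky_integrate(dp_dt_samples, leak=0.99):
    # y[t] = leak * y[t-1] + dp/dt[t]: an approximate integral in which any
    # constant of integration decays away rather than accumulating forever.
    y = 0.0
    out = []
    for x in dp_dt_samples:
        y = leak * y + x
        out.append(y)
    return out

With the dp/dt input held at zero, the output decays toward zero rather than
holding its value, which is one simple way to get the extinction-like
artifacts mentioned above.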

It all sounds SO simple, but I couldn't find any prior work in this
direction using Google. However, the collective memory of this group is
pretty good, so perhaps someone here knows of some prior effort that did
something like this. I would sure like to put SOMETHING in the References
section of my paper.

Loosemore: THIS is what I was talking about when I explained that there is
absolutely NO WAY to understand a complex system through direct observation,
except by its useless anomalies. By shifting an entire AGI or NN to operate
on derivatives instead of object values, it works *almost* (the operative
word in this statement) exactly the same as one working in object-oriented
space, only learning is transformed from the nearly-impossible to the
trivially simple. Do YOU see any observation-based way to tell how we are
operating behind our eyeballs, object-oriented or dp/dt? While there are
certainly other explanations for visual extinction, this is the only one
that I know of that is absolutely impossible to engineer around. No one has
(yet) proposed any value to visual extinction, and it is a real problem for
hunters, so if it were avoidable, then I suspect that ~200 million years of
evolution would have eliminated it long ago.

From this comes numerous interesting corollaries.

Once the dp/dt signals are in array form, it would become simple to
automatically recognize patterns representing complex phenomena at the level
of the neurons/equations in question. Of course, putting it in this array
form is effectively a transformation from AGI equations to NN construction,
a transformation that has been discussed in prior postings. In short, if you
want your AGI to learn at anything approaching biological speeds, it appears
that you absolutely MUST transform your AGI structure to a NN-like
representation, regardless of the structure of the processor on which it
runs.

Unless I am missing something really important here, this should COMPLETELY
transform the AGI field, regardless of the particular approach taken.

Any thoughts?

Steve Richfield

Re: [agi] Relevance of SE in AGI

2008-12-21 Thread Steve Richfield
Valentina,

Having written http://www.DrEliza.com, several NN programs, and a LOT of
financial applications, and holding a CDP - widely recognized in financial
programming circles - here are my comments.

The real world is a little different than the theoretical world of CS, in
that people want results rather than proofs. However, especially in the
financial world, errors CAN be expensive. Hence, the usual approaches
involve extensive internal checking (lots of Assert statements, etc.),
careful code reviews (that often uncover errors that testing just can't
catch because a tester may not think of all of the ways that a piece of code
might be stressed), and code-coverage analysis to identify what has NOT been
exercised/exorcised.

I write AI software pretty much the same way that I have written financial
software.

Note that really good internal checking can almost replace early testing,
because as soon as something produces garbage, it will almost immediately
get caught. Hence, just write it and throw it into the rest of the code, and
let its environment test it. Initially, it might contain temporary code to
display its results, which will soon get yanked when everything looks OK.

Finally, really good error handling is an absolute MUST, because no such
complex application is ever completely wrung out. If it isn't fail-soft,
then it probably will never ever make it as a product. This pretty much
excludes C/C++ from consideration, but still leaves C# in the running.
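
As a toy illustration of that style (the function and its checks are invented,
and it is written in Python only for brevity), heavy internal checking plus a
fail-soft wrapper looks roughly like this:

import logging

def apply_payment(balance_cents, payment_cents):
    # Internal checking: catch garbage the moment it appears.
    assert isinstance(balance_cents, int) and isinstance(payment_cents, int)
    assert payment_cents >= 0, "negative payment should have been rejected upstream"

    new_balance = balance_cents - payment_cents

    assert new_balance <= balance_cents   # sanity: a payment never raises the balance
    return new_balance

def apply_payment_failsoft(balance_cents, payment_cents):
    # Fail-soft wrapper: log the problem and leave the account untouched
    # rather than crashing the whole application.
    try:
        return apply_payment(balance_cents, payment_cents)
    except (AssertionError, TypeError) as err:
        logging.error("payment rejected, account left unchanged: %s", err)
        return balance_cents

Running the same code with the interpreter's optimize switch strips the
asserts, roughly the same move as re-compiling with most of the error checking
turned off.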

I prefer programming in environments that check everything possible, like
Visual Basic or .NET. These save a LOT of debugging effort by catching
nearly all of the really hard bugs that languages like C/C++ seem to make in
bulk. Further, when you think that your application is REALLY wrung out, you
can then re-compile with most of the error checking turned off to get C-like
speed.

Note that these things can also be said for Java, but most implementations
don't provide compilers that can turn off error checking, which cuts their
speed to ~1/3 that of other approaches. Losing 2/3 of the speed is a high
price to pay for a platform.

Steve Richfield
==
On 12/20/08, Valentina Poletti jamwa...@gmail.com wrote:

 I have a question for you AGIers.. from your experience as well as from
 your background, how relevant do you think software engineering is in
 developing AI software and, in particular AGI software? Just wondering..
 does software verification as well as correctness proving serve any use in
 this field? Or is this something used just for Nasa and critical
 applications?

 Valentina






Re: [agi] Building a machine that can learn from experience

2008-12-18 Thread Steve Richfield
Richard,

On 12/18/08, Richard Loosemore r...@lightlink.com wrote:

 Rafael C.P. wrote:

 Cognitive computing: Building a machine that can learn from experience
 http://www.physorg.com/news148754667.html


 Neuroscience vaporware.


It isn't neuroscience yet, because they haven't done any science yet.

It isn't vaporware yet because they have made no claims of functionality.

In short, it has a LONG way to go before it can be considered to be
neuroscience vaporware.

Indeed, this article failed to make any case for any rational hope for
success.

Steve Richfield





Re: [agi] Should I get a PhD?

2008-12-17 Thread Steve Richfield
Yan,

Your quest incorporates some questionable presumptions that you will
literally be betting your (future) life on.

1.  That AGI as presently conceived won't be just another dead end along the
way to intelligent machines. There have already been several dead ends and
the present incarnation of AGI implicitly presumes that there will be no
solution to the fast learning problem (that I think I have a solution to).
If the present incarnation of AGI falls by the wayside, your unipolar
background would provide excellent qualifications for the unemployment line.

2.  AGI is already becoming politicized, meaning that by the time you get
your PhD, all of the good leadership slots will be filled with people who
will be jealously guarding them from upstarts (like you will then be).
However, your PhD will still help you get a good minimum wage job as an
intern somewhere.

It appears that the only really good reason to get a PhD is to raise money
for a startup. If you have a personality for gladhanding investors,
promoting technologies, writing and presenting business plans, etc., then I
would strongly recommend your getting a PhD. However, if you see your path
as running in other directions, then you might want to reconsider.

Lotsa luck,

Steve Richfield





Re: [agi] Vector processing and AGI

2008-12-12 Thread Steve Richfield
Ben,

On 12/12/08, Ben Goertzel b...@goertzel.org wrote:

   There isn't much that an MIMD machine can do better than a
 similar-sized
   SIMD machine.
 
  Hey, that's just not true.
 
  There are loads of math theorems disproving this assertion...
 
 
  Oops, I left out the presumed adjective real-world. Of course there are
  countless diophantine equations and other math trivia that aren't
  vectorizable.
 
  However, anything resembling a brain in that the process can be done by
  billions of slow components must by its very nature vectorizable. Hence,
 in
  the domain of our discussions, I think my statement still holds

 I'm not so sure, but for me to explore this area would require a lot
 of time and I don't
 feel like allocating it right now...


No need, so long as
1.  You see some possible future path to vectorizability, and
2.  My or similar vector processor chips aren't a reality yet.

I'm also not so sure our current models of brain mechanisms or
 dynamics are anywhere near
 accurate, but that's another issue...


I finally cracked the theory of everything in cognition puzzle discussed
here ~4 months ago, which comes with an understanding of the super-fast
learning observed in biological systems, e.g. visual systems that tune
themselves up in the first few seconds after an animal's eyes open for the
first time. I am now trying to translate it from Steveze to readable
English, which hopefully should be done in a week or so. Also, insofar as
possible, I am translating all formulas into grammatically correct English
statements, for the mathematically challenged readers. Unless I missed
something really BIG, it will change everything from AGI to NN to ???. Most
especially, AGI is largely predicated on the INability to perform such fast
learning, which is where experts enter the picture. With this theory,
modifying present AGI approaches to learn fast shouldn't be all that
difficult.

After any off-line volunteers have first had their crack, I'll post it here
for everyone to beat it up.

Do I hear any volunteers out there in Cyberspace who want to help hold my
feet to the fire off-line regarding those pesky little details that so
often derail grand theories?

 Indeed, AGI and physics simulation may be two of the app areas that have
  the easiest times making use of these 80-core chips...
 
 
  I don't think Intel is even looking at these. They are targeting embedded
  applications.

 Well, my bet is that a main app of multicore chips is ultimately gonna
 be gaming ...
 and gaming will certainly make use of fancy physics simulation ...


Present gaming video chips have special processors that are designed to
perform the 3D to 2D transformations needed for gaming, and for maintaining
3D models. It is hard (though not impossible) to compete with custom hardware
that has been refined for a particular application.

Also, it would seem to be a terrible waste of tens of teraflops just to
operate a video game.

and
 I'm betting it will
 also make use of early-stage AGI...


There is already some of that creeping into some games, including actors who
perform complex jobs in changing virtual environments.

Steve Richfield





Re: [agi] Vector processing and AGI

2008-12-12 Thread Steve Richfield
Andi and Ben,

On 12/12/08, wann...@ababian.com wann...@ababian.com wrote:

 I don't remember what references there were earlier in this thread, but I
 just saw a link on reddit to some guys in Israel using a GPU to greatly
 accelerate a Bayesian net.  That's certainly an AI application:

 http://www.cs.technion.ac.il/~marks/docs/SumProductPaper.pdf

 http://www.reddit.com/r/programming/comments/7j1gr/accelerating_bayesian_network_200x_using_a_gpu/


My son was trying to get me interested in doing this ~3 years ago, but I
blew him off because I couldn't see a workable business model around it. It
is 100% dependent on pasting together a bunch of hardware that is designed
to do something ELSE, and even a tiny product change would throw software
compatibility and other things out the window.

Also, the architecture I am proposing promises ~3 orders of magnitude more
speed, along with a really fast global memory that completely obviates
the complex caching they are proposing.

Steve Richfield





Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Steve Richfield
 information do you get from this movie, that might
prospectively affect your future actions, that couldn't prospectively be put
into words.

A friend of mine once fought and killed an attacking St Bernard dog with his
bare hands. While I suspect that a movie of this would have been
spectacular, the USEFUL information is how it is possible to kill a
vicious attacking animal with your bare hands. Secret - when they lunge to
bite you, you cram your fist as far down their throat as is humanly possible
and then expand it to block their airway. Of course, this expansion would
NOT be visible on a movie. The animal then suffocates as the battle
continues. This fellow has the scars on his upper arms that show what this
fight must have been like. Here, a still picture wouldn't convey much more
useful information than a description, but sometimes, a picture really IS
worth a thousand words in proving a point.


 And here's the reason I talk about understanding metacognitively about
 imaginative intelligence. (I don't mean to be disparaging - I understand
 comparably little re logic, say]. If you were a filmmaker, say, and had
 thought about the problems of filmmaking, you would probably be alive to the
 difference between what images show - people's actual faces and voices - and
 what they can't show - what lies behind - their hidden thoughts and
 emotions.  And you wouldn't have posed your objection.


We obviously still have some issues regarding data vs. prospectively useful
information to iron out.


Steve Richfield
===

  MT::

  *Even words for individuals are generalisations.
 *Ben Goertzel is a continuously changing reality. At 10.05 pm he will be
 different from 10.00pm, and so on. He is in fact many individuals.
 *Any statement about an individual, like Ben Goertzel, is also vague and
 open-ended.

 *The only way to refer to and capture individuals with high (though not
 perfect) precision is with images.
 *A movie of Ben chatting from 10.00pm to 10.05pm will be subject to
 extremely few possible interpretations, compared with a verbal statement
 about him.


 Even better than a movie, I had some opportunity to observe and interact
 with Ben during CONVERGENCE08, I dispute the above statement!

 I had sought to extract just a few specific bits of information from/about
 Ben. Using VERY specific examples:

 Bit#1: Did Ben understand that AI/AGI code and NN representation were
 interchangeable, at the prospective cost of some performance one way or the
 other. Bit#1=TRUE.

 Bit#2: Did Ben realize that there were prospectively ~3 orders of magnitude
 in speed available by running NN instead of AI/AGI representation on an
 array processor instead of a scalar (x86) processor. Bit#2 affected by
 question, now True, but utility disputed by the apparent unavailability of
 array processors.

 Bit#3: Did Ben realize that the prospective emergence of array processors
 (e.g. as I have been promoting) would obsolete much of his present
 work, because its structure isn't vectorizable, so he is in effect betting
 on continued stagnation in processor architecture, and may in fact be a
 small component in a large industry failure by denying market? Bit#3=
 probably FALSE.

 As always, I attempted to get the measure of the man, but as so often
 happens with leaders, there just isn't a bin to toss them in. Without an
 appropriate bin, I got lots of raw data (e.g., he has a LOT of hair), but
 not all that much usable data.

 Alternatively, the Director of R&D for Google had a bin waiting for him, as
 like SO many people who rise to the top of narrowly-focused organizations,
 he had completely bought into the myths at Google without allowing for
 usurping technologies. I saw the same thing at Microsoft when I examined
 their R&D operations in 1995. It takes a particular sort of narrow mind to
 rise to the top of a narrowly-focused organization. Here, there aren't many
 bits of description about the individuals, but I could easily write a book
 about the bin that the purest of them rise to fill.

 Steve Richfield








Re: [agi] Vector processing and AGI

2008-12-11 Thread Steve Richfield
Ben,

Before I comment on your reply, note that my former posting was about my
PERCEPTION rather than the REALITY of your understanding, with the
difference being taken up in the answer being less than 1.00 bit of
information.

Anyway, that said, on with a VERY interesting (to me) subject.

On 12/11/08, Ben Goertzel b...@goertzel.org wrote:

 Well, the conceptual and mathematical algorithms of NCE and OCP
 (my AI systems under development) would go more naturally on MIMD
 parallel systems than on SIMD (e.g. vector) or SISD systems.


There isn't much that an MIMD machine can do better than a similar-sized
SIMD machine. The usual problem is in finding a way to make such a large
SIMD machine. Anyway, my proposed architecture (now under consideration at
AMD) also provides for limited MIMD operation, where the processors could be
at different places in a single complex routine.

Anyway, I was looking at a 10,000:1 speedup over SISD, and then giving up
~10:1 to go from probabilistic logic equations to matrices that do the same
things, which is how I came up with the 1000:1 from the prior posting.

I played around a bunch with MIMD parallel code on the Connection Machine
 at ANU, back in the 90s


The challenge is in geometry - figuring out how to get the many processors
to communicate and coordinate with each other without spending 99% of their
cycles in coordination and communication.

However, indeed the specific software code we've written for NCE and OCP
 is intended for contemporary {distributed networks of multiprocessor
 machines}
 rather than vector machines or Connection Machines or whatever...

 If vector processing were to become a superior practical option for AGI,
 what would happen to the code in OCP or NCE?

 That would depend heavily on the vector architecture, of course.

 But one viable possibility is: the AtomTable, ProcedureRepository and
 other knowledge stores remain the same ... and the math tools like the
 PLN rules/formulas and Reduct rules remain the same ... but the MindAgents
 that use the former to carry out cognitive processes get totally
 rewritten...


I presume that everything is table driven, so the code could be completely
vectorized to execute the table on any sort of architecture, including SIMD.

However, if you are actually executing CODE, e.g. as compiled from a reality
representation, then things would be difficult for an SIMD architecture,
though again, you could also interpret tables containing the same
information at the usual 10:1 slowdown, which is what I was expecting
anyway.

This would be a big deal, but not the kind of thing that means you have to
 scrap all your implementation work and go back to ground zero


That's what I figured.

OO and generic design patterns do buy you *something* ...


OO is often impossible to vectorize.

Vector processors aside, though ... it would be a much *smaller*
 deal to tweak my AI systems to run on the 100-core chips Intel
 will likely introduce within the next decade.


There is an 80-core chip due out any time now. Intel has had BIG problems
finding anything to run on them, so I suspect that they would be more than
glad to give you a few if you promise to do something with them.

I listened to an inter-processor communications plan for the 80 core chip
last summer, and it sounded SLOW - like there was no reasonable plan for
global memory. I suspect that your plan in effect requires FAST global
memory (to avoid crushing communications bottlenecks), and this is NOT
entirely simple on MIMD architectures.

My SIMD architecture will deliver equivalent global memory speeds of ~100x
the clock speed, which still makes it a high-overhead operation on a machine
that peaks out at ~20K operations per clock cycle.

Steve Richfield





Re: [agi] Vector processing and AGI

2008-12-11 Thread Steve Richfield
Ben,

On 12/11/08, Ben Goertzel b...@goertzel.org wrote:

  There isn't much that an MIMD machine can do better than a similar-sized
  SIMD machine.

 Hey, that's just not true.

 There are loads of math theorems disproving this assertion...


Oops, I left out the presumed adjective real-world. Of course there are
countless diophantine equations and other math trivia that aren't
vectorizable.

However, anything resembling a brain, in that the process can be done by
billions of slow components, must by its very nature be vectorizable. Hence, in
the domain of our discussions, I think my statement still holds.


  OO and generic design patterns do buy you *something* ...
 
 
  OO is often impossible to vectorize.

 The point is that we've used OO design to wrap up all
 processor-intensive code inside specific objects, which could then be
 rewritten to be vector-processing friendly...


As long as the OO is at a high enough level so as not to gobble up a bunch
of time in the SISD control processor, then no problem.

 There is an 80-core chip due out any time now. Intel has had BIG problems
  finding anything to run on them, so I suspect that they would be more
 than
  glad to give you a few if you promise to do something with them.

 Indeed, AGI and physics simulation may be two of the app areas that have
 the easiest times making use of these 80-core chips...


I don't think Intel is even looking at these. They are targeting embedded
applications.

Steve Richfield





Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-10 Thread Steve Richfield
Russell,

On 12/10/08, Russell Wallace [EMAIL PROTECTED] wrote:

 On Wed, Dec 10, 2008 at 5:47 AM, Steve Richfield
 [EMAIL PROTECTED] wrote:
  I don't see how, because it is completely unbounded and HIGHLY related to
  specific platforms and products. I could envision a version that worked
 for
  a specific class of problems on a particular platform, but it would
 probably
  be more work than it was worth UNLESS the user-base were really large,
 e.g.
  it might work well for something like Microsoft Windows or Office.

 Okay, Windows or Office would seem like reasonable targets.


Now, if only Microsoft were willing both to pay for it and to provide the
super-expert(s) to help program it, as I certainly don't know the common
subtle traps that users typically fall into.

 I already had circuit board repair in my sights. Perhaps you recall the
  story of Eleanor my daughter observing the incredible parallels between
  difficult circuit board repair and chronic illnesses?

 I recall a mention of it, but no details; have a link handy?


It's on the History part of the Dr. Eliza site.

 I could probably rough out a KB for this in ~1 week of work. I'm just not
  sure what to do with it once done. Did you have a customer or marketing
 idea
  in mind?

 I hadn't thought that far ahead, but given how much money is spent
 every year by people covering the size range from the punter trying to
 keep an old banger on the road up to the armed forces of NATO on
 maintaining and repairing equipment, I'd be astonished if there wasn't
 a market there for any tool that could make a real contribution.


Maybe I should adopt the ORCAD model, where I provide it for free for a
while, then start inching the price up and UP and UP.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Seeking CYC critiques PS

2008-12-10 Thread Steve Richfield
Mike,

On 12/10/08, Mike Tintner [EMAIL PROTECTED] wrote:

  *Even words for individuals are generalisations.
 *Ben Goertzel is a continuously changing reality. At 10.05 pm he will be
 different from 10.00pm, and so on. He is in fact many individuals.
 *Any statement about an individual, like Ben Goertzel, is also vague and
 open-ended.

 *The only way to refer to and capture individuals with high (though not
 perfect) precision is with images.
 *A movie of Ben chatting from 10.00pm to 10.05pm will be subject to
 extremely few possible interpretations, compared with a verbal statement
 about him.


Even better than a movie, I had some opportunity to observe and interact
with Ben during CONVERGENCE08, and I dispute the above statement!

I had sought to extract just a few specific bits of information from/about
Ben. Using VERY specific examples:

Bit#1: Did Ben understand that AI/AGI code and NN representation were
interchangeable, at the prospective cost of some performance one way or the
other? Bit#1 = TRUE.

Bit#2: Did Ben realize that there were prospectively ~3 orders of magnitude
in speed available by running an NN representation instead of an AI/AGI
representation on an array processor instead of a scalar (x86) processor?
Bit#2 was affected by the question itself - now TRUE, but its utility is
disputed by the apparent unavailability of array processors.

Bit#3: Did Ben realize that the prospective emergence of array processors
(e.g. as I have been promoting) would obsolete much of his present
work, because its structure isn't vectorizable, so that he is in effect
betting on continued stagnation in processor architecture, and may in fact
be a small component in a large industry failure by denying the market?
Bit#3 = probably FALSE.

As always, I attempted to get the measure of the man, but as so often
happens with leaders, there just isn't a bin to toss them in. Without an
appropriate bin, I got lots of raw data (e.g., he has a LOT of hair), but
not all that much usable data.

Alternatively, the Director of R&D for Google had a bin waiting for him, as,
like SO many people who rise to the top of narrowly-focused organizations,
he had completely bought into the myths at Google without allowing for
usurping technologies. I saw the same thing at Microsoft when I examined
their R&D operations in 1995. It takes a particular sort of narrow mind to
rise to the top of a narrowly-focused organization. Here, there aren't many
bits of description about the individuals, but I could easily write a book
about the bin that the purest of them rise to fill.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


[agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-09 Thread Steve Richfield
Larry Lefkowitz, Stephen Reed, et al,

First, thanks Steve for your pointer to Larry Lefkowitz, and thanks Larry
for so much time and effort in trying to relate our two approaches.

After discussions with Larry Lefkowitz of Cycorp, I have had a bit of an
epiphany regarding machine knowledge that I would like to share for all to
comment on...

First, it wasn't as though there were points of incompatibility between
Cycorp's idea of machine knowledge and that used in DrEliza.com, but rather,
there were no apparent points of connection. How could two related things be
so completely different, especially when both are driven by the real world?

Then it struck me. Cycorp and others here on this forum seek to represent
the structures of real world domains in a machine, whereas Dr. Eliza seeks
only to represent the structure of the malfunctions within structures, while
making no attempt whatever to represent the structures in which those
malfunctions occur, as though those malfunctions have their very own
structure, as they truly do. This seems a bit like simulating the holes in
a semiconductor.
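
A toy contrast of the two kinds of representation (hypothetical records,
chosen only to make the distinction concrete - neither is Cyc's nor
Dr. Eliza's actual format):

    # Structure-centric (Cyc-style): how the system normally works.
    structure_kb = {
        "thyroid": {"produces": "thyroid hormone", "regulated_by": "TSH"},
    }

    # Malfunction-centric (Dr. Eliza-style): only the failure and its
    # cause-and-effect links are represented; normal structure stays implicit.
    malfunction_kb = [
        {"malfunction": "central hypothermia",
         "differential_symptom": "low daytime body temperature",
         "next_effect": "numerous minor health problems"},
    ]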

OF COURSE there were no points of connection.

Larry pointed out the limitations in my approach - which I already knew,
namely, Dr. Eliza will NEVER EVER understand normal operation when all it
has to go on are *AB*normalities.

Similarly, I pointed out that Cycorp's approach had the inverse problem, in
that it would probably take the quadrillion dollars that Matt Mahoney keeps
talking about to ever understand malfunctions starting from the wrong side
(as seen from Dr. Eliza's viewpoint) of things.

In short, I see both of these as being quite valid but completely
incompatible approaches, that accomplish very different things via very
different methods. Each could move toward the other's capabilities given
infinite resources, but only a madman (like Matt Mahoney?) would ever throw
money at such folly.

Back to my reason for contacting Cycorp - to see if some sort of web
standard to represent metadata could be hammered out. Neither Larry nor I
could see how Dr. Eliza's approach could be adapted to Cycorp, and further,
this is aside from Cycorp's present interests. Hence, I am on my own here.

Hence, it is my present viewpoint that I should proceed with my present
standard to accompany the only semi-commercial program that models *
malfunctions* rather than the real world, somewhat akin to the original
Eliza program. However, I should prominently label the standard and
appropriate fields therein appropriately so that there is no future
confusion between machine knowledge and Dr. Eliza's sort of inverse machine
knowledge.

Any thoughts?

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-09 Thread Steve Richfield
Matt,

It appears that you may have completely missed the point of my earlier post,
namely that

Knowledge + Inverse Knowledge ~= Understanding (hopefully)

There are few things in the world that are known SO well that you can
directly infer all potential modes of failure from direct knowledge thereof.
Especially with things that have been engineered (or divinely created) or
evolved (vs accidental creations like mountains), the failures tend to come
from the FLAWS in the understanding of their creators.

Alternatively, it is possible to encode just the flaws, which tend to spread
via cause and effect chains and easily step out of the apparent structure.
A really good example is where a designer with a particular misunderstanding
of something produces a design that is prone to certain sorts of failures in
many subsystems. Of course, these failures are the next step in the cause
and effect chain that started with his flawed education and have nothing at
all to do with the interrelationships of the systems that are failing.
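
A minimal sketch of walking such a chain back from an observed failure to the
originating flaw (hypothetical data; the chain is just the designer example
above restated):

    # Each failure points back one step toward its cause.
    chain = {
        "subsystem failures in the field": "design prone to that class of failure",
        "design prone to that class of failure": "designer's flawed education",
    }

    def root_cause(observation):
        while observation in chain:      # follow the cause-and-effect links
            observation = chain[observation]
        return observation

    print(root_cause("subsystem failures in the field"))
    # -> designer's flawed education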

Continuing...

On 12/9/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  Steve, the difference between Cyc and Dr. Eliza is that Cyc has much more
 knowledge. Cyc has millions of rules. The OpenCyc download is hundreds of MB
 compressed. Several months ago you posted the database file for Dr. Eliza. I
 recall it was a few hundred rules and I think under 1 MB.


You have inadvertently made my point: in areas of inverse knowledge,
OpenCyc with its hundreds of MB of data still falls short of Dr. Eliza
with 1% of that knowledge. Similarly, Dr. Eliza's structure would prohibit
it from being able to answer even simple questions regardless of the size of
its KB. This is because OpenCyc is generally concerned with how things work,
rather than how they fail, while Dr. Eliza comes at this from the other end.

 Both of these databases are far too small for AGI because neither has
 solved the learning problem.


... Which was exactly my point when I referenced the quadrillion dollars you
mentioned. If you want to be able to do interesting things for only ~$1M or
so, no problem IF you stick to an appropriate corner of the knowledge (as
Dr. Eliza does). However, if you come out of the corners, then be prepared to
throw your $1Q at it.

Note here that I am NOT disputing your ~$1Q, but rather I am using it to
show that the approach is inefficient, especially since some REALLY valuable
parts of what it might bring, namely the solutions to many of the most
difficult problems, can come pretty cheaply, ESPECIALLY if you get your
proposal working.

Are we on the same page now?

Steve Richfield

   --
 *From:* Steve Richfield [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, December 9, 2008 3:06:08 AM
 *Subject:* [agi] Machine Knowledge and Inverse Machine Knowledge...

 Larry Lefkowitz, Stephen Reed, et al,

 First, Thanks Steve for your pointer to Larry Lefkowitz, and thanks Larry
 for so much time and effort in trying to relate our two approaches..

 After discussions with Larry Lefkowitz of Cycorp, I have had a bit of an
 epiphany regarding machine knowledge that I would like to share for all to
 comment on...

 First, it wasn't as though there were points of incompatibility between
 Cycorp's idea of machine knowledge and that used in DrEliza.com, but rather,
 there were no apparent points of connection. How could two related things be
 so completely different, especially when both are driven by the real world?

 Then it struck me. Cycorp and others here on this forum seek to represent
 the structures of real world domains in a machine, whereas Dr. Eliza seeks
 only to represent the structure of the malfunctions within structures, while
 making no attempt whatever to represent the structures in which those
 malfunctions occur, as though those malfunctions have their very own
 structure, as they truly do. This seems a bit like simulating the holes in
 a semiconductor.

 OF COURSE there were no points of connection.

 Larry pointed out the limitations in my approach - which I already knew,
 namely, Dr. Eliza will NEVER EVER understand normal operation when all it
 has to go on are *AB*normalities.

 Similarly, I pointed out that Cycorp's approach had the inverse problem, in
 that it would probably take the quadrillion dollars that Matt Mahoney keeps
 talking about to ever understand malfunctions starting from the wrong side
 (as seen from Dr. Eliza's viewpoint) of things.

 In short, I see both of these as being quite valid but completely
 incompatible approaches, that accomplish very different things via very
 different methods. Each could move toward the other's capabilities given
 infinite resources, but only a madman (like Matt Mahoney?) would ever throw
 money at such folly.

 Back to my reason for contacting Cycorp - to see if some sort of web
 standard to represent metadata could be hammered out. Neither Larry nor I
 could see how Dr. Eliza's approach could

Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-09 Thread Steve Richfield
Russell,

On 12/9/08, Russell Wallace [EMAIL PROTECTED] wrote:

 As an application domain for Dr. Eliza, medicine has the obvious
 advantage of usefulness, but the disadvantage that it's hard to assess
 performance -- specific data is largely unavailable for privacy
 reasons, and most of us lack the expertise to properly assess it even
 if it were available.


I think this is like dog food - if the dogs like it, then it is a success. I
suspect that its monetary value will far more relate to people liking it
than to its success rate.

IMHO, it all has much to do with the structure of the corner it occupies,
and has little to do with the specific Dr. Eliza technology. If there are a
few oddball conditions that it must deal with and everything else is handled
by the professionals, then it will be a wild success. If there are a large
number of disjoint unlikely things for it to deal with in its corner, no
super-experts to program it, and no relatively clean boundary between its
last resort corner and ordinary professional technology, then it will
probably be a failure.

Medicine fits this well because pretty much everything is covered EXCEPT
central metabolic control problems, malnutrition, and inadvertent poisoning.



 Is there any chance of applying it to debugging software,


I don't see how, because it is completely unbounded and HIGHLY related to
specific platforms and products. I could envision a version that worked for
a specific class of problems on a particular platform, but it would probably
be more work than it was worth UNLESS the user-base were really large, e.g.
it might work well for something like Microsoft Windows or Office.

or repairing machines?


I already had circuit board repair in my sights. Perhaps you recall the
story of Eleanor my daughter observing the incredible parallels between
difficult circuit board repair and chronic illnesses? Here, the technology
has not progressed all that much in the last 40 years, and most of the
really clever methods for finding elusive problems that stump the experts
have nothing at all to do with the specific circuits.

As a bonus, these methods are NOT taught in engineering schools and many are
NOT widely known. Engineers are notoriously bad at repairing their own
designs - which further illustrates the potential need.
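
For concreteness, one generic, circuit-independent entry might be encoded
roughly like this (hypothetical field names, not an actual Dr. Eliza record):

    kb_entry = {
        "domain": "circuit board repair",
        "differential_symptom": "the fault comes and goes as the board warms up",
        "eliciting_question": "Does freeze spray on the suspect area make the fault appear or vanish?",
        "candidate_cause": "marginal joint or cracked trace that opens with temperature",
    }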

I could probably rough out a KB for this in ~1 week of work. I'm just not
sure what to do with it once done. Did you have a customer or marketing idea
in mind?

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

2008-12-09 Thread Steve Richfield
Matt,

On 12/9/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  No, I don't believe that Dr. Eliza knows nothing about normal health, or
 that Cyc knows nothing about illness.


Of course you are right. In Dr. Eliza's case, it is quick to ask questions
to establish subsystem normalcy to eliminate candidate problems. Further,
Cyc already has long lists of malfunctions for every subsystem.

However, Dr. Eliza can't do anything with normalcy other than dismissing
certain specific abnormalities, and Cyc can't do much of anything with an
abnormality other than parroting information out about it when asked.

Now, if you want either of these programs to really USE their knowledge
structure to do much more than just checking something off or parroting
something out, then you quickly see the distinction that I was pointing out.
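
A minimal sketch of the one thing Dr. Eliza can do with normalcy - pruning
candidates (hypothetical data, drawn loosely from the conditions mentioned
earlier in this thread):

    candidates = {
        "central hypothermia": "low daytime body temperature",
        "inadvertent poisoning": "symptoms track a particular exposure",
    }
    confirmed_normal = {"low daytime body temperature"}  # checked out as normal

    remaining = {cause for cause, sign in candidates.items()
                 if sign not in confirmed_normal}
    print(remaining)  # {'inadvertent poisoning'}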

Steve Richfield


   --
 *From:* Steve Richfield [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, December 9, 2008 3:21:18 PM
 *Subject:* Re: [agi] Machine Knowledge and Inverse Machine Knowledge...

 Matt,

 It appears that either you completely missed the point in my earlier post,
 that

 Knowledge + Inverse Knowledge ~= Understanding (hopefully)


 There are few things in the world that are known SO well that from direct
 knowledge thereof that you can directly infer all potential modes of
 failure. Especially with things that have been engineered (or divinely
 created), or evolved (vs accidental creations like mountains), the failures
 tend to come in the FLAWS in the understanding of their creators.

 Alternatively, it is possible to encode just the flaws, which tend to
 spread via cause and effect chains and easily step out of the apparent
 structure. A really good example is where a designer with a particular
 misunderstanding of something produces a design that is prone to certain
 sorts of failures in many subsystems. Of course, these failures are the next
 step in the cause and effect chain that started with his flawed education
 and have nothing at all to do with the interrelationships of the systems
 that are failing.

 Continuing...

 On 12/9/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  Steve, the difference between Cyc and Dr. Eliza is that Cyc has much
 more knowledge. Cyc has millions of rules. The OpenCyc download is hundreds
 of MB compressed. Several months ago you posted the database file for Dr.
 Eliza. I recall it was a few hundred rules and I think under 1 MB.


 You have inadvertently made my point, that in areas of inverse knowledge
 that OpenCyc with its hundreds of MBs of data still falls short of Dr. Eliza
 with 1% of that knowledge. Similarly, Dr. Eliza's structure would prohibit
 it from being able to answer even simple questions regardless of the size of
 its KB. This is because OpenCyc is generally concerned with how things work,
 rather than how they fail, while Dr. Eliza comes at this from the other end.

  Both of these databases are far too small for AGI because neither has
 solved the learning problem.


 ... Which was exactly my point when I referenced the quadrillion dollars
 you mentioned. If you want to be able to do interesting things for only ~$1M
 or so, no problem IF you stick to an appropriate corner of the knowledge (as
 Dr. Eliza does). However, if come out of the corners, then be prepared to
 throw your $1Q at it.

 Note here that I am NOT disputing your ~$1Q, but rather I am using it to
 show that the approach is inefficient, especially if some REALLY valuable
 parts of what it might bring, namely, the solutions to many of the most
 difficult problems, can come pretty cheaply, ESPECIALLY if you get your
 proposal working..

 Are we on the same page now?

 Steve Richfield

   --
 *From:* Steve Richfield [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, December 9, 2008 3:06:08 AM
 *Subject:* [agi] Machine Knowledge and Inverse Machine Knowledge...

 Larry Lefkowitz, Stephen Reed, et al,

 First, Thanks Steve for your pointer to Larry Lefkowitz, and thanks Larry
 for so much time and effort in trying to relate our two approaches..

 After discussions with Larry Lefkowitz of Cycorp, I have had a bit of an
 epiphany regarding machine knowledge that I would like to share for all to
 comment on...

 First, it wasn't as though there were points of incompatibility between
 Cycorp's idea of machine knowledge and that used in DrEliza.com, but rather,
 there were no apparent points of connection. How could two related things be
 so completely different, especially when both are driven by the real world?

 Then it struck me. Cycorp and others here on this forum seek to represent
 the structures of real world domains in a machine, whereas Dr. Eliza seeks
 only to represent the structure of the malfunctions within structures, while
 making no attempt whatever to represent the structures in which those
 malfunctions occur, as though those malfunctions have their very own

Re: [agi] Religious attitudes to NBIC technologies

2008-12-08 Thread Steve Richfield
Everyone,

The problem here is that WE don't have anything to point to as OUR religion,
so that everyone else has the power of stupidity in general and the 1st
amendment in particular, yet we don't have any such power.

I believe that it is possible to fill in this gap, but I don't wish to
discuss incomplete solutions on public forums. However, if you have any
ideas just how OUR religion should be structured, then please feel free to
send them to me, preferably off-line. It would be a real shame to do a bad
job of this, so I'm keeping my detailed thoughts to myself pending a live
birth.

Note Buddhism's belief structure that does NOT include a Deity.

Note Islam's various provisions for unbelievers to get a free pass, and
sometimes even break a rule here and there, so long as they pretend to
believe.

Any thoughts?

Steve Richfield

On 12/8/08, Philip Hunt [EMAIL PROTECTED] wrote:

 2008/12/8 Bob Mottram [EMAIL PROTECTED]:
  People who are highly religious tend to be very past positive
  according the Zimbardo classification of people according to their
  temporal orientation. [...]
  I agree that in time we will see more polarization around a variety of
  technology related issues.

 You're probably right. Part of the problem is that these people
 [correctly] believe that science and technology are destroying their
 worldview. And as the gaps in scientific knowledge decrease, there's
 less room for the God of the gaps to occupy.

 Having said that, I'm not aware that nanotechnology or AI are
 specifically prohibited by any of the major religions. And if one
 society forgoes science, they'll just get outcompeted by their
 neighbours.

 --
 Philip Hunt, [EMAIL PROTECTED]
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Seeking CYC critiques

2008-12-06 Thread Steve Richfield
Matt,

On 12/6/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Sat, 12/6/08, Steve Richfield [EMAIL PROTECTED] wrote:

  Internet AGIs are the technology of the future, and always will be. There
 will NEVER EVER in a million years be a thinking Internet silicon
 intelligence that will be able to solve substantial real-world problems
 based only on what exists on the Internet. I think that my prior email was
 pretty much a closed-form proof of that. However, there are MUCH simpler
 methods that work TODAY, given the metadata that is presently missing from
 the Internet.

 The internet has about 10^14 to 10^15 bits of knowledge as searchable text.
 AGI requires 10^17 to 10^18 bits.


This presumes that there isn't some sort of agent at work that filters a
particular important type of information, so that even a googol of text
wouldn't be any closer. As I keep explaining, that agent is there and
working well, to filter the two things that I keep mentioning. Hence, you
are WRONG here.

If we assume that the internet doubles every 1.5 to 2 years with Moore's
 Law, then we should have enough knowledge in 15-20 years.


Unfortunately, I won't double my own postings, and few others will double
their own output. Sure, there will be some additional enlargement of the
Internet, but its growth is linear once past its introduction (which we are
now past), short of exponential population growth, which is on a scale of a
century or so. In short, Moore's law simply doesn't apply here, any more
than 9 women can make a baby in a month.
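
For reference, here is where the quoted 15-20 year figure comes from - pure
doubling arithmetic, which assumes exactly the exponential growth being
disputed above:

    import math

    have_bits, need_bits = 1e15, 1e18
    doublings = math.log2(need_bits / have_bits)   # ~10 doublings
    print(doublings * 1.5, doublings * 2.0)        # ~15 to ~20 years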

However, much of this new knowledge is video, so we also need to solve
 vision and speech along with language.


Which of course has been stymied by the lack of metadata - my point all
along.

 While VERY interesting, your proposal appears to leave the following
 important questions unanswered:
  1.  How is it an AGI? I suppose this is a matter of definitions. It looks
 to me more like a protocol.

 AGI means automating the economy so we don't have to work. It means not
 just solving the language and vision problems, but also training the
 equivalent of 10^10 humans to make money for us. After hardware costs come
 down, custom training for specialized roles will be the major expense. I
 proposed surveillance as the cheapest way for AGI to learn what we want. A
 cheaper alternative might be brain scanning, but we have not yet developed
 the technology. (It will be worth US$1 quadrillion if you can do it).

 Or another way to answer your question, AGI is a lot of dumb specialists
 plus an infrastructure to route messages to the right experts.


I suspect that your definition here is unique. Perhaps others on this forum
would like to proclaim which of us is right/wrong. I thought that the
definition more or less included an intelligent *computer*.

 2.  As I explained earlier on this thread, all human-human languages have
 severe semantic limitations, such that (applying this to your porposal),
 only very rarely will there ever exist an answer that PRECISELY answers a
 question, so some sort of acceptable error must go into the equation. In
 the example you used in your paper, Jupiter is NOT the largest planet that
 is known, as the astronomers have identified larger planets in other solar
 systems. There may be a good solution to this, e.g. provide the 3 best
 answers that are semantically disjoint.

 People communicate in natural language 100 to 1000 times faster than any
 artificial language, in spite of its supposed limitations. Remember that the
 limiting cost is transferring knowledge from human brains to AGI, 10^17 to
 10^18 bits at 2 bits per second per person.


Unfortunately, when societal or perceptual filters are involved, there will
remain HUGE holes in even an infinite body of data. Of course, our society
has its problems precisely because of those holes, so more data doesn't
necessarily get you any further.

As for Jupiter, any question you ask is going to get more than one answer.
 This is not a new problem.
 http://www.google.com/search?q=what+is+the+largest+planet%3F

 In my proposal, peers compete for reputation and have a financial incentive
 to provide useful information to avoid being blocked or ignored in an
 economy where information has negative value.


Great! At least that way, I know that the things I see will be good
Christian content.

 This is why it is important for an AGI protocol to provide for secure
 authentication.

  3.  Your paper addresses question answering, which as I have explained
 here in the past, is a much lower form of art than is problem solving, where
 you simply state an unsatisfactory situation and let the computer figure out
 why things are as they are and how to improve them.

 Problem solving pre-dates AGI by decades. We know how to solve problems in
 many narrow domains. The problem I address is finding the right experts.


Hmmm, an even higher form, but will it work? In my experience of solving a
few cases having supposedly incurable

Re: [agi] Seeking CYC critiques

2008-12-05 Thread Steve Richfield
Matt,

On 12/4/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Wed, 12/3/08, Steve Richfield [EMAIL PROTECTED] wrote:

  It appears obvious to me that the first person who proposes the
 following things together as a workable standard, will own the future
 'web. This because the world will enter the metadata faster than anyone is
 going to build a semantic web or anything like it without these items. In
 short, this is a sort of calculated retrograde step to get the goodies NOW
 and not sometime in the future.

 I disagree. Google has already figured out the semantic web. People
 communicate in natural language 100 to 1000 times faster than in any
 artificial language. Time is money.


We seem to have a disconnection here, which we each blame on the other. I'll
reiterate my point in different words to hopefully drill down into the
disconnection...

If only people included two absolutely critical pieces of metadata in their
postings, then you and Google would be absolutely right. Unfortunately, they
do NOT ever include this metadata, and it simply cannot be gleaned from the
postings themselves. As explained before, these include:
1.  The syntax of statements of differential symptomology - in short, what
people/machines/countries who/that have a particular condition typically say
to communicate a symptom that is DIFFERENT from what people say who have a
similar symptom of something quite different. This typically requires an
EXPERIENCED human expert to code.
2.  Carefully constructed questions to elicit statements meeting criteria #1
above.
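
A minimal sketch of what one such metadata record might look like if attached
to a posting (the field names here are purely illustrative, not the proposed
standard at dreliza.com/standards.php):

    posting_metadata = {
        "condition": "central hypothermia",
        "differential_symptom_syntax": "cold all the time, even in a warm room",
        "eliciting_question": "Are you cold only outdoors in winter, or all the time?",
    }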

No one has yet proposed ANY way of mining the Internet to engage in useful
problem solving without these two pieces of metadata, yet supposedly smart
people continue wasting their efforts and other people's money on such
folly.

 Perhaps in years to come, people can omit some/all of this metadata and
  future AI interfaces to the web will still work, but I simply see no reason
  to wait until then to smarten the 'web. Once the metadata is in place, any
  bright programmer can implement the Internet Singularity by simply
 populating his tables based on the metadata.

 What? According to http://www.dreliza.com/singularity.php the singularity
 already happened in 2001 when Steve Richfield had his intelligence greatly
 increased... :-)


It had to start somewhere. Only time will tell if this statement is correct,
so it is useless to argue this point right now. Whether or not I and my
efforts continue the thread that ultimately succeeds, I have little doubt
that whatever happens will include someone who has had their intelligence
greatly increased with my or similar methods. This certainly provides a MUCH
better jumping off point to singularities of all sorts than anyone else
has yet proposed. The passage you quoted from contains a detailed
explanation of how this works, and I'll gladly help anyone who wants to
follow this path through the process of collecting another ~20 IQ points in
just one difficult day (following ~2 weeks of preparation, and followed by
months of recovery). Note that this ONLY works for smart people (which
probably includes everyone on this forum) who now have low daytime body
temperatures, a condition known as central hypothermia (which probably
excludes 50% of the people here). The quick screening test is seeing if
your temperature stays below 98.2F, even in the afternoon when it should be
at its highest, usually reaching ~99F for healthy people. This will also
fix numerous minor health problems that you may already have.

BTW, for easier demos, http://DrEliza.com has new knowledge that should be
wrung out in the next few days, on how to save teeth that your dentist
says are hopeless and absolutely must be extracted. I had a lot of really
bad dentistry long ago, and have been struggling to save what little is left
ever since, and hence I have become quite expert in this domain. I still
have at least some part of every tooth left.

To illustrate, just yesterday, a third opinion yet again proclaimed my
#31 molar to be headed for the garbage can. I then presented my plan to
save ~half of it by extracting just the cracked, decayed, and unsalvageable
root, keeping the remainder which would then be too weak to function on its
own (and which might even break part of my jawbone if left as-is), and
installing a 2-unit bridge to an adjacent tooth that had its own problems
and was also weak, but to pressure in the opposite direction. Together,
these two teeth should make one very good tooth - a little like one of the
long molars in the back of a dog's mouth. Here, my methods were accepted and
the various procedures are now being scheduled, even after a PhD dentist, an
endodontist, and an oral surgeon (all board certified) had all
proclaimed salvaging #31 to be completely hopeless until they heard my plan.
The total cost will be ~half of that of an implant. #31 will become my 5th
tooth to survive after being advised that extraction was the only viable
option. Each

Re: [agi] Seeking CYC critiques

2008-12-05 Thread Steve Richfield
Matt,

 If your program can't handle natural language with all its ambiguities,
 then it isn't AGI.


Internet AGIs are the technology of the future, and always will be. There
will NEVER EVER in a million years be a thinking Internet silicon
intelligence that will be able to solve substantial real-world problems
based only on what exists on the Internet. I think that my prior email was
pretty much a closed-form proof of that. However, there are MUCH simpler
methods that work TODAY, given the metadata that is presently missing from
the Internet.


 No one has yet proposed ANY way of mining the Internet to engage in
 useful problem solving without these two pieces of metadata, yet supposedly
 smart people continue wasting their efforts and other people's money on such
 folly.

 My AGI proposal ( http://www.mattmahoney.net/agi2.html ) uses natural
 language to communicate between peers. A peer is only required to understand
 a small subset, perhaps scanning for a few keywords and ignoring everything
 else. Individually, peers don't need to be very smart for the collective to
 achieve AGI.


While VERY interesting, your proposal appears to leave the following
important questions unanswered:
1.  How is it an AGI? I suppose this is a matter of definitions. It looks to
me more like a protocol.
2.  As I explained earlier on this thread, all human-human languages have
severe semantic limitations, such that (applying this to your proposal)
only very rarely will there ever exist an answer that PRECISELY answers a
question, so some sort of acceptable error must go into the equation. In
the example you used in your paper, Jupiter is NOT the largest planet that
is known, as astronomers have identified larger planets in other solar
systems. There may be a good solution to this, e.g. provide the 3 best
answers that are semantically disjoint (see the sketch after this list).
3.  Your paper addresses question answering, which as I have explained here
in the past, is a much lower form of art than is problem solving, where you
simply state an unsatisfactory situation and let the computer figure out why
things are as they are and how to improve them. Note that
http://www.DrEliza.com makes no attempt to answer questions, or even work on
easy problems (because they are too damn hard for Dr. Eliza), but rather, it
confines itself to very difficult corners of chosen sub-domains. It may
never be able to match an average doctor, but can easily handle the fallout
from the best of them. It will never match an average dentist, but it can
save a lot of teeth that the best of them have given up on.
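
Regarding point 2, a toy sketch of picking the 3 best answers that are
semantically disjoint (here crudely approximated as sharing few content
words; the scoring and threshold are hypothetical):

    def disjoint(a, b, max_overlap=2):
        # Crude proxy for "semantically disjoint": few shared words.
        return len(set(a.lower().split()) & set(b.lower().split())) <= max_overlap

    ranked = [                        # assumed already ranked best-first upstream
        "Jupiter is the largest planet in our solar system",
        "Jupiter is the largest planet orbiting the Sun",
        "Several known exoplanets are larger than Jupiter",
        "The largest known planets are extrasolar gas giants",
    ]

    chosen = []
    for answer in ranked:
        if all(disjoint(answer, c) for c in chosen):
            chosen.append(answer)
        if len(chosen) == 3:
            break
    print(chosen)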

Of course, your approach COULD be easily extended to function in problem
solving, by simply providing a mechanism for users to attach the metadata
that I have mentioned is needed for problem solving. In short, I really like
your proposal for an alternative Internet protocol, as the Internet
obviously needs a bunch of them, because the present set is woefully
inadequate.

IMHO you should recast your proposal as an RFC and put it out there. It
sounds like you could easily utilize a USENET group for early demos. Note
that Microsoft maintains some test groups on some of its servers, that Dr.
Eliza already uses without problems for its inter-incarnation communication.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Seeking CYC critiques

2008-12-03 Thread Steve Richfield
Steve,

Based on your attached response, how about this alternative approach:

Send (one of) them an email pointing out
http://www.dreliza.com/standards.php which will obviously usurp their own
efforts if they fail to participate, and offer them an opportunity to
suggest amendments to these standards to incorporate (some of) their own
capabilities.

Seeing that Dr. Eliza's approach is quite different, they should then figure
out that their only choices are to join or die. I wonder how they would
respond? You know these guys. How would YOU play this hand?

Any thoughts?

Steve Richfield

On 12/2/08, Stephen Reed [EMAIL PROTECTED] wrote:

  Steve Richfield said:

 If I understand you correctly, Cycorp's code should be *public domain*,
 and as such, I should be able to simply mine for the features that I am
 looking for. It sounds like Cycorp doesn't have a useful product (yet)
 whereas it looks like I do, so it is probably I who should be doing this,
 not Cycorp.


 Regretfully, the KRAKEN source code is not public domain, despite the fact
 that US tax dollars paid for it.


 While at Cycorp, John DeOliveira and I lobbied for an open-source version
 of Cyc, that one of us dubbed OpenCyc.  Doug Lenat saw the advantages of
 releasing a limited form of Cyc technology, especially to preclude some
 other possible ontology from becoming the de facto standard ontology, e.g.
 for the Semantic Web.  However, Cycorp is bedeviled by its own traditional,
 proprietary nature and Lenat did not want to release the source code for the
 object store, lisp runtime, inference engine, applications and utilities.
 The first release of OpenCyc that I prepared contained many, but not all, of
 the full Cyc concept terms, and their defining assertions.  No rules, nor
 numerous other commonsense assertions about these concepts were released.
 The provided OpenCyc runtime was binary only, without source code, and with
 its HTML browser as its sole released application.  A Java API to Cyc, that
 I wrote, was also released with its source code under the Apache License.

 The KRAKEN application is  not provided with OpenCyc, and it was growing
 stale from lack of maintenance when I was let go from the company in August
 2006.

 -Steve

 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860


  --
 *From:* Steve Richfield [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Monday, December 1, 2008 10:22:37 PM
 *Subject:* Re: [agi] Seeking CYC critiques

 Steve,

 If I understand you correctly, Cycorp's code should be *public domain*,
 and as such, I should be able to simply mine for the features that I am
 looking for. It sounds like Cycorp doesn't have a useful product (yet)
 whereas it looks like I do, so it is probably I who should be doing this,
 not Cycorp.

 Any thoughts?

 Who should I ask for code from?

 Steve Richfield
 ==
 On 12/1/08, Stephen Reed [EMAIL PROTECTED] wrote:

  Steve Richfield said:
 KRAKEN contains lots of good ideas, several of which were already on my
 wish list for Dr. Eliza sometime in the future. I suspect that a merger of
 technologies might be a world-beater.

 I wonder if the folks at Cycorp would be interested in such an effort?

 If you can find a sponsor for the effort and then solicit Cycorp to join
 in collaboration, I believe that they would be interested.  The Cycorp
 business model as I knew it back in 2006, depended mostly upon government
 research sponsorship to (1) accomplish the research that the sponsor wanted,
 e.g. produce deliverables for the DARPA Rapid Knowledge Formation project,
 and (2) incrementally add more facts and rules to the Cyc KB, write more
 supporting code for Cyc.  Cycorp, did not then, and likely even now does not
 have internal funding for non-sponsored enhancements.

 -Steve


 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860


  --
 *From:* Steve Richfield [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Monday, December 1, 2008 3:19:37 PM
 *Subject:* Re: [agi] Seeking CYC critiques

 Steve,

 The KRAKEN paper was quite interesting, and has a LOT in common with my
 own Dr. Eliza. However, I saw no mention of Dr. Eliza's secret sauce, that
 boosts it from answering questions to solving problems given symptoms. The
 secret sauce has two primary ingredients:
 1.  The syntax of differential symptom statements - how people state a
 symptom that separates it from similar symptoms of other conditions.
 2.  Questions, the answers to which will probably carry #1 above
 recognizable differential symptom statements.
 Both of the above seem to require domain *experienced* people to code, as
 book learning doesn't seem to convey what people typically say, or what you
 have to say to them

Re: [agi] Seeking CYC critiques

2008-12-03 Thread Steve Richfield
will be plainly resolved for all to see.

Thanks very much for your independent view of this. Any more thoughts?

Steve Richfield
===


 On Wed, Dec 3, 2008 at 3:55 PM, Steve Richfield
 [EMAIL PROTECTED] wrote:
  Steve,
 
  Based on your attached response, How about this alternative approach:
 
  Send (one of) them an email pointing out
  http://www.dreliza.com/standards.php which will obviously usurp their
 own
  efforts if they fail to participate, and offer them an opportunity to
  suggest amendments these standards to incorporate (some of) their own
  capabilities.
 
  Seeing that Dr. Eliza's approach is quite different, they should then
 figure
  out that their only choices are to join or die. I wonder how they would
  respond? You know these guys. How would YOU play this hand?
 
  Any thoughts?
 
  Steve Richfield
  
  On 12/2/08, Stephen Reed [EMAIL PROTECTED] wrote:
 
  Steve Richfield said:
 
  If I understand you correctly, Cycorp's code should be public domain,
 and
  as such, I should be able to simply mine for the features that I am
 looking
  for. It sounds like Cycorp doesn't have a useful product (yet) whereas
 it
  looks like I do, so it is probably I who should be doing this, not
 Cycorp.
 
 
  Regretfully, the KRAKEN source code is not public domain, despite the
 fact
  that US tax dollars paid for it.
 
 
  While at Cycorp, John DeOliveira and I lobbied for an open-source
 version
  of Cyc, that one of us dubbed OpenCyc.  Doug Lenat saw the advantages
 of
  releasing a limited form of Cyc technology, especially to preclude some
  other possible ontology from becoming the de facto standard ontology,
 e.g.
  for the Semantic Web.  However, Cycorp is bedeviled by its own
 traditional,
  proprietary nature and Lenat did not want to release the source code for
 the
  object store, lisp runtime, inference engine, applications and
 utilities.
  The first release of OpenCyc that I prepared contained many, but not
 all, of
  the full Cyc concept terms, and their defining assertions.  No rules,
 nor
  numerous other commonsense assertions about these concepts were
 released.
  The provided OpenCyc runtime was binary only, without source code, and
 with
  its HTML browser as its sole released application.  A Java API to Cyc,
 that
  I wrote, was also released with its source code under the Apache
 License.
 
  The KRAKEN application is  not provided with OpenCyc, and it was growing
  stale from lack of maintenance when I was let go from the company in
 August
  2006.
 
  -Steve
 
  Stephen L. Reed
 
  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860
 
  
  From: Steve Richfield [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Monday, December 1, 2008 10:22:37 PM
  Subject: Re: [agi] Seeking CYC critiques
 
  Steve,
 
  If I understand you correctly, Cycorp's code should be public domain,
 and
  as such, I should be able to simply mine for the features that I am
 looking
  for. It sounds like Cycorp doesn't have a useful product (yet) whereas
 it
  looks like I do, so it is probably I who should be doing this, not
 Cycorp.
 
  Any thoughts?
 
  Who should I ask for code from?
 
  Steve Richfield
  ==
  On 12/1/08, Stephen Reed [EMAIL PROTECTED] wrote:
 
  Steve Richfield said:
  KRAKEN contains lots of good ideas, several of which were already on my
  wish list for Dr. Eliza sometime in the future. I suspect that a merger
 of
  technologies might be a world-beater.
 
  I wonder if the folks at Cycorp would be interested in such an effort?
  If you can find a sponsor for the effort and then solicit Cycorp to
 join
  in collaboration, I believe that they would be interested.  The Cycorp
  business model as I knew it back in 2006, depended mostly upon
 government
  research sponsorship to (1) accomplish the research that the sponsor
 wanted,
  e.g. produce deliverables for the DARPA Rapid Knowledge Formation
 project,
  and (2) incrementally add more facts and rules to the Cyc KB, write
 more
  supporting code for Cyc.  Cycorp, did not then, and likely even now
 does not
  have internal funding for non-sponsored enhancements.
 
  -Steve
 
 
  Stephen L. Reed
 
  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860
 
  
  From: Steve Richfield [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Monday, December 1, 2008 3:19:37 PM
  Subject: Re: [agi] Seeking CYC critiques
 
  Steve,
 
  The KRAKEN paper was quite interesting, and has a LOT in common with my
  own Dr. Eliza. However, I saw no mention of Dr. Eliza's secret sauce,
 that
  boosts it from answering questions to solving problems given symptoms.
 The
  secret sauce has two primary ingredients:
  1.  The syntax of differential symptom statements - how people state

Re: [agi] Seeking CYC critiques

2008-12-02 Thread Steve Richfield
Christopher,

On 12/2/08, Christopher Carr [EMAIL PROTECTED] wrote:

 Long time lurker here.

 If I understand you, Steve, you are saying (among other things) that
 English is less polysemous and pragmatically less complicated than, say,
 Russian.


Certainly not. I am saying that with enough words, long and complex
sentences, etc., you can more accurately convey more things than in
Russian. That comes with a LOT of semantic ambiguity, multiple levels of
overloading, etc.

Everyone has been concentrating on disambiguation, which is a challenge, but
even with perfect disambiguation there are STILL some severe limits to all
languages.

Is English your L1?


Yes.

Do you speak Russian?


2 years in HS + 2 quarters in college. Enough for greetings and to find the
men's room, but not much more, though I do have a good Moscow accent.
However, that IS enough to get more attention from Russian speakers than
other conference attendees receive.

If English is indeed your first language, it is perhaps not surprising that
 English seems more semantically precise or straightforward, as -- short of
 being a trained linguist -- you would have less meta-awareness of its
 nuances. It's not as if the Arabic and Russian examples you provide have no
 English analogs.


My comments come more from competent human translators than from my own
personal experiences.

An interesting thing about Russian is how the language has shifted since the
fall of the Soviet Union. They have several words that all map to the
English you, which is easy for Russian-to-English translation, but hard for
English-to-Russian. The Communist Party adopted tui, the closest form
typically spoken between family members, to refer to other Communist Party
members. Then, with the fall of the Soviet Union, tui, with its communist
overloading, is now seldom used. The synonyms of you are a good example of
a Russian richness not shared by English. However, Russian has no indefinite
relationship form of you, the only form that English has, in part because
their written language is a relatively recent invention. Of course, if the
relationship were an adjective, it could be omitted. Interestingly, English
doesn't even have these adjectives in its lexicon, which makes some BIG gaps
in the representable continuum.

Steve Richfield
=

 Steve Richfield wrote:

 Mike,

 On 12/1/08, *Mike Tintner* [EMAIL PROTECTED] wrote:

I wonder whether you'd like to outline an additional list of
English/language's shortcomings here. I've just been reading
Gary Marcus' Kluge - he has a whole chapter on language's
shortcomings, and it would be v. interesting to compare and analyse.

  The real world is a wonderful limitless-dimensioned continuum of
 interrelated happenings. We have but a limited window to this, and have an
 even more limited assortment of words that have very specific meanings.
 Languages like Arabic vary pronunciation or spelling to convey additional
 shades of meaning, and languages like Chinese convey meaning via joined
 concepts. These may help, but they do not remove the underlying problem.
 This is like throwing pebbles onto a map and ONLY being able to communicate
 which pebble is closest to the intended location. Further, many words have
 multiple meanings, which is like only being able to specify certain disjoint
 multiples of pebbles, leaving it to AI to take a WAG (Wild Ass Guess) which
 one was intended.
  This becomes glaring obvious in language translation. I learned this
 stuff from people on the Russian national language translator project. Words
 in these two languages have very different shades of meaning, so that in
 general, a sentence in one language can NOT be translated to the other
 language with perfect accuracy, simply because the other language lacks
 words with the same shading. This is complicated by the fact that the
 original author may NOT have intended all of the shades of meaning, but was
 stuck with the words in the dictionary.
  For example, a man saying sit down in Russian to a woman, is conveying
 something like an order (and not a request) to sit down, shut up, and don't
 move. To remove that overloading, he might say please sit down in
 Russian. Then, it all comes down to just how he pronounces the please as
 to what he REALLY means, but of course, this is all lost in print. So, just
 how do you translate please sit down so as not to miss the entire meaning?
  One of my favorite pronunciation examples is excuse me.
  In Russian, it is approximately eezveneetsya minya and is typically
 spoken with flourish to emphasize apology.
  In Arabic, it is approximately afwan without emphasis on either
 syllable, and is typically spoken curtly, as if to say yea, I know I'm an
 idiot. It is really hard to pronounce these two syllables without emphases,
 but with flourish.
  There is much societal casting of meaning to common concepts.
  The underlying issue here is the very concept of translation

Re: [agi] Seeking CYC critiques

2008-12-01 Thread Steve Richfield
Steve,

The KRAKEN paper was quite interesting, and has a LOT in common with my own
Dr. Eliza. However, I saw no mention of Dr. Eliza's secret sauce, that
boosts it from answering questions to solving problems given symptoms. The
secret sauce has two primary ingredients:
1.  The syntax of differential symptom statements - how people state a
symptom that separates it from similar symptoms of other conditions.
2.  Questions, the answers to which will probably carry #1 above
recognizable differential symptom statements.
Both of the above seem to require domain *experienced* people to code, as
book learning doesn't seem to convey what people typically say, or what you
have to say to them to get them to state their symptom in a differential
way. Also, I suspect that knowledge coded today wouldn't work well in 50
years, when common speech has shifted.
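
A minimal sketch of ingredient #1 - matching a differential symptom statement
against expert-supplied phrasings (the patterns and conditions are
hypothetical stand-ins for what an experienced person would actually encode):

    import re

    patterns = {
        "central hypothermia": re.compile(r"cold (all|most of) the time", re.I),
        "ordinary chill":      re.compile(r"cold (when|whenever) .*(outside|winter)", re.I),
    }

    statement = "I feel cold all the time, even in a warm room."
    print([cond for cond, pat in patterns.items() if pat.search(statement)])
    # -> ['central hypothermia']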

I finally gave up on having Dr. Eliza answer questions, because the round
trip error rate seemed to be inescapably high. This is the product of:

1.  The user's flaws in their world model.
2.  The user's flaws in formulating their question.
3.  The computer's errors in parsing the question.
4.  The computer's errors in formulating an answer.
5.  The user's errors in understanding the answer.
6.  The user's errors from filing the answer into a flawed world model.

Between each of these is:

x.5  English's shortcomings in providing a platform to accurately state the
knowledge, question, or answer.

While each of these could be kept to 5%, it seemed completely hopeless to
reduce the overall error rate to a level low enough to actually make it good
for anything useful. Of course, everyone on this forum concentrates on #3 above,
when in the real world, this is often/usually swamped by the others. Hence,
I am VERY curious. Has KRAKEN found a worthwhile/paying niche in the world
with its question answering, where people actually use it to their benefit?
If so, then how did they deal with the round trip error rate?
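
To make the compounding concrete (a rough sketch: six stages at 5% error each,
ignoring the extra "x.5" language losses between them):

    p_stage = 0.95                 # each of the six steps kept to a 5% error rate
    p_round_trip = p_stage ** 6
    print(round(p_round_trip, 2))  # 0.74 - roughly one exchange in four goes wrong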

KRAKEN contains lots of good ideas, several of which were already on my wish
list for Dr. Eliza sometime in the future. I suspect that a merger of
technologies might be a world-beater.

I wonder if the folks at Cycorp would be interested in such an effort?

BTW, http://www.DrEliza.com is up and down these days, with plans for a new
and more reliable version to be installed next weekend.

Any thoughts?

Steve Richfield
==
On 11/29/08, Stephen Reed [EMAIL PROTECTED] wrote:

  Hi Robin,
 There are no Cyc critiques that I know of in the last few years.  I was
 employed seven years at Cycorp until August 2006 and my non-compete
 agreement expired a year later.

 An interesting competition was held by Project 
 Halo (http://www.projecthalo.com/halotempl.asp?cid=30), in which Cycorp
 participated along with two other research groups to
 demonstrate human-level competency answering chemistry questions.  Results
 are 
 here: http://www.projecthalo.com/content/docs/ontologies_in_chemistry_ISWC2.pdf.
 Although Cycorp performed principled deductive inference giving detailed
 justifications, it was judged to have performed inferior due to the
 complexity of its justifications and due to its long running times.  The
 other competitors used special purpose problem solving modules whereas
 Cycorp used its general purpose inference engine, extended for chemistry
 equations as needed.

 My own interest is in natural language dialog systems for rapid knowledge
 formation.  I was Cycorp's first project manager for its participation in
 the the DARPA Rapid Knowledge Formation project where it performed to
 DARPA's satisfaction, but subsequently its RKF tools never lived up to
 Cycorp's expectations that subject matter experts could rapidly extend the
 Cyc KB without Cycorp ontological engineers having to intervene.  A Cycorp
 paper describing its KRAKEN system is 
 herehttp://www.google.com/url?sa=tsource=webct=rescd=1url=http%3A%2F%2Fwww.cyc.com%2Fdoc%2Fwhite_papers%2Fiaai.pdfei=IDgySdKoIJzENMzqpJcLusg=AFQjCNG1VlgQxAKERyiHj4CmPohVeZxRywsig2=o50LFe4D6TRC3VwC7ZNPxw
 .

 I would be glad to answer questions about Cycorp and Cyc technology to the
 best of my knowledge, which is growing somewhat stale at this point.

 Cheers.
 -Steve


 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860


  --
 *From:* Robin Hanson [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Saturday, November 29, 2008 9:46:09 PM
 *Subject:* [agi] Seeking CYC critiques

 What are the best available critiques of CYC as it exists now (vs. soon
 after project started)?

 Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu
 Research Associate, Future of Humanity Institute at Oxford University
 Associate Professor of Economics, George Mason University
 MSN 1D3, Carow Hall, Fairfax VA 22030-
 703-993-2326  FAX: 703-993-2323

  --
   *agi

Re: [agi] Seeking CYC critiques

2008-12-01 Thread Steve Richfield
Mike,

On 12/1/08, Mike Tintner [EMAIL PROTECTED] wrote:

  I wonder whether you'd like to outline an additional list of
 English/language's shortcomings here. I've just been reading Gary Marcus'
 Kluge - he has a whole chapter on language's shortcomings, and it would be
 v. interesting to compare and analyse.


The real world is a wonderful limitless-dimensioned continuum of
interrelated happenings. We have but a limited window to this, and have an
even more limited assortment of words that have very specific meanings.
Languages like Arabic vary pronunciation or spelling to convey additional
shades of meaning, and languages like Chinese convey meaning via joined
concepts. These may help, but they do not remove the underlying problem.
This is like throwing pebbles onto a map and ONLY being able to communicate
which pebble is closest to the intended location. Further, many words have
multiple meanings, which is like only being able to specify certain disjoint
multiples of pebbles, leaving it to AI to take a WAG (Wild Ass Guess) which
one was intended.
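
To make the pebble picture concrete, here is a minimal sketch in Python,
assuming a toy two-dimensional meaning space and invented word coordinates;
the only point is that a finite vocabulary quantizes a continuous space, so
every utterance carries a residual error:

    # Toy quantization of a continuous "meaning space" by a finite vocabulary.
    # The word coordinates below are invented purely for illustration.
    from math import dist

    WORDS = {"tree": (0.10, 0.90), "bush": (0.30, 0.70), "forest": (0.80, 0.90)}

    def nearest_word(meaning):
        """Return the closest 'pebble' to the intended meaning, plus the error."""
        word = min(WORDS, key=lambda w: dist(WORDS[w], meaning))
        return word, dist(WORDS[word], meaning)

    intended = (0.25, 0.80)          # the continuous thing the speaker actually means
    print(nearest_word(intended))    # ('bush', 0.11...) - close, but never exact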

This becomes glaringly obvious in language translation. I learned this stuff
from people on the Russian national language translator project. Words in
these two languages have very different shades of meaning, so that in
general, a sentence in one language can NOT be translated to the other
language with perfect accuracy, simply because the other language lacks
words with the same shading. This is complicated by the fact that the
original author may NOT have intended all of the shades of meaning, but was
stuck with the words in the dictionary.

For example, a man saying sit down in Russian to a woman, is conveying
something like an order (and not a request) to sit down, shut up, and don't
move. To remove that overloading, he might say please sit down in
Russian. Then, it all comes down to just how he pronounces the please as
to what he REALLY means, but of course, this is all lost in print. So, just
how do you translate please sit down so as not to miss the entire meaning?

One of my favorite pronunciation examples is excuse me.

In Russian, it is approximately eezveneetsya minya and is typically spoken
with flourish to emphasize apology.

In Arabic, it is approximately afwan without emphasis on either syllable,
and is typically spoken curtly, as if to say yea, I know I'm an idiot. It
is really hard to pronounce these two syllables without emphases, but with
flourish.

There is much societal casting of meaning to common concepts.

The underlying issue here is the very concept of translation, be it into a
human language or into a table form in an AI engine. Really good translations
have more footnotes than translated text, where these shades of meaning are
explained, yet modern translation programs produce no footnotes, which
pretty much consigns them to the trash translation pile, even with perfect
disambiguation, which of course is impossible. Even the AI engines, that can
carry these subtle overloadings, are unable to determine what nearby meaning
the author actually intended.

Hence, no finite language can convey specific meanings from within a
limitlessly-dimensional continuum of potential meanings. English does better
than most other languages, but it is still apparently not good enough even
for automated question answering, which was my original point. Everywhere
semantic meaning is touched upon, both within the wetware and within
software, additional errors are introduced. This makes many answers
worthless and all answers suspect, even before they are formed in the mind
of the machine.

Have I answered your question?

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Seeking CYC critiques

2008-12-01 Thread Steve Richfield
Mike,

More than multiplicity is the issue of discrete-point semantics vs.
continuous real-world possibilities. Multiplicity could potentially be
addressed by requiring users to put (clarifications) following unclear words
(e.g. in response to diagnostic messages to clarify input). Dr. Eliza
already does some of this, e.g. when it encounters If ... then ... it
complains that it just wants to know the facts, and NOT how you think the
world works. However, such approaches are unable to address the discrete vs.
continuous issue, because every clarifying word has its own fuzziness, you
don't know what the user's world model (and hence its discrete points) is,
etc.
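
As a rough illustration of this kind of input policing (a hypothetical sketch
in Python, not Dr. Eliza's actual code), the diagnostics might look like:

    import re

    def diagnose(statement: str) -> list:
        """Hypothetical diagnostics prompting the user for (clarifications)."""
        problems = []
        # Complain about "If ... then ..." - we want facts, not world models.
        if re.search(r"\bif\b.+\bthen\b", statement, re.IGNORECASE):
            problems.append("State the facts only, not how you think the world works.")
        # Suggest parenthesized clarifications when none are present.
        if "(" not in statement:
            problems.append("Consider a (clarification) after any ambiguous word.")
        return problems

    print(diagnose("If I skip lunch then I get headaches"))
    print(diagnose("I get afternoon headaches (migraine) after sugar (sucrose)"))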

Being somewhat of an Islamic scholar (a skill needed to escape after being
sold into servitude in 1994), I am sometimes asked to clarify really simple-sounding
concepts like agent of Satan. The problem is that many people from our
culture simply have no place in their mental filing system for this
information, without which, it is simply not possible to understand things
like the present Middle East situation. Here, the discrete points that are
addressable by their world-model are VERY far apart.

For those of you who do understand agent of Satan, this very mental
incapacity MAKES them agents of Satan. This is related to a passage in the
Qur'an that states that most of the evil done in the world is done by people
who think that they are doing good. Sounds like George Bush, doesn't it? In
short, not only is the definition circular, but so is the reality. Here
is one of those rare cases where common shortcomings in world models
actually have common expressions referring to them. Too bad that these
expressions come from other cultures, as we could sure use a few of them.

Anyway, I would dismiss the multiplicity viewpoint, not because it is
wrong, but because it guides people into disambiguation, which is ultimately
unworkable. Once you understand that the world is a continuous domain, but
that language is NOT continuous, you will realize the hopelessness of such
efforts, as every question and every answer is in ERROR, unless by some
wild stroke of luck, it is possible to say EXACTLY what is meant.

As an interesting aside Bayesian programs tend (89%) to state their
confidence, which overcomes some (13%) of such problems.

Steve Richfield
=
On 12/1/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve,

 Thanks. I was just looking for a systematic, v basic analysis of the
 problems language poses for any program, which I guess mainly come down to
 multiplicity -

 multiple
 -word meanings
 -word pronunciations
 -word spellings
 -word endings
 -word fonts
 -word/letter layout/design
 -languages [mixed discourse]
 -accents
 -dialects
 -sentence constructions

 to include new and novel
 -words
 -pronunciations
 -spellings
 -endings
 -layout/design
 -languages
 -accents
 -dialects
 -sentence constructions

 -all of which are *advantages* for a GI as opposed to a narrow AI.  The
 latter wants the right meaning, the former wants many meanings - enables
 flexibility and creativity of explanation and association.

 Have I left anything out?

 Steve: MT::

  I wonder whether you'd like to outline an additional list of
 English/language's shortcomings here. I've just been reading Gary Marcus'
 Kluge - he has a whole chapter on language's shortcomings, and it would be
 v. interesting to compare and analyse.


 The real world is a wonderful limitless-dimensioned continuum of
 interrelated happenings. We have but a limited window to this, and have an
 even more limited assortment of words that have very specific meanings.
 Languages like Arabic vary pronunciation or spelling to convey additional
 shades of meaning, and languages like Chinese convey meaning via joined
 concepts. These may help, but they do not remove the underlying problem.
 This is like throwing pebbles onto a map and ONLY being able to communicate
 which pebble is closest to the intended location. Further, many words have
 multiple meanings, which is like only being able to specify certain disjoint
 multiples of pebbles, leaving it to AI to take a WAG (Wild Ass Guess) which
 one was intended.

 This becomes glaringly obvious in language translation. I learned this stuff
 from people on the Russian national language translator project. Words in
 these two languages have very different shades of meaning, so that in
 general, a sentence in one language can NOT be translated to the other
 language with perfect accuracy, simply because the other language lacks
 words with the same shading. This is complicated by the fact that the
 original author may NOT have intended all of the shades of meaning, but was
 stuck with the words in the dictionary.

 For example, a man saying sit down in Russian to a woman, is conveying
 something like an order (and not a request) to sit down, shut up, and don't
 move. To remove that overloading, he might say please sit down in
 Russian. Then, it all comes down to just

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Steve Richfield
Jim,

YES - and I think I have another piece of your puzzle to consider...

A longtime friend of mine, Dave,  went on to become a PhD psychologist, who
subsequently took me on as a sort of project - to figure out why most
people who met me then either greatly valued my friendship, or quite the
opposite, would probably kill me if they had the safe opportunity. After
much discussion, interviewing people in both camps, etc., he came up with
what appears to be a key to decision making in general...

It appears that people pigeonhole other people, concepts, situations,
etc., into a very finite number of pigeonholes - probably just tens of
pigeonholes for other people. Along with the pigeonhole, they keep
amendments, like Steve is like Joe, but with 

Then, there is the pigeonhole labeled other that all the mavericks are
thrown into. Not being at all like anyone else that most people have ever
met, I was invariably filed into the other pigeonhole, along with
Einstein, Ted Bundy, Jack the Ripper, Stephen Hawking, etc.

People are safe to the extent that they are predictable, and people in the
other pigeonhole got that way because they appear to NOT be predictable,
e.g. because of their worldview, etc. Now, does the potential value of the
alternative worldview outweigh the potential danger of perceived
unpredictability? The answer to this question apparently drove my own
personal classification in other people.

Dave's goal was to devise a way to stop making enemies, but unfortunately,
this model of how people got that way suggested no potential solution.
People who keep themselves safe from others having radically different
worldviews are truly in a mental prison of their own making, and there is no
way that someone whom they distrust could ever release them from that
prison.

I suspect that recognition, decision making, and all sorts of intelligent
processes may be proceeding in much the same way. There may be no
grandmother neuron/pigeonhole, but rather a kindly old person with an
amendment that is related. If on the other hand your other grandmother
flogged you as a child, the filing might be quite different.
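
For what it is worth, a minimal sketch of this prototype-plus-amendment
filing, with class names and traits invented purely for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Pigeonhole:
        """A stored prototype, e.g. 'Joe', 'kindly old person', or 'other'."""
        label: str
        traits: dict = field(default_factory=dict)

    @dataclass
    class Filing:
        """A person filed as 'like <prototype>, but with <amendments>'."""
        prototype: Pigeonhole
        amendments: dict = field(default_factory=dict)

        def trait(self, name):
            # An amendment overrides the prototype's default for that trait.
            return self.amendments.get(name, self.prototype.traits.get(name))

    joe = Pigeonhole("Joe", {"predictable": True, "worldview": "conventional"})
    steve = Filing(joe, {"worldview": "maverick"})
    print(steve.trait("predictable"), steve.trait("worldview"))   # True maverick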

Any thoughts?

Steve Richfield

On 11/29/08, Jim Bromer [EMAIL PROTECTED] wrote:

 One of the problems that comes with the casual use of analytical
 methods is that the user becomes inured to their habitual misuse. When
 a casual familiarity is combined with a habitual ignorance of the
 consequences of a misuse the user can become over-confident or
 unwisely dismissive of criticism regardless of how on the mark it
 might be.

 The most proper use of statistical and probabilistic methods is to
 base results on a strong association with the data that they were
 derived from.  The problem is that the AI community cannot afford this
 strong a connection to original source because they are trying to
 emulate the mind in some way and it is not reasonable to assume that
 the mind is capable of storing all data that it has used to derive
 insight.

 This is a problem any AI method has to deal with, it is not just a
 probability thing.  What is wrong with the AI-probability group
 mind-set is that very few of its proponents ever consider the problem
 of statistical ambiguity and its obvious consequences.

 All AI programmers have to consider the problem.  Most theories about
 the mind posit the use of similar experiences to build up theories
 about the world (or to derive methods to deal effectively with the
 world).  So even though the methods to deal with the data environment
 are detached from the original sources of those methods, they can
 still be reconnected by the examination of similar experiences that
 may subsequently occur.

 But still it is important to be able to recognize the significance and
 necessity of doing this from time to time.  It is important to be able
 to reevaluate parts of your theories about things.  We are not just
 making little modifications from our internal theories about things
 when we react to ongoing events, we must be making some sort of
 reevaluation of our insights about the kind of thing that we are
 dealing with as well.

 I realize now that most people in these groups probably do not
 understand where I am coming from because their idea of AI programming
 is based on a model of programming that is flat.  You have the program
 at one level and the possible reactions to the data that is input as
 the values of the program variables are carefully constrained by that
 level.  You can imagine a more complex model of programming by
 appreciating the possibility that the program can react to IO data by
 rearranging subprograms to make new kinds of programs.  Although a
 subtle argument can be made that any program that conditionally reacts
 to input data is rearranging the execution of its subprograms, the
 explicit recognition by the programmer that this is useful tool in
 advanced programming is probably highly correlated with its more
 effective use

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Steve Richfield
Bringing this back to the earlier discussion, what could be happening (not
to say that it is provably happening, but there is certainly no evidence
that I know of against it) is the following, with probabilities
represented internally by voltages proportional to the logarithm of
the probability; the external representation varies, e.g. spike rate for
spiking neurons.

Dendritic trees could be a sort of Bayesian AND, and the neurons themselves
could be a sort of Bayesian OR of the dendrites. If each dendrite were
completely unrelated to the others, e.g. one computed some aspect of tree,
another some aspect of sweet, another some aspect of angry, etc., then
the dendrites on other neurons could easily assemble whatever they needed,
with lots of other extraneous things OR'd onto the inputs. This sounds like
a mess, but it works. Consider: Any one individual thing only occurs rarely.
If not, it will be differentiated until it is rare. Additive noise on the
inputs of a Bayesian AND only affects the output when ALL of the other
inputs are non-zero. When these two rare events happen simultaneously,
whatever the dendrite is looking for and another event that adds to one of
its inputs, the output will be slightly increased. How slight? It appears
that CNS (Central Nervous System) neurons have ~50K synapses, of which ~200
have efficacies > 0 at any one time. Hence, noise might contribute ~1% to the
output - too little to be concerned much about.
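
A minimal numerical sketch of the log-domain arithmetic being proposed,
assuming independent inputs for the dendritic AND and roughly disjoint
dendrites for the somatic OR; the function names and numbers are mine, not
anything measured:

    import math

    def log_p(p):
        """A probability as a log-domain 'voltage', per the proposal above."""
        return math.log(p) if p > 0 else float("-inf")

    def dendrite_and(log_inputs):
        """Bayesian AND of (assumed independent) inputs: log-probabilities add."""
        return sum(log_inputs)

    def neuron_or(dendrite_logs):
        """Approximate Bayesian OR over (assumed nearly disjoint) dendrites:
        leave the log domain, add the probabilities, and return to logs."""
        return math.log(sum(math.exp(v) for v in dendrite_logs))

    # One dendrite looks for "tree AND sweet"; another computes something unrelated.
    d1 = dendrite_and([log_p(0.9), log_p(0.8)])
    d2 = dendrite_and([log_p(0.05), log_p(0.1)])
    print(math.exp(neuron_or([d1, d2])))   # ~0.725, dominated by the relevant dendrite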

Why evolve such a convoluted system? Because cells are MUCH more expensive
than dendrites or synapses. By having a cell handle aspects of many
unrelated things while other cells are doing the same, and ANDing them as
needed, the cell count is minimized. Also, such systems are impervious to
minor damage, cells dying, etc.

Certainly, having a tree cell would only help if there were SO many uses
of exactly the same meaning of tree that it would be efficient to do all of
the ANDing in one place. However, a cell doing this could also do the same
for other unrelated things at the same time, bringing us back to the theory.
Hence, until I hear something to deny this theory, I am presuming it to be
correct.

OK, so why isn't this well known? Consider:
1.  The standards for publication of laboratory results are MUCH tighter
than in other areas. If they don't have proof, then they don't publish.
Hence, if you don't know someone who knows about CNS dendrites, you won't
even have anything to think about.
2.  As Loosemore pointed out, the guys in the lab do NOT have
skills applicable to the cognitive, mathematical, or other key areas in which
the very cells they are studying are functioning.

Flashback: I had finally tracked down an important article about observed
synaptic transfer functions and its author in person. Also present was
William Calvin, the neuroscience author who formerly had a laboratory at the
U of Washington. Looking over the functions in the article, I started to
comment on what they might be doing mathematically, whereupon the author
interjected that they had already found functions that fit very closely that
they had used as a sort of spline, which weren't anything at all like the
functions I was looking for. I noted that it appeared to me that both
functions produced almost identical results over the observed range, but
mine was derived from mathematical necessity while the ones the author used
as a spline just happened to fit well. The author then asked why even bother
looking for another function that fits after you already have one. At that
point, in exasperation, Calvin took up my side of the discussion, and after
maybe 15 minutes of discussion with the author while I sat quietly and
watched, the author FINALLY understood that these neurons do something in
the real world, and if you have a theory about what that might be, then you
must look at the difference between predicted and actual results to
confirm/deny that theory. Later when I computer-generated points to compare
with the laboratory results, they were spot-on to within measurement
accuracy.

Anyway, this seems to be a good working theory for how our wet engine works,
but it doesn't seem to provide much to help Ben, because inside a computer,
public variables don't cost thousands of times as much as a binary operator;
instead, they are actually cheaper. Hence, there is no reason to combine
unrelated things into what is equivalent to a public variable.

However, this all suggests that attention should be concentrated on
adjectives rather than nouns, adverbs instead of verbs, etc. I noticed this
when hand coding rules for Dr. Eliza - that the modifiers seemed to be much
more important than the referents.

Maybe this hint from wetware will help someone.

Steve Richfield
=
On 11/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 And we don't yet know whether the assembly keeps reconfiguring its
 reprsentation for conceptual knowledge ... though we know it's mainly

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-20 Thread Steve Richfield
: How do we and others identify
our invalid underlying assumptions to reach a dialectical synthesis?

This is EXACTLY what Ben's list is all about. We now finally appear to be
on the same page.

Here is a new one for Ben's list:

*There has been a presumption in some quarters that consciousness is
single-valued, when this is clearly not the case. Different people and
animals clearly have very different things happening behind their eyeballs.
Given different things to compare, obviously, contradictory conclusions
about consciousness are easy to reach. For example, people who grow up with
impairments, e.g. from low daytime body temperature (central hypothermia)
can become bright/brilliant because they learn to use what they still have
very efficiently (e.g. Loosemore). Then, when the impairment is removed,
they often become off-scale super-human smart. This clearly shows that
something quite different is happening behind their eyeballs, and that the
impairment is not necessary to sustain that difference, though it may be
needed to create that difference.*

X1 appears to be on its face wrong, as it appears to state that the word
consciousness has no physical referent, which is plainly false because we
can all point to something/someone having it, whatever it might be, real or
not.
X2 is a clear belief in magic, which once understood, is no longer magic.
Hence, X2 appears to be an oxymoron.
X3 and X4 do not appear to be mutually contradictory, though they may both
be wrong. Perhaps restatement is necessary to differentiate them.

We discussed a prospective theory of everything in July/August that I
think points to an X5 that is a sort of refined X4.

I suspect that when X5 is finally stated in undeniable terms, that this and
many other disputes here and elsewhere will quickly evaporate in a sort of
dialectical synthesis of AGI positions. The problem here is that everyone's
positions here are based on a presumption that an AGI can be constructed *
without* that theory of everything being in hand. I think that we have an
RRA proof here that this is NOT possible. Nonetheless, it IS interesting to
be a fly on the wall and watch people try.

Steve Richfield

On Wed, Nov 19, 2008 at 1:26 PM, Steve Richfield
[EMAIL PROTECTED] wrote:


   Ben:

 On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 This sounds an awful lot like the Hegelian dialectical method...


 Your point being?

We are all stuck in Hegel's Hell whether we like it or not. Reverse
 Reductio ad Absurdum is just a tool to help guide us through it.

 There seems to be a human tendency to say that something sounds an awful
 lot like (something bad) to dismiss it, but the crucial thing is often the
 details rather than the broad strokes. For example, the Communist Manifesto
 detailed the coming fall of Capitalism, which we may now be seeing in the
 current financial crisis. Sure, the solution proved to be worse than the
 problem, but that doesn't mean that the identification of the problems was
 in error.

 From what I can see, ~100% of the (mis?)perceived threat from AGI comes
 from a lack of understanding of RRAA (Reverse Reductio ad Absurdum), both by
 those working in AGI and those by the rest of the world. This clearly has
 the potential of affecting your own future success, so it is probably worth
 the extra 10 minutes or so to dig down to the very bottom of it, understand
 it, discuss it, and then take your reasoned position regarding it. After
 all, your coming super-intelligent AGI will probably have to master RRAA to
 be able to resolve intractable disputes, so you will have to be on top of
 RRAA if you are to have any chance of debugging your AGI.

 Steve Richfield
 ==

  On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield 
 [EMAIL PROTECTED] wrote:

 Martin,

 On 11/18/08, martin biehl [EMAIL PROTECTED] wrote:

 I don't know what reverse reductio ad absurdum is, so it may not be a
 precise counterexample, but I think you get my point.


 HERE is the crux of my argument, as other forms of logic fall short of
 being adequate to run a world with. Reverse Reductio ad Absurdum is the
 first logical tool with the promise to resolve most intractable disputes,
 ranging from the abortion debate to the middle east problem.

 Some people get it easily, and some require long discussions, so I'll
 post the Cliff Notes version here, and if you want it in smaller doses,
 just send me an off-line email and we can talk on the phone.

 Reductio ad absurdum has worked unerringly for centuries to test bad
 assumptions. This constitutes a proof by lack of counterexample that the
 ONLY way to reach an absurd result is by a bad assumption, as otherwise,
 reductio ad absurdum would sometimes fail.

 Hence, when two intelligent people reach conflicting conclusions, but
 neither can see any errors in the other's logic, it would seem that they
 absolutely MUST have at least one bad assumption. Starting from the
 absurdity

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Steve Richfield
Richard,

On 11/20/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 Steve Richfield wrote:

 Richard,
  Broad agreement, with one comment from the end of your posting...
  On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED] wrote:

Another, closely related thing that they do is talk about low level
issues without realizing just how disconnected those are from where
the real story (probably) lies.  Thus, Mohdra emphasizes the
importance of spike timing as opposed to average firing rate.

  There are plenty of experiments that show that consecutive closely-spaced
 pulses result when something goes off scale, probably the equivalent of
 computing Bayesian probabilities > 100%, somewhat akin to the overflow
 light on early analog computers. These closely-spaced pulses have a MUCH
 larger post-synaptic effect than the same number of regularly spaced pulses.
 However, as far as I know, this only occurs during anomalous situations -
 maybe when something really new happens, that might trigger learning?
  IMHO, it is simply not possible to play this game without having a close
 friend with years of experience poking mammalian neurons. This stuff is
 simply NOT in the literature.

He may well be right that the pattern or the timing is more
important, but IMO he is doing the equivalent of saying Let's talk
about the best way to design an algorithm to control an airport.
 First problem to solve:  should we use Emitter-Coupled Logic in the
transistors that are in our computers that will be running the
algorithms.

  Still, even with my above comments, you conclusion is still correct.


 The main problem is that if you interpret spike timing to be playing the
 role that you (and they) imply above, then you are commiting yourself to a
 whole raft of assumptions about how knowledge is generally represented and
 processed.  However, there are *huge* problems with that set of implicit
 assumptions  not to put too fine a point on it, those implicit
 assumptions are equivalent to the worst, most backward kind of cognitive
 theory imaginable.  A theory that is 30 or 40 years out of date.


OK, so how else do you explain that, in fairly well understood situations
like stretch receptors, the rate indicates the stretch UNLESS you
exceed the mechanical limit of the associated joint, whereupon you start
getting pulse doublets, triplets, etc. Further, these pulse groups have a
HUGE effect on post synaptic neurons. What does your cognitive science tell
you about THAT?



 The gung-ho neuroscientists seem blissfully unaware of this fact because
  they do not know enough cognitive science.


I stated a Ben's List challenge a while back that you apparently missed, so
here it is again.

*You can ONLY learn how a system works by observation, to the extent that
its operation is imperfect. Where it is perfect, it represents a solution to
the environment in which it operates, and as such, could be built in
countless different ways so long as it operates perfectly. Hence,
computational delays, etc., are fair game, but observed cognition and
behavior are NOT except to the extent that perfect cognition and behavior
can be described, whereupon the difference between observed and theoretical
contains the information about construction.*
**
*A perfect example of this is superstitious learning, which on its
surface appears to be an imperfection. However, we must use incomplete data
to make imperfect predictions if we are to ever interact with our
environment, so superstitious learning is theoretically unavoidable. Trying
to compute what is perfect for superstitious learning is a pretty
challenging task, as it involves factors like the regularity of disastrous
events throughout evolution, etc.*

If anyone has successfully done this, I would be very interested. This is
because of my interest in central metabolic control issues, wherein
superstitious red tagging appears to be central to SO many age-related
conditions. Now, I am blindly assuming perfection in neural computation
and proceeding on that assumption. However, if I could recognize and
understand any imperfections (none are known), I might be able to save
(another) life or two along the way with that knowledge.

Anyway, this suggests that much of cognitive science, which has NOT
computed this difference but rather is running with the raw data of
observation, is rather questionable at best. For reasons such as this, I
(perhaps prematurely and/or improperly) dismissed cognitive science rather
early on. Was I in error to do so?

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-19 Thread Steve Richfield
Ben:

On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 This sounds an awful lot like the Hegelian dialectical method...


Your point being?

We are all stuck in Hegel's Hell whether we like it or not. Reverse Reductio
ad Absurdum is just a tool to help guide us through it.

There seems to be a human tendency to say that something sounds an awful
lot like (something bad) to dismiss it, but the crucial thing is often the
details rather than the broad strokes. For example, the Communist Manifesto
detailed the coming fall of Capitalism, which we may now be seeing in the
current financial crisis. Sure, the solution proved to be worse than the
problem, but that doesn't mean that the identification of the problems was
in error.

From what I can see, ~100% of the (mis?)perceived threat from AGI comes from
a lack of understanding of RRAA (Reverse Reductio ad Absurdum), both by
those working in AGI and those by the rest of the world. This clearly has
the potential of affecting your own future success, so it is probably worth
the extra 10 minutes or so to dig down to the very bottom of it, understand
it, discuss it, and then take your reasoned position regarding it. After
all, your coming super-intelligent AGI will probably have to master RRAA to
be able to resolve intractable disputes, so you will have to be on top of
RRAA if you are to have any chance of debugging your AGI.

Steve Richfield
==

  On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield 
 [EMAIL PROTECTED] wrote:

 Martin,

 On 11/18/08, martin biehl [EMAIL PROTECTED] wrote:

 I don't know what reverse reductio ad absurdum is, so it may not be a
 precise counterexample, but I think you get my point.


 HERE is the crux of my argument, as other forms of logic fall short of
 being adequate to run a world with. Reverse Reductio ad Absurdum is the
 first logical tool with the promise to resolve most intractable disputes,
 ranging from the abortion debate to the middle east problem.

 Some people get it easily, and some require long discussions, so I'll post
 the Cliff Notes version here, and if you want it in smaller doses, just
 send me an off-line email and we can talk on the phone.

 Reductio ad absurdum has worked unerringly for centuries to test bad
 assumptions. This constitutes a proof by lack of counterexample that the
 ONLY way to reach an absurd result is by a bad assumption, as otherwise,
 reductio ad absurdum would sometimes fail.

 Hence, when two intelligent people reach conflicting conclusions, but
 neither can see any errors in the other's logic, it would seem that they
 absolutely MUST have at least one bad assumption. Starting from the
 absurdity and searching for the assumption is where the reverse in reverse
 reductio ad absurdum comes in.

 If their false assumptions were different, then one or both parties would
 quickly discover them in discussion. However, when the argument stays on the
 surface, the ONLY place remaining to hide an invalid assumption is that they
 absolutely MUST share the SAME invalid assumptions.

 Of course if our superintelligent AGI approaches them and points out their
 shared invalid assumption, then they would probably BOTH attack the AGI, as
 their invalid assumption may be their only point of connection. It appears
 that breaking this deadlock absolutely must involve first teaching both
 parties what reverse reductio ad absurdum is all about, as I am doing here.

 For example, take the abortion debate. It is obviously crazy to be making
 and killing babies, and it is a proven social disaster to make this illegal
 - an obvious reverse reductio ad absurdum situation.

 OK, so let's look at societies where abortion is no issue at all, e.g.
 Muslim societies, where it is freely available, but no one gets them. There,
 children are treated as assets, where in all respects we treat them as
 liabilities. Mothers are stuck with unwanted children. Fathers must pay
 child support. They can't be bought or sold. There is no expectation that
 they will look after their parents in their old age, etc.

 In short, BOTH parties believe that children should be treated as
 liabilities, but when you point this out, they dispute the claim. Why should
 mothers be stuck with unwanted children? Why not allow sales to parties who
 really want them? There are no answers to these and other similar questions
 because the underlying assumption is clearly wrong.

 The middle east situation is more complex but constructed on similar
 invalid assumptions.

 Are we on the same track now?

 Steve Richfield
  

 2008/11/18 Steve Richfield [EMAIL PROTECTED]

  To all,

 I am considering putting up a web site to filter the crazies as
 follows, and would appreciate all comments, suggestions, etc.

 Everyone visiting the site would get different questions, in different
 orders, etc. Many questions would have more than one correct answer, and in
 many cases, some combinations of otherwise

Re: Seed AI (was Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...)

2008-11-19 Thread Steve Richfield
Back to reality for a moment...

I have greatly increased the IQs of some pretty bright people since I
started doing this in 2001 (the details are way off topic here, so contact
me off-line for more if you are interested), and now, others are also doing
this. I think that these people give us a tiny glimpse into what directions
an AGI might do. Here are my impressions:

1. They come up with some really bright stuff, like Mike's FQ theory of how
like-minded groups of people tend to stagnate technology, which few people
can grasp in the minute or so that is available to interest other people.
Hence, their ideas do NOT spread widely, except among others who are bright
enough to get it fairly quickly. From what I have seen, their enhanced IQs
haven't done much for their life success as measured in dollars, but they
have gone in very different directions than they were previously headed, now
that they have some abilities that they didn't previously have.

2.  Enhancing their IQs did NOT seem to alter their underlying belief
system. For example, Dan was and still remains a Baptist minister. However,
he now reads more passages as being metaphorical. We have no problem
carrying on lively political and religious discussions from our VERY
different points of view, with each of us translating our thoughts into the
other's paradigm.

3.  Blind ambition seemed to disappear, being replaced with a long view of
things. They seem to be nicer people for the experience. However, given
their long view, I wouldn't ever recommend becoming an adversary, as they
have no problem with gambits - loosing a skirmish to facilitate winning a
greater battle. If you think you are winning, then you had best stop and
look where this might all end up.

4.  They view most people a little like honey bees - useful but stupid. They
often attempt to help others by pointing them in better directions, but
after little/no success for months/years, they eventually give up and just
let everyone destroy their lives and kill themselves. This results in what
might at first appear to be a callous disregard for human life, but which in
reality is just a realistic view of the world. I suspect that future AGIs
would encounter the same effect.

Hence, unless/until someone displays some reason why an AGI might want to
take over the world, I remain unconcerned. What DOES concern me is stupid
people who think that the population can be controlled, without allowing for
the few bright people who can figure out how to be the butterfly that starts
the hurricane, as chaos theory presumes non-computability of things that, if
computable, will be computed. The resulting hurricane might be blamed on the
butterfly, when in reality, there would have been a hurricane anyway - it
just would have been somewhat different. In short, don't blame the AGI for
the fallen bodies of those who would exert unreasonable control.

I see the hope for the future being in the hands of these cognitively
enhanced people. It shouldn't be too much longer until these people start
rising to the top of the AI (and other) ranks. Imagine Loosemore with dozens
more IQ points and the energy to go along with it. Hence, it will be these
people who will make the decisions as to whether we have AGIs and what their
place in the future is.

Then, modern science will be reformed enough to avoid having unfortunate
kids have their metabolic control systems trashed by general anesthetics,
etc. (now already being done at many hospitals, including U of W and
Evergreen here in the Seattle area), and we will stop making people who can
be cognitively enhanced. Note that for every such candidate person, there
are dozens of low IQ gas station attendants, etc., who were subjected to the
same stress, but didn't do so well. Then, either we will have our AGIs in
place, or with no next generation of cognitively enhanced people, we will be
back to the stone age of stupid people. Society has ~50 years to make their
AGI work before this generation of cognitively enhanced people is gone.

Alternatively, some society might intentionally trash kids' metabolisms just
to induce this phenomenon, as a means to secure control when things crash.
At that point, either there is an AGI to take over, or that society will
take over.

In short, this is a complex area that is really worth understanding if you
are interested in where things are going.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


[agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
To all,

I am considering putting up a web site to filter the crazies as follows,
and would appreciate all comments, suggestions, etc.

Everyone visiting the site would get different questions, in different
orders, etc. Many questions would have more than one correct answer, and in
many cases, some combinations of otherwise reasonable individual answers
would fail. There would be optional tutorials for people who are not
confident with the material. After successfully navigating the site, an
applicant would submit their picture and signature, and we would then
provide a license number. The applicant could then provide their name and
number to 3rd parties to verify that the applicant is at least capable of
rational thought. This information would look much like a driver's license,
and could be printed out as needed by anyone who possessed a correct name
and number.
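
Purely as a sketch of the quiz mechanics just described (randomized question
subsets, multiple acceptable answers per question, and combinations of
otherwise-acceptable answers that fail together); the questions and rules
below are placeholders, not the intended content of such a site:

    import random

    # Placeholder question bank: each question may have several acceptable answers.
    QUESTIONS = {
        "q1": {"a", "b"},
        "q2": {"c"},
        "q3": {"d", "e"},
    }
    # Combinations of individually acceptable answers that fail when taken together.
    BAD_COMBINATIONS = [{"q1": "b", "q3": "e"}]

    def draw_exam(rng, k=2):
        """Each visitor gets a different subset of questions, in a different order."""
        return rng.sample(sorted(QUESTIONS), k)

    def passes(answers):
        if any(a not in QUESTIONS[q] for q, a in answers.items()):
            return False
        return not any(all(answers.get(q) == a for q, a in combo.items())
                       for combo in BAD_COMBINATIONS)

    rng = random.Random(42)
    print(draw_exam(rng))                   # e.g. ['q1', 'q3']
    print(passes({"q1": "a", "q3": "e"}))   # True
    print(passes({"q1": "b", "q3": "e"}))   # False - the combination fails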

The site would ask a variety of logical questions, most especially probing
into:
1.  Their understanding of Reverse Reductio ad Absurdum methods of resolving
otherwise intractable disputes.
2.  Whether they belong to or believe in any religion that supports various
violent acts (with quotes from various religious texts). This would exclude
pretty much every religion, as nearly all religions condone useless violence
of various sorts, or the toleration or exposure of violence toward others.
Even Buddhists resist MAD (Mutually Assured Destruction) while being unable
to propose any potentially workable alternative to nuclear war. Jesus
attacked the money changers with no hope of benefit for anyone. Mohammad
killed the Jewish men of Medina and sold their women and children into
slavery, etc., etc.
3.  A statement in their own words that they hereby disavow allegiance
to any non-human god or alien entity, and that they will NOT follow the
directives of any government led by people who would obviously fail this
test. This statement would be included on the license.

This should force many people off of the fence, as they would have to choose
between sanity and Heaven (or Hell).

Then, Ben, the CIA, diplomats, etc., could verify that they are dealing with
people who don't have any of the common forms of societal insanity. Perhaps
the site should be multi-lingual?

Any and all thoughts are GREATLY appreciated.

Thanks

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Martin,

On 11/18/08, martin biehl [EMAIL PROTECTED] wrote:

 I don't know what reverse reductio ad absurdum is, so it may not be a
 precise counterexample, but I think you get my point.


HERE is the crux of my argument, as other forms of logic fall short of being
adequate to run a world with. Reverse Reductio ad Absurdum is the first
logical tool with the promise to resolve most intractable disputes, ranging
from the abortion debate to the middle east problem.

Some people get it easily, and some require long discussions, so I'll post
the Cliff Notes version here, and if you want it in smaller doses, just
send me an off-line email and we can talk on the phone.

Reductio ad absurdum has worked unerringly for centuries to test bad
assumptions. This constitutes a proof by lack of counterexample that the
ONLY way to reach an absurd result is by a bad assumption, as otherwise,
reductio ad absurdum would sometimes fail.

Hence, when two intelligent people reach conflicting conclusions, but
neither can see any errors in the other's logic, it would seem that they
absolutely MUST have at least one bad assumption. Starting from the
absurdity and searching for the assumption is where the reverse in reverse
reductio ad absurdum comes in.

If their false assumptions were different, then one or both parties would
quickly discover them in discussion. However, when the argument stays on the
surface, the ONLY place remaining to hide an invalid assumption is that they
absolutely MUST share the SAME invalid assumptions.
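
As a toy rendering of that inference (the assumption sets below are invented
placeholders, assuming each party's reasoning is sound given its own
assumptions):

    def candidate_shared_assumptions(party_a, party_b):
        """If both chains of logic are sound yet the conclusions conflict,
        the invalid assumption must be one that BOTH parties hold."""
        return party_a & party_b

    a = {"children are liabilities", "premise only A holds"}
    b = {"children are liabilities", "premise only B holds"}
    print(candidate_shared_assumptions(a, b))   # {'children are liabilities'}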

Of course if our superintelligent AGI approaches them and points out their
shared invalid assumption, then they would probably BOTH attack the AGI, as
their invalid assumption may be their only point of connection. It appears
that breaking this deadlock absolutely must involve first teaching both
parties what reverse reductio ad absurdum is all about, as I am doing here.

For example, take the abortion debate. It is obviously crazy to be making
and killing babies, and it is a proven social disaster to make this illegal
- an obvious reverse reductio ad absurdum situation.

OK, so let's look at societies where abortion is no issue at all, e.g. Muslim
societies, where it is freely available, but no one gets them. There,
children are treated as assets, where in all respects we treat them as
liabilities. Mothers are stuck with unwanted children. Fathers must pay
child support. They can't be bought or sold. There is no expectation that
they will look after their parents in their old age, etc.

In short, BOTH parties believe that children should be treated as
liabilities, but when you point this out, they dispute the claim. Why should
mothers be stuck with unwanted children? Why not allow sales to parties who
really want them? There are no answers to these and other similar questions
because the underlying assumption is clearly wrong.

The middle east situation is more complex but constructed on similar invalid
assumptions.

Are we on the same track now?

Steve Richfield
 

 2008/11/18 Steve Richfield [EMAIL PROTECTED]

  To all,

 I am considering putting up a web site to filter the crazies as follows,
 and would appreciate all comments, suggestions, etc.

 Everyone visiting the site would get different questions, in different
 orders, etc. Many questions would have more than one correct answer, and in
 many cases, some combinations of otherwise reasonable individual answers
 would fail. There would be optional tutorials for people who are not
 confident with the material. After successfully navigating the site, an
 applicant would submit their picture and signature, and we would then
 provide a license number. The applicant could then provide their name and
 number to 3rd parties to verify that the applicant is at least capable of
 rational thought. This information would look much like a driver's license,
 and could be printed out as needed by anyone who possessed a correct name
 and number.

 The site would ask a variety of logical questions, most especially probing
 into:
 1.  Their understanding of Reverse Reductio ad Absurdum methods of
 resolving otherwise intractable disputes.
 2.  Whether they belong to or believe in any religion that supports
 various violent acts (with quotes from various religious texts). This would
 exclude pretty much every religion, as nearly all religions condone useless
 violence of various sorts, or the toleration or exposure of violence toward
 others. Even Buddhists resist MAD (Mutually Assured Destruction) while being
 unable to propose any potentially workable alternative to nuclear war. Jesus
 attacked the money changers with no hope of benefit for anyone. Mohammad
 killed the Jewish men of Medina and sold their women and children into
 slavery, etc., etc.
 3.  A statement in their own words that they hereby disavow allegiance
 to any non-human god or alien entity, and that they will NOT follow the
 directives of any government led by people

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Bob,

On 11/18/08, Bob Mottram [EMAIL PROTECTED] wrote:

 2008/11/18 Steve Richfield [EMAIL PROTECTED]:
  I am considering putting up a web site to filter the crazies as
 follows,
  and would appreciate all comments, suggestions, etc.


 This all sounds peachy in principle, but I expect it would exclude
 virtually everyone except perhaps a few of the most diehard
 philosophers.


My goal is to identify those people who:
1.  Are capable of rational thought, whether or not they choose to use that
ability. I plan to test this with some simple problem solving.
2.  Are not SO connected with some shitforbrains religious group/belief that
they would predictably use dangerous technology to harm others. I plan to
test this by simply demanding a declaration, which would send most such
believers straight to Hell.

Beyond that, I agree that it starts to get pretty hopeless.

I think most people have at least a few beliefs which
 cannot be strictly justified rationally, and that would include many
 AI researchers.


... and probably include both of us as well.

Irrational or inconsistent beliefs originate from
 being an entity with finite resources - finite experience and finite
 processing power and time with which to analyze the data.  Many people
 use quick lookups handed to them by individuals considered to be of
 higher social status, principally because they don't have time or
 inclination to investigate the issues directly themselves.


However, when someone (like me) points out carefully selected passages that
are REALLY crazy, then do they re-evaluate, or continue to accept everything
they see in the book?

In religion and politics people's beliefs and convictions are in
 almost every case gotten at second-hand, and without examination, from
 authorities who have not themselves examined the questions at issue
 but have taken them at second-hand from other non-examiners, whose
 opinions about them were not worth a brass farthing. - Mark Twain


I completely agree. The question here is whether these people are capable of
questioning and re-evaluation. If so, then they get their license.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Ben,

On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:


  3.  A statement in their own words that they hereby disavow allegiance
 to any non-human god or alien entity, and that they will NOT follow the
 directives of any government led by people who would obviously fail this
 test. This statement would be included on the license.



 Hmmm... don't I fail this test every time I follow the speed limit ?   ;-)


I don't think I stated this well, and perhaps you might be able to say it
better.

If your government wants you to go out and kill people, or help others to go
out and kill people, and you don't see some glimmer of understanding from
the leaders that this is really stupid, then perhaps you shouldn't
contribute to such insanity.

Then, just over this fence to help define the boundary...

Look at the Star Wars anti-missile defense system. It can't possibly ever
work well, as countermeasures are SO simple to implement. However, it was
quite effective in bankrupting the Soviet Union, while people like me were
going around and lecturing about horrible waste of public resources it was.

In short, I think that re-evaluation is necessary at about the point where
blood starts flowing. What are your thoughts?

 As another aside, it seems wrong to accuse Buddhists of condoning violence
 because they don't like MAD (which involves stockpiling nukes) ... you could
 accuse them of foolishness perhaps (though I don't necessarily agree) but
 not of condoning violence


I have hours of discussion with Buddhists invested in this. I have no
problem at all with them getting themselves killed, but I have a BIG problem
with their asserting their beliefs to get OTHERS killed. If we had a
Buddhist President who kept MAD from being implemented, there is a pretty
good chance that we would not be here to have this discussion.

As an aside, when you look CAREFULLY at the events that were unfolding as
MAD was implemented, there really isn't anything at all against Buddhist
beliefs in it - just a declaration that if you attack me, that I will attack
in return, but without restraint against civilian targets.

 My feeling is that with such a group of intelligent and individualistic
 folks as transhumanists and AI researchers are, any  litmus test for
 cognitive sanity you come up with is gonna be quickly revealed to be full
 of loopholes that lead to endless philosophical discussions... so that in
 the end, such a test could only be used as a general guide, with the
 ultimate cognitive-sanity-test to be made on a qualitative basis


I guess that this is really what I was looking for - just what is that
basis? For example, if someone can lie and answer questions in a logical
manner just to get their license, then they have proven that they can be
logical, whether or not they choose to be. I think that is about as good as
is possible.

 In a small project like Novamente, we can evaluate each participant
 individually to assess their thought process and background.  In a larger
 project like OpenCog, there is not much control over who gets involved, but
 making people sign a form promising to be rational and cognitively sane
 wouldn't seem to help much, as obviously there is nothing forcing people to
 be honest...


... other than their sure knowledge that they will go directly to Hell for
even listening and considering such as we are discussing here.

The Fiq is a body of work outside the Koran that is part of Islam, which
includes stories of Mohamed's life, etc. Therein the boundary is precisely
described.

Islam demands that anyone who converts from Islam be killed.

One poor fellow watched both of his parents refuse to renounce Islam, and
then be killed by invaders. When it came to his turn, he quickly renounced
to save his life. Now that he was being considered for execution, the ruling
from Mohamed: If they ask you again, then renounce again. and he was
released.

BTW, it would be really stupid of me to try to enforce a different standard
than you and other potential users of such a site would embrace, so my goal
here is not only to discuss potential construction of such a site, but also
to discuss just what that standard is. Hence, take my words as open for
editing.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Richard and Bill,

On 11/18/08, BillK [EMAIL PROTECTED] wrote:

 On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:
  I see how this would work:  crazy people never tell lies, so you'd be
 able
  to nail 'em when they gave the wrong answers.

Yup. That's how they pass lie detector tests as well.

 They sincerely believe the garbage they spread around.


In 1994 I was literally sold into servitude in Saudi Arabia as a sort of
slave programmer (In COBOL on HP-3000 computers) to the Royal Saudi Air
Force. I managed to escape that situation with the help of the same
Wahhabist Sunni Muslims that are now causing so many problems. With that
background, I think I understand them better than most people.

As in all other societies, they are not given the whole truth, e.g. most
have never heard of the slaughter at Medina, and believe that Mohamed never
hurt anyone at all.

My hope and expectation is that, by allowing people to research various
issues as they work on their test, that a LOT of people who might otherwise
fail the test will instead reevaluate their beliefs, at least enough to come
up with the right answers, whether or not they truly believe them. At least
that level of understanding assures that they can carry on a reasoned
conversation. This is a MAJOR problem now. Even here on this forum, many
people still don't get *reverse* reductio ad absurdum.

BTW, I place most of the blame for the middle east impasse on the West
rather than on the East. The Koran says that most of the evil in the world
is done by people who think they are doing good, which brings with it a good
social mandate to publicly reconsider and defend any actions that others
claim to be evil. The next step is to proclaim evil doers as unwitting
agents of Satan. If there is still no good defense, then they drop the
unwitting. Of course, us stupid uncivilized Westerners have fallen into
this, and so 19 brave men sacrificed their lives just to get our attention,
but even that failed to work as planned. Just what DOES it take to get our
attention - a nuke in NYC? What the West has failed to realize is that they
are playing a losing hand, but nonetheless, they just keep increasing the
bet on the expectation that the other side will fold. They won't. I was as
much intending my test for the sort of stupidity that nearly all Americans
harbor as that carried by Al Queda. Neither side seems to be playing with a
full deck.

Steve Richfield



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Matt and Mark,

I think you both missed my point, but in different ways, namely, that there
is a LOT of traffic here on this forum over a problem that appears easy to
resolve once and for all time, and further, that the solution may work for
much more important worldwide social problems.

Continuing with responses to specific points...

On 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

   Seed AI is a myth.
 Ah.  Now I get it.  You are on this list solely to try to slow down
 progress as much as possible . . . . (sorry that I've been so slow to
 realize this)


No. Like you, we are all trying to put this OT issue out of our misery. I do
appreciate Matt's efforts, misguided though they may be.

Continuing with Matt's comments...

  *From:* Matt Mahoney [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, November 18, 2008 8:23 PM
 *Subject:* **SPAM** Re: [agi] My prospective plan to neutralize AGI and
 other dangerous technologies...


   Steve, what is the purpose of your political litmus test?



 I had no intention at all of imposing any sort of political test, beyond
simply looking for some assurance that they weren't about to use the
technology to kill anyone who wasn't in desperate need of being killed.

   If you are trying to assemble a team of seed-AI programmers with the
 correct ethics, forget it. Seed AI is a myth.



I agree, though my reasoning may be a bit different from yours. Why would
any thinking machine ever want to produce a better thinking machine?
Besides, I can take bright but long-term low-temp people like Loosemore, who
appears to be an absolutely perfect candidate, and make them super-humanly
intelligent by simply removing the impairment they have learned to live
with. In Loosemore's case, this is probably the equivalent of several
alcoholic drinks, yet he is pretty bright even with that impairment. I would
ask you to imagine what he would be without that impairment, but it may well
be beyond the ability of anyone here to imagine, and well on the way to a
seed, though I suspect that with much more intelligence than he already has,
he would question that goal.

Thanks everyone for your comments.

Steve Richfield

Re: [agi] Whole Brain Emulation (WBE) - A Roadmap

2008-11-06 Thread Steve Richfield
Richard,

On 11/5/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 When the system is built, there will inevitably be bugs:  chunks of data
 that are corrupted along the way.  But if those bugs cause the final system
 to misbehave, there will be no way to track them down, because there will
 effectively be no way to test functional subsystems.  The debugging will be
 almost blind.


Note that people can suffer an amazing amount of brain damage, so a few
errors shouldn't be completely disastrous. I am more worried about being
able to read out component values to sufficient precision.

Also, my scanning UV fluorescence microscope plan would lose NOTHING, even
though there may be minor malfunctions during processing. The trick is to
look into the surface of the brain, then cut off some of what you have
already diagrammed and do some more. If you cut off a little too little or a
little too much, you are still OK provided that you don't cut off more than
~6 microns at a time.
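
To make the procedure concrete, here is a minimal Python sketch of that
scan-and-cut loop. Everything in it is an illustrative assumption except the
~6 micron limit mentioned above: the function names stand in for whatever the
real microscope and microtome control code would be, and the 4 micron nominal
cut is just a number chosen to stay safely under the imaging depth. The point
is only that each scan overlaps the previous one, so an imprecise cut loses
nothing as long as it never exceeds the depth already imaged.

import random

# Illustrative parameters.  The ~6 micron figure comes from the plan above;
# everything else is assumed purely for illustration.
IMAGING_DEPTH_UM = 6.0   # how far one UV fluorescence scan sees into the tissue
NOMINAL_CUT_UM = 4.0     # aim to remove less than the imaged depth each pass

def image_surface_to_depth(depth_um):
    """Stand-in for one UV fluorescence scan of the exposed surface."""
    return {"imaged_depth_um": depth_um}

def remove_top_layer(nominal_um):
    """Stand-in for one cut; real cuts are imprecise, so add some error."""
    return nominal_um + random.uniform(-1.0, 1.0)

def diagram_block(block_height_um):
    """Image, cut, repeat.  Nothing is lost as long as every cut stays
    shallower than the depth already imaged, because successive scans
    overlap and can be registered against each other."""
    volumes, depth_removed = [], 0.0
    while depth_removed < block_height_um:
        volumes.append(image_surface_to_depth(IMAGING_DEPTH_UM))
        actual_cut = remove_top_layer(NOMINAL_CUT_UM)
        if actual_cut >= IMAGING_DEPTH_UM:
            raise RuntimeError("cut exceeded the imaged depth; data lost")
        depth_removed += actual_cut
    return volumes

print(len(diagram_block(100.0)), "overlapping volume scans collected")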

It would be comparable to you trying to implement the software required to
 run the entire air traffic control system of the United States by copying
 down the code that is read out to you over a noisy telephone line by someone
 who does not understand the code they are reading to you.


Not really, because there are LOTS of opportunities for error correction,
e.g. if a neuron is performing some sort of Bayesian computation, then its
synaptic efficacies should add up to 1.0, etc. However, this sort of
correction requires better NN/computational theory than we now have. I also
claim (but you will disagree, so spare yourself the wear and tear on your
keyboard) that exactly these same lapses in theory will eventually doom
present AGI efforts, even though there are no neuron-equivalents in the
code.
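
As a minimal sketch of the kind of correction I mean - assuming, purely for
illustration, that a correctly read-out neuron's incoming synaptic efficacies
form a probability distribution - one could flag and renormalize any neuron
whose scanned weights fail to sum to ~1.0. The data layout, the 0.05
tolerance, and the function name below are my own assumptions, not anything
from the roadmap; a real correction scheme would need the better theory I
just mentioned.

def correct_synaptic_efficacies(neurons, tolerance=0.05):
    """Flag and renormalize neurons whose read-out synaptic efficacies do not
    sum to ~1.0, assuming a correctly scanned 'Bayesian' neuron's incoming
    weights form a probability distribution."""
    corrected = {}
    for neuron_id, weights in neurons.items():
        total = sum(weights)
        if total <= 0:
            # Nothing recoverable from this read-out; leave it for other methods.
            corrected[neuron_id] = weights
            continue
        if abs(total - 1.0) > tolerance:
            # Likely a scanning/transcription error: rescale so the efficacies
            # again sum to 1.0.
            weights = [w / total for w in weights]
        corrected[neuron_id] = weights
    return corrected

# Example: the second neuron's weights were mis-read and get renormalized.
scanned = {"n1": [0.5, 0.3, 0.2], "n2": [0.9, 0.6, 0.3]}
print(correct_synaptic_efficacies(scanned))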

At the end of the day, if you end up with some problems in the code because
 you transcribed it wrong, how would you even begin to debug it?


If you got the basic neurons right, it will self-correct all by itself.

And if you heard that someone was thinking of doing such a project, would
 you not expect them to have a comprehensive plan for dealing with this
 problem, before they rush in and ask for billions of dollars to start
 collecting data?


Hopefully, some of that money will go toward refining the theory.

This report - which is supposed to be a comprehensive look at the
 feasibility of WBE - makes almost no mention of this difficulty except
 toward the end, where it includes a passing reference to the fact that new
 types of debugging techniques will be required.


Obviously they have a screw loose, but I believe that this problem IS
doable. I also agree with you that it is a BIG problem, because it
absolutely requires new mathematics to ever get there.

Given that this is one of the most serious objections to the WBE idea, I
 would have expected at least half of the document to deal with the issue.

 The fact that they have not done this confirms my suspicion that work on
 WBE is, at this point in time, a wild goose chase.  Good for keeping
 neuroscientists employed, but of little value otherwise.


Neuroscientists are probably the worst-suited group you could find for this.
They are NOT oriented toward making working hardware, there isn't a
mathematician among them, etc.

Steve Richfield





Re: [agi] OT: More thoughts Technological Censorship

2008-11-05 Thread Steve Richfield
 like I have
 much time to deal with these matters, which aren't strictly my own
 business.  But you can make use of the above offer of small assistance if
 you like.


Thanks yet again for your offer.

My feeling is that there are real futurists, and then there are people who
pretend to be futurists. How do you tell the difference? Just ask them what
they are DOING to get there.

There is an interesting in-between person who might be used to better
position the line between real futurists and pretend futurists, and that is
Aubrey de Grey, who will also be at the conference. Watch and listen to him
before forming an opinion. Aubrey has published his 7 barriers to longevity,
but has done NO real-world wet lab research, seen no patients, helped no
elderly people, etc. Here, Aubrey is a pure theorist - a little like me on
this AGI forum - and as such is completely at the mercy of an often
erroneous research community which, just like the Computer Science
community, has its own assortment of well-fastened blinders on. It appears
to me that Aubrey has drawn some well-reasoned conclusions from some rather
questionable data. At a minimum he has propelled various efforts (and
possibly stunted others), which almost certainly has some value, regardless
of the validity of his conclusions.

Steve Richfield





Re: [agi] OT: More thoughts Technological Censorship

2008-11-05 Thread Steve Richfield
Ben,

On 11/5/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 Glad the convergence08 wiki issues were resolved ...

 About futurists: I don't think that term should be reserved for those who
 are actively working as scientists or engineers ...

 About Aubrey: I don't buy the argument that theoretical biology isn't real
 biology.  I think we need more theoretical biology.  In physics no one says
 a theoretician isn't a real physicist!!!


As I explained, I am on the fence here. In physics, a theoretical
physicist clearly identifies the unproven assumptions, and experimental
physicists get right to testing those assumptions. Further, they all work
in the same Physics Building at the University, talk things over with each
other in the coffee room, etc. It is EXTREMELY rare for someone like Einstein
to figure something important out in a vacuum (no pun intended).

More to your interests. After I wrote my first NN program, I took a 2-year
job at the University of Washington Department of Neurological Surgery, and
not only learned just about everything then known about neurons, but I also
learned how shaky that knowledge was, etc. I got more from the coffee room
than from everywhere else combined.

Fast forward 30 years. The head of the department is now the head of
research for the entire U of W medical center. He recently commented that
the BIGGEST loss to neural research was the loss of the coffee room when
molecular biology drove a wedge through the field.

In summary, I completely agree that we need theoretical people. However,
once a theory is on the table you simply can't stop there. If you know
something/anything new about aging, there is probably someone out there
whose life you could easily save (at least for a few years) with that
knowledge. If no such person exists, then your knowledge is probably
situated within some useless paradigm. In short, at least with longevity,
there is simply no excuse for not trying things out, as there is certainly
no shortage of experimental subjects.

Note that I have posted that I often work with elderly people to cure
whatever is presently killing them, but I have not (yet) worked with mice.
The explanation is simple: Human subjects find me - I don't even have to
look for them. They pay for their own lab work, etc. They are
self-maintaining, etc.

I have contacted Aubrey's Methuselah Foundation for help in locating a
source of mice for others with my interest to practice on before moving on
to people, but so far without success. They seem to be interested in my
approaches, but the labs that supply the mice are concerned about possible
press blowback when I turn mice over to the grandchildren of elderly people
to practice on before working on grandma, who of course has absolutely NO
other access to competent help, as the medical establishment has already
written her off and only wants to prescribe pain meds until she dies.

Now, as of yesterday, they can simply euthanize grandma, so what's the
problem?

Note that I had no trouble finding a veterinarian who would do the autopsies
of countless mice FOR FREE just to get this field moving.

I even talked to one lady whose job it is to euthanize mice at the
University of Washington. Certainly, if the mice were to become someone's
pets as their owner tried various things, it would be a MUCH better future
for the mice than a needle prick followed by nothing. I could just as well
have been talking to a stone wall.

Hence, there is now an ever-growing population of people who are
experimenting directly on grandma, having been given no other rational
choice by a system gone berserk.

In this crazy light, I cut Aubrey no slack at all, but still remain
open-minded about whether he is a real futurist or a pretend futurist.
Perhaps only time will tell.

Steve Richfield





Re: [agi] OT: More thoughts Technological Censorship

2008-11-05 Thread Steve Richfield
Ben,

On 11/5/08, Ben Goertzel [EMAIL PROTECTED] wrote:



 As I explained, I am on the fence here. In physics, a theoretical
 physicist clearly identifies the unproven assumptions, and experimental
 physicists get right to testing those assumptions.


 This isn't really true though.  For instance string theory is controversial
 because no one really knows how to use it to make experimental predictions
 about anything currently  measurable...


In part, this very test is behind the billions of dollars (don't you wish
we had that kind of money?) being spent on the new CERN accelerator. They
hope, and sort of expect, to see some evidence when the thing gets up to
full power.



  Further, they all work in the same Physics Building at the University,
 talk things over with each other in the coffee room, etc. It is EXTREMELY
 rare for someone like Einstein to figure something important out in a vacuum
 (no pun intended).


 Hmmm... the Institute for Advanced Study has no lab, for instance ...


I once visited Seppo Sari, a PhD physicist friend who was doing work there.
I saw with my own eyes his variable-frequency tunable laser, the world's
first. Seppo went on to bigger things, including working on the Star Wars
free-electron laser at Boeing (hint: they are NOT free at all, but are
rather expensive), Stealth paint, etc. There were many other such
experiments in the basement there - a VERY impressive place to visit.

Anyway, experimental physics is very much alive and well there - you just
have to go into the basement to see it. They also have offices at other
experimental facilities, including CERN mentioned above.


 In summary, I completely agree that we need theoretical people. However,
 once a theory is on the table you simply can't stop there. If you know
 something/anything new about aging, there is probably someone out there
 whose life you could easily save (at least for a few years) with that
 knowledge. If no such person exists, then your knowledge is probably
 situated within some useless paradigm. In short, at least with longevity,
 there is simply no excuse for not trying things out, as there is certainly
 no shortage of experimental subjects.


 To me that's like saying: if you know something/anything new about energy,
 there is probably some way you can make a better power plant with that
 knowledge.


Probably, though not always true. Certainly, early discoveries quickly led
to our present energy-based society, and Tesla's attempts to make broadcast
energy work quickly put that concept out of our collective misery.

 But science doesn't work that way  It can be a long path from
 theoretical understanding to practical application, involving many people...


Sometimes true, though not always. In any case, it doesn't do much good to
build a grand theory based on erroneous models and observations, which is
where this discussion started. Right now I am able to do many of the very
same things, affecting the very same mechanisms, that Aubrey hopes to be
able to do in coming decades with esoteric technology that no one has any
idea how to build. The fact that I do this with SUCH pedestrian methods
seems to be exasperating to everyone.

Mapping this into your space, this is akin to my statements:

1.  That manually applying Reverse Reductio ad Absurdum methods to
intractable disputes will meet or exceed anything that any future AGI might
accomplish, or

2.  That trivial AI like Dr. Eliza can solve difficult problems that, for
many subtle reasons (what difficulty tells us about problem structure,
etc.), are at the very outer reaches of the hopes for AGIs.

My point here is NOT that AGIs are a waste of time and electricity, but
rather that some people (I don't think you are one of them) are targeting
the wrong applications.

 Anyway this is getting way off-topic for the AGI list..


Note the subject line - OT for Off Topic, so that people who wish to stay on
topic can simply skip over these postings. I suspect that good netiquette
will solve most/all of the prior complaints about postings here, and who
better to start this than me?! I also suspect that many of the members here
are NOT reading this thread because it has OT on it.

I'm really looking forward to meeting you at Convergence08. I'd gladly trade
a dinner for a cook's tour of Novamente, et al. Perhaps others here would
like to be in on this.

Steve Richfield





  1   2   3   >