[agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread Brad Paulsen
Dear Fellow AGI List Members:

Just thought I'd remind the good members of this list about some strategies for 
dealing with certain types of postings.  

Unfortunately, the field of AI/AGI is one of those areas where anybody with a 
pulse and a brain thinks they can design a program that thinks.  Must be 
easy, right?  I mean, I can do it, so how hard can it be to put me in a can? 
 Well, that's what some very smart people in the 1940s, '50s, and into the 
1960s thought.  They were wrong.  Most of them now admit it.  So, on 
AI-related lists, we have to be very careful about the kinds of conversations 
on which we spend our valuable time.  Here are some guidelines.  I realize most 
people here know this stuff already.  This is just a gentle reminder.

If a posting makes grandiose claims, is dismissive of mainstream research, 
techniques, and institutions, or its author claims special knowledge that has 
apparently been missed (or dismissed) by all of the brilliant 
scientific/technical minds who go to their jobs at major corporations and 
universities every day (and are paid for doing so), and by every Nobel 
Laureate of the last 20 years, the posting should be ignored.  DO NOT RESPOND 
to these types of postings, positively or negatively.  The poster is, 
obviously, either irrational or one of the greatest minds of our time.  In the 
former case, you know they're full of it, I know they're full of it, but they 
will NEVER admit it.  You will never win an argument with an irrational 
individual.  In the latter case, stop and ask yourself: why is somebody that 
fantastically smart posting to this mailing list?  He or she is, obviously, 
smarter than everyone here.  Why would such a person need us to validate his 
or her accomplishments or knowledge by posting on this list?  He or she should 
have better things to do and, besides, we probably wouldn't be able to 
understand (let alone appreciate) his or her genius anyhow.

The only way to deal with postings like this is to IGNORE THEM.  Don't rise to 
the bait.  Like a bad cold, they will be irritating for a while, but they will, 
eventually, go away.

Cheers,

Brad

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread Bob Mottram
Good advice.  There are of course sometimes people who are ahead of the
field, but in conversation you'll usually find that the genuine innovators
have a deep, bordering on obsessive, knowledge of the field they're
working in and are willing to demonstrate/test their claims to anyone even
remotely interested.




On 14/04/2008, Brad Paulsen [EMAIL PROTECTED] wrote:

 [snip: Brad's original message, quoted in full above]



Re: [agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread A. T. Murray
Bob Mottram writes:

 Good advice.  There are of course sometimes 
 people who are ahead of the field, 

Like Ben Goertzel (glad to send him a referral
recently from South Africa on the OpenCog list :-)

 but in conversation you'll usually find that the 
 genuine innovators have a deep - bordering on obsessive - 
 knowledge of the field that they're working in and 
 are willing to demonstrate/test their claims 

http://mind.sourceforge.net/Mind.html 
has just been updated to demonstrate
the claim that AI has been solved.

 to anyone even remotely interested.

Arthur
-- 
http://mentifex.virtualentity.com/mind4th.html 
http://mentifex.virtualentity.com/m4thuser.html 



Re: [agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread Ben Goertzel
These things of course require a balance.

In many academic or corporate fora, radical innovation is frowned upon
so profoundly (despite sometimes being praised and desired on the surface,
in a confused and not fully sincere way) that it's continually
necessary to remind people to open their minds and consider
the possibility that some of their assumptions are wrong.

OTOH, in **this** forum, we have a lot of openness and open-mindedness,
which is great ... but the downside is, people who THINK they have radical
new insights but actually don't tend to get a LOT of attention, often to
the detriment of more interesting, if less superficially radical,
discussions.

I do find that most posters on this list seem to have put a lot of thought
(as well as a lot of feeling) into their ideas and opinions.  However, it's
frustrating when people re-tread issues over and over in a way that demonstrates
they've never taken the trouble to carefully study what's been done before.

I think it can often be super-valuable to approach some issue afresh, without
studying the literature first -- so as to get a brand-new view.  But then,
before venting one's ideas in a public forum, one should check those ideas
against the literature (in the idea-validation phase, after the
idea-generation phase) to see whether they're original, whether they're
contradicted by well-thought-out arguments, etc.

-- Ben G

On Mon, Apr 14, 2008 at 9:54 AM, Bob Mottram [EMAIL PROTECTED] wrote:

 [snip: Bob's reply and Brad's original message, quoted in full above]


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Comments from a lurker...

2008-04-14 Thread Mark Waser
Steve  Perhaps you can relate your own experiences in this area.

Argument from Authority . . . . but what the heck . . . . :-)

Earliest scientific computing papers (one from the science side, one from the 
computing side)
  Computer Modeling of Muscle Phosphofructokinase Kinetics
  Journal of Theoretical Biology, Volume 103, Issue 2, 21 July 1983, Pages 
295-312
  Mark R. Waser, Lillian Garfinkel, Michael C. Kohn and David Garfinkel

  A Computer Program for Analyzing Enzyme Kinetic Data Using Graphical Display 
and Statistical Analysis
  Computers and Biomedical Research, Volume 17, Issue 3, June 1984, Pages 
289-301
  Serge D. Schremmer, Mark R. Waser, Michael C. Kohn and David Garfinkel
Hardware Integration Project - True Omni-font OCR device (1983-1984)
Developed software turning any Apple IIe and any fax machine into a 
true Omni-font OCR reader
pages were solved as cryptograms so even *random* fonts were 
interpretable
used 6502 assembly; unloaded the Apple IIe operating system as 
necessary (memory problems?  what memory problems?)

AI Project - Case Method Credit Expert System Shell & Builder (1984-1985)
Developed in Pascal for Citicorp's FastFinance Leasing System
Used by technophobic executives without any problems

AI Project - Expert System for Army Logistics Procurement (1986-1987)
Developed for/Deployed at Fort Belvoir, VA; Presented at Army Logistics 
Conference in Williamsburg
Part of the Project Manager's Support System

AI Project - Project Impact Advisor (1986-1987)
Rewrote boss's prototype system implemented in Lisp on special hardware 
as a PC-based Prolog system
Part of the Project Manager's Support System

AI/Hardware Project - Neural Network for Diagnosing Thallium Images of the 
Heart (1987-1988)
Successfully convinced top Air Force brass that Air Force doctors were 
misdiagnosing test pilot check-up images
Used Sigma Neural Network hardware boards

Hardware Project - Fax Network Switch (1990-1991)
Developed for/Deployed by the Australian Government/Embassy for all 
traffic between Canberra and Washington
Subsequently sold to Sony
Created multiple terminate-and-stay-resident programs to provide 
simultaneous 16-fax and dual T1-modem capability under MS-DOS
Used Brooktrout 4-port fax boards

Hardware Project - Secure Telephone Unit (1991-1992)
Developed initial prototype marrying COTS 80286 motherboard, modem,  
and TI TMS C32000 FPU with custom hardware and software
Enhanced and integrated commercially available TI TMS C32000 software 
for various voice codecs
Developed all control software (80286 assembly) 
Developed all software for debugging custom integrating hardware 
developed by other company employees

Hmmm . . . that's not even ten years with over fifteen to go . . . and I'm 
boring *myself* to tears despite skipping a bunch of non-relevant stuff . . . . 
 ;-)


Mark Good thing that you're smarter than that and know how to trash a machine 
so your stuff will work.
Steve Given that apparently no one else has been able to make commercial 
speech-to-text work with real-time AI, I'll accept that as a compliment. 

You shouldn't have.  It was pure sarcasm.  You need to look harder at what is 
available out there.  Real-time speech-to-text is not the problem (though the 
accuracy rate is still below what is to be preferred -- a problem which your 
solution does *NOT* address).  Fitting real-time speech-to-text into a small 
enough, friendly enough footprint to work with real-time AI is not the problem 
(although *you* do seem to be having problems doing it with a *GOOD* 
engineering solution).  Coming up with a worthwhile AI is the problem BUT I 
haven't seen any sign of such a thing from you. 


Steve  It is unclear what happened for you to make your comments in the tone 
that you used. On first glance it appears that you simply didn't carefully read 
the article. For example, did you notice that Nuance actually has a patent on 
how they suck up 100.0% of the CPU, leaving nothing for concurrent AI programs? 
How about constructively addressing the technical ISSUES instead of sounding 
like an idiot by making snide comments?

If you can't prevent a program from sucking up 100% of your CPU, you aren't 
competent to be working at this level.  There are *all sorts* of ways to stop 
evil behavior like this, including:
  a. pre-allocating memory to yourself (or your AI) before firing up the 
offending program
  b. replacing the operating system's pointers to the memory-allocation 
routines with your own routines, which then lie to the offender about the 
amount of memory available
  c. working on multiple linked boxes
The kludges that you are resorting to are just plain *BAD* engineering.  There 
are *ALWAYS* clean work-arounds -- if you're competent enough to find them.

Steve Then there is the fact that Dr. Eliza 

Re: [agi] Comments from a lurker...

2008-04-14 Thread Mark Waser
Well, that's embarrassing . . . . flame somebody and realize that you got part 
of it wrong yourself . . . . ;-)
__

Mark If you can't prevent a program from sucking up 100% of your CPU, you 
aren't competent to be working at this level.  There are *all sorts* of ways to 
stop evil behavior like this, including:
  a. pre-allocating memory to yourself (or your AI) before firing up the 
offending program 
  b. replacing the operating system's pointers to the memory-allocation 
routines with your own routines, which then lie to the offender about the 
amount of memory available 
  c. working on multiple linked boxes
Duh.  Nothing like proposing memory solutions for a CPU problem . . . .  ;-)

How about the easily applicable solutions (requiring no work on your part) of 
running multiple virtual machines on the same box OR, as proposed before, 
multiple linked boxes?
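
[Editorial illustration of the OS-level throttling being argued about here. This is a sketch under assumptions, not anything posted in the thread: it assumes a POSIX system, and the helper name `run_capped`, the niceness value, and the CPU cap are all made up for the example. It caps a greedy child process rather than letting it take 100% of the CPU.]

```python
import os
import resource
import subprocess
import sys

def run_capped(cmd, cpu_seconds=5, niceness=19):
    """Run cmd as a child process with minimum scheduling priority and a
    hard CPU-time cap, so it cannot monopolize the CPU indefinitely.
    POSIX-only: preexec_fn runs in the child before exec."""
    def limit():
        os.nice(niceness)  # lowest priority: other processes run first
        # The kernel terminates the child once it has consumed
        # cpu_seconds of CPU time.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(cmd, preexec_fn=limit)

if __name__ == "__main__":
    # A deliberately CPU-hungry child is killed by the kernel after
    # about one second of CPU time; a negative return code means it
    # was terminated by a signal.
    result = run_capped([sys.executable, "-c", "while True: pass"],
                        cpu_seconds=1)
    print("terminated by signal" if result.returncode < 0 else "exited")
```

The same effect can be had, as the message says, by running the greedy program in its own virtual machine with a capped vCPU share.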

  - Original Message - 
  From: Mark Waser 
  To: agi@v2.listbox.com 
  Sent: Monday, April 14, 2008 10:48 AM
  Subject: Re: [agi] Comments from a lurker...

  [snip: full quote of the earlier message]

Re: [agi] Between logical semantics and linguistic semantics

2008-04-14 Thread Stephen Reed
Lukasz,
Thanks for the information about Word Grammar which, for anyone else 
interested, is described here.

You asked:
(4) I'm interested in how you handle backtracking: giving up on
application of a construction when it leads to inconsistency.
Chart-based unification parsing can be optimized to share applications
of constructions which are parallel, and this can be extended to
operators which are (like unification) monotonic, i.e. cannot make an
unsatisfiable/inconsistent state a satisfiable/consistent one. Merging
conjoins new facts to old ones, so it is monotonic in monotonic
logics. (Default/defeasible logics are nonmonotonic.)



My first solution to this problem is to postpone it by employing a controlled 
English, in which such constructions will be avoided if possible.  Secondly, 
Jerry Ball demonstrated his solution in Double R Grammar at the 2007 AAAI Fall 
Symposium on Cognitive Approaches to NLP.  His slide presentation is here, 
which I think fully addresses your issues.  To summarize Dr. Ball's ideas, 
which I will ultimately adopt for Texai:

 - Serial processing [word-by-word parsing] with algorithmic backtracking has 
no hope of on-line, real-time processing in a large-coverage NLP system.
 - The solution is serial processing without backtracking.
 - If the current input is unexpected given the prior context, then 
accommodate the input by adjusting the representation [parse state] and 
coerce the input into the representation.
 - Dr. Ball gives as an example parsing the utterance "no airspeed or 
altitude restrictions". Upon processing the word "or", the conjunction is 
accommodated via function overriding in his grammar, not by backing up.
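
[Editorial illustration of the accommodation idea, not Dr. Ball's actual grammar: the function name, the flat conjunction representation, and the example are all assumptions made for the sketch. The point it shows is that an unexpected "or" restructures the representation in place, with no earlier token ever re-read.]

```python
def parse_no_backtrack(tokens):
    """Toy left-to-right parse that accommodates an unexpected
    conjunction by restructuring, instead of backtracking."""
    current = []   # phrase under construction
    tree = None    # conjunction node, if one was accommodated
    for tok in tokens:
        if tok == "or":
            # Accommodation: promote everything parsed so far into the
            # left conjunct. No earlier token is revisited.
            tree = {"conj": "or", "left": current}
            current = []
        else:
            current.append(tok)
    if tree is not None:
        tree["right"] = current
        return tree
    return current

parse_no_backtrack("no airspeed or altitude restrictions".split())
# yields a flat conjunction of ["no", "airspeed"] and
# ["altitude", "restrictions"]
```

The toy deliberately ignores the shared head noun ("restrictions" scopes over both conjuncts in the real analysis); it only illustrates the restructure-don't-backtrack control flow.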

(4a) Does the fact that your parser is incremental mean that you do
 early commitment to constructions? (Double R Grammar seems to
 support early commitment when there is choice, but backtracking is
 still needed to get an interpretation when there are only ones without
 it.)

  
Yes, my parser makes the earliest possible commitment to a construction, but I 
allow subsequent elaboration of constructions as new constituents are 
recognized.  For example, in my use-case sentence "the book is on the table", 
I recognize an initial Situation Referring Expression construction covering 
the partial utterance "the book is", which is elaborated to form the final 
Situation Referring Expression when the remaining utterance "on the table" is 
processed.
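
[Editorial sketch of the earliest-commitment-plus-elaboration scheme described above; this is not Texai's actual engine, and the trigger word, field names, and construction label are assumptions made for the illustration.]

```python
def incremental_parse(tokens):
    """Toy incremental parse: commit to a construction as soon as it is
    licensed, then elaborate it as further constituents arrive."""
    expr = None
    subject = []
    for tok in tokens:
        if expr is None and tok == "is":
            # Earliest possible commitment: "the book is" already
            # licenses a Situation Referring Expression.
            expr = {"construction": "SituationReferringExpression",
                    "subject": subject, "predicate": []}
        elif expr is None:
            subject.append(tok)
        else:
            # Subsequent constituents ("on the table") elaborate the
            # already-committed construction; nothing is re-parsed.
            expr["predicate"].append(tok)
    return expr

incremental_parse("the book is on the table".split())
```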

I regret that some aspects of my implementation are difficult to follow because 
I am using Jerry Ball's Double R Grammar, but not his ACT-R Lisp engine, using 
instead my own incremental, cognitively plausible version of Luc Steels's Fluid 
Construction Grammar engine.  I combined these two systems because Jerry Ball's 
engine is not reversible, Luc Steels's grammar does not give good coverage of 
English, and the otherwise excellent Fluid Construction Grammar engine is not 
incremental. 

-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Lukasz Stafiniak [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, April 13, 2008 3:04:07 PM
Subject: Re: [agi] Between logical semantics and linguistic semantics

 On Wed, Apr 9, 2008 at 6:03 AM, Stephen Reed [EMAIL PROTECTED] wrote:

 I would be interested
 in your comments on my adoption of Fluid Construction Grammar as a solution
 to the NL  to semantics mapping problem.

(1) Word Grammar (WG) is a construction-free version of your approach.
It is based solely on spreading activation. It doesn't have a sharp
separation of syntax and semantics: there's only one net. Nodes
representing subgraphs corresponding to constructions can be organized
into inheritance hierarchies (extensibility). But pure WG makes
things very awkward logics-wise, and making it work would be a lot of
research. (The WG book doesn't discuss utterance generation IIRC, but
reversing parsing-interpretation seems quite direct: select the most
activated word which doesn't have a left landmark, introduce a
word-instance node for it, and spread its activation through a
right-landmark edge, ignoring the direction of the landmark.) Texai is
impure by its very nature; perhaps it could be made more of a WG*FCG
mix than just sharing the spreading-activation idea.

(2) FCG is closer to traditional approaches a la computational
linguistics than WG.

(3) One could give up some FCG features to simplify it, for example by
assuming one-to-one correspondence between constructions and atomic
predicates.

[snip: point (4), quoted at the top of this message]

Re: [agi] Comments from a lurker...

2008-04-14 Thread Mark Waser
ROTFLMAO!  Excellent!  Thank you.
  - Original Message - 
  From: Steve Richfield 
  To: agi@v2.listbox.com 
  Sent: Sunday, April 13, 2008 6:09 PM
  Subject: Re: [agi] Comments from a lurker...


  Mark,


  On 4/13/08, Mark Waser [EMAIL PROTECTED] wrote: 
 I then asked if anyone in the room had a 98.6F body temperature, and NO 
ONE DID. 

Try this in a room with normal people.

  ~3/4 of the general population reaches ~98.6F sometime during the day. The 
remaining 1/4 of the population have a varying assortment of symptoms generally 
in the list of hypothyroid symptoms, even though only about 1/4 of those 
people have any thyroid-related issues. Then look at the patients who enter the 
typical doctor's practice. There, it is about 50% each way. Then, look at the 
patients in a geriatric practice, where typically NONE of the people reach 
98.6F anytime during the day.


You'll get almost the same answer.  98.6 is just the Fahrenheit value of a 
rounded Celsius value -- not an accurate gauge.

  Wrong.  Healthy people quickly move between set points at ~97.4F, ~98.0F, and 
98.6F. However, since medical researchers aren't process control people, they 
have missed the importance of this little detail.


My standard temperature is 96.8 -- almost two degrees low -- and this is 
perfectly NORMAL.

  Thereby demonstrating the obsolescence of your medical information.
   
  NOW I understand! Simply resetting someone from a 97-something temperature to 
98.6F results in something like another ~20 IQ points. People usually report 
that it feels like waking up, perhaps for the first time in their entire 
lives. I can hardly imagine the level of impairment that you must be working 
through. NO WONDER you didn't see the idiocy of making your snide comments.


Any good medical professional understands this.

  Only if they have gray hair.

  This all comes from an old American Thyroid Association study that was 
published in JAMA to discredit Wilson's Thyroid Syndrome (Now Wilson's 
Temperature Syndrome, which has since been largely discredited for other 
reasons) that my article references. There, many healthy people had their 
temperatures taken at 8:00AM, and they found three groups:
  1.  People who were ~97.4F
  2.  People who were ~98.6F
  3.  People who were somewhere in between.

  However, if you take a healthy person and plot their temperature through the 
day, you find that they sleep at 97.4F, and pop up to 98.6F sometime during the 
first 3 hours after waking up. In short, the ATA study was ENTIRELY consistent 
with my model and observations. However, inexplicably, the authors concluded 
that people don't have any set temperature, without providing any explanation 
as to how they reached that conclusion.

  However, YOUR temperature is REALLY anomalous and WAY outside the range of 
the ATA's study, and possibly consistent with serious hypothyroidism. Have you 
had your TSH tested yet? If not, then fire your present incompetent doctor and 
find a board-certified endocrinologist.

Don't criticize others for your assumptions of what they believe.

  Why not, when I have read the articles, tested dozens of healthy (and many 
more unhealthy) people myself, and seen that in light of the observable facts, 
that some conventional medical dogma absolutely MUST be wrong.
   
  Please, please get your temperature fixed before making any more snide 
postings here. I find your snide comments to be painful, and I strongly suspect 
that you too will see the errors of your ways and correct them when you finally 
wake up as discussed above.

  Steve Richfield



[agi] He Wrote 200,000 Books (but Computers Did Some of the Work)

2008-04-14 Thread Mark Waser
http://www.nytimes.com/2008/04/14/business/media/14link.html?_r=1oref=slogin



Re: [agi] He Wrote 200,000 Books (but Computers Did Some of the Work)

2008-04-14 Thread Stephen Reed
Publishing computer-generated books on demand, aggregating many small profits, 
is an interesting illustration of The Long Tail.  

Considering an AGI, I anticipate that knowledge and skill acquisition will be 
facilitated by this principle.  Obscure knowledge and skills can be acquired 
from, and delivered to, befriended users if the cost is sufficiently low (e.g. 
free).  A related economic principle that might interest readers is Wikinomics.
 
-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, April 14, 2008 11:29:12 AM
Subject: [agi] He Wrote 200,000 Books (but Computers Did Some of the Work) 

 
http://www.nytimes.com/2008/04/14/business/media/14link.html?_r=1oref=slogin
 



Re: [agi] He Wrote 200,000 Books (but Computers Did Some of the Work)

2008-04-14 Thread Bob Mottram
This reminds me of Rod Brooks saying that AGI may already be here but nobody
has noticed it yet.  With an AGI running a nice little business for you
there may be no great incentive to advertise the fact openly to the world.

If done well, with a suitably flexible AI, this kind of automatic content
generation could become a whole new business model.  One potentially
lucrative market: aging and somewhat vain individuals who want to create
their own autobiographies, using a semi-automated software system to help
generate the book, which they can then leave to their descendants.



[agi] Logical Satisfiability...Get used to it.

2008-04-14 Thread Jim Bromer
Ben G wrote: 

FWIW, I wasn't joking about your algorithm's putative
divine inspiration in my role as moderator, but rather in my role
as individual list participant ;-)

Sorry that my sense of humor got on your nerves. I've had that effect
on people before!

Really though: if you're going to post messages in forums populated
by scientific rationalists, claiming divine inspiration for your ideas, you
really gotta expect **at minimum** some good-natured ribbing... !
-- Ben G


I appreciate the fact that you did not intend your comments to be
mean-spirited and that you were speaking as a participant, not as the
moderator.  I also appreciate the fact that Waser realized that I
misunderstood his comment and made that clear.

I have annoyed quite a few people myself.  I am a little too critical
at times, but my criticisms are usually intended to provoke a deeper
examination of an idea.  (I only rarely use criticism as a tool of
wanton destruction!)

Concerning beliefs and scientific rationalism: beliefs are the basis
of all thought.  To imply that religious belief might be automatically
different from rational beliefs is naïve.  However, I think there is
an advantage in defining what a rational thought is relative to AI
programming, and in how scientific rationalism differs from simple
rationalism.  I am going to write a few messages about this when I get
a chance.

By the way, I don't really see how a simple n^4 or n^3 SAT solver in
itself would be that useful for any immediate AGI project, but the
novel logical methods that the solver will reveal may be more
significant.

Jim Bromer



Re: [agi] Between logical semantics and linguistic semantics

2008-04-14 Thread Lukasz Stafiniak
On Mon, Apr 14, 2008 at 5:14 PM, Stephen Reed [EMAIL PROTECTED] wrote:

 My first solution to this problem is to postpone it by employing a
 controlled English, in which such constructions will be avoided if possible.
 Secondly, Jerry Ball demonstrated his solution in Double R Grammar at the
 2007 AAAI Fall Symposium, Cognitive Approaches to NLP.  His slide
 presentation is here, which I think fully addresses your issues.  To
 summarize Dr. Ball's ideas, which I will ultimately adopt for Texai:

Thanks, very interesting slides. I think he forgets to mention
Dynamic Syntax (Ruth Kempson, Dov Gabbay).

 Serial processing [word by word parsing] with algorithmic backtracking has
 no hope for on-line processing in real-time in a large coverage NLP system.

I think that the Double R accommodation approach can be approximated by
incremental right-to-left parsing, something along the lines of
http://www.speagram.org/wiki/Grammar/ChartParser, but it still needs much
work (the approach was developed when I was in a computational-semantics
phase; it ignores cognitive linguistics, and is too fragmented: only
categorical semantics (and agreement, by use of variables in types) are
processed, with relational and referential semantics postponed to later
stages).  The upside is that it can handle general context-free grammars.
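This is not Łukasz's incremental right-to-left scheme, but the chart idea it builds on can be sketched with a standard CKY parser over a binary (Chomsky-normal-form) grammar; the toy lexicon and rules below are invented for illustration:

```python
# Toy CKY chart parser for a CNF context-free grammar.
# lexicon: word -> set of categories; rules: (B, C) -> set of A, for A -> B C.

def cky_parse(words, lexicon, rules):
    n = len(words)
    # chart[i][j] holds the categories spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, word in enumerate(words):
        chart[i][i + 1] = set(lexicon.get(word, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for b in chart[i][k]:
                    for c in chart[k][j]:
                        chart[i][j] |= rules.get((b, c), set())
    return chart[0][n]

# Toy grammar: S -> NP VP, NP -> Det N; "barks" is seeded as both V and VP.
lexicon = {"the": {"Det"}, "dog": {"N"}, "barks": {"V", "VP"}}
rules = {("Det", "N"): {"NP"}, ("NP", "VP"): {"S"}}
print(cky_parse("the dog barks".split(), lexicon, rules))  # -> {'S'}
```

CKY fills the chart bottom-up over whole spans; an incremental variant would instead extend the chart one word at a time, with an accommodation pass between increments.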

I didn't know that Microsoft uses some kind of right-to-left parsing;
I thought it was my invention :-)

 I regret that some aspects of my implementation are difficult to follow
 because I am using Jerry Ball's Double R Grammar, but not his ACT-R Lisp
 engine, using instead my own incremental, cognitively plausible version of
 Luc Steels' Fluid Construction Grammar engine.  I combined these two systems
 because Jerry Ball's engine is not reversible, Luc Steels' grammar is not a
 good coverage of English, and the otherwise excellent Fluid Construction
 Grammar engine is not incremental.

 -Steve

Perhaps you could get some linguist to capitalize on your work with a
publication?

Best Regards,
Łukasz



Re: [agi] Between logical semantics and linguistic semantics

2008-04-14 Thread Lukasz Stafiniak
2008/4/14 Lukasz Stafiniak [EMAIL PROTECTED]:
 On Mon, Apr 14, 2008 at 5:14 PM, Stephen Reed [EMAIL PROTECTED] wrote:
  
   Serial processing [word by word parsing] with algorithmic backtracking has
   no hope for on-line processing in real-time in a large coverage NLP system.

  I think that Double R accomodation approach can be approximated by
  incremental right-to-left parsing. Something along the lines of
  http://www.speagram.org/wiki/Grammar/ChartParser but still needs much

If you're confused by the equations: increments are left-to-right, and
between increments there's a right-to-left, accommodation-like stage.



Re: [agi] Between logical semantics and linguistic semantics

2008-04-14 Thread Stephen Reed
Łukasz wrote:

Perhaps you could get some linguist to capitalize on your work with a 
publication?

 
Coincidentally, my abstract for the Fifth International Conference on 
Construction Grammar has been accepted for presentation at its poster session 
this September.  Because the conference this year is to be held in Austin, I 
can easily attend.  My work on Construction Grammar has previously been 
discussed with Dr. Hans Boas at UT Austin, with Dr. Jerry Ball while attending 
the 2007 AAAI Fall Symposium, and with the technical staff at Cycorp in Austin, 
which included two Ph.D. computational linguists.

Here is a link to the brief abstract.

I hope that a successful demonstration of the Texai bootstrap English dialog 
system, sometime this year, will draw attention to construction grammar for 
natural language processing when semantics are the focus of the application.  
Likewise, I hope to release an open-source version of the Double R Grammar that 
has a good coverage of English.

Reaching out to linguists, I have already released on SourceForge what I 
believe is the world's largest freely available, open-source lexicon, derived 
from WordNet 2.1, Wiktionary, the CMU Pronouncing Dictionary, and the OpenCyc 
lexicon.  I announced this on the Linguist List.

Cheers,
-Steve

Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860









  




Re: [agi] Comments from a lurker...

2008-04-14 Thread Matt Mahoney

--- Steve Richfield [EMAIL PROTECTED] wrote:

 Why go to all that work?! I have attached the *populated* Knowledge.mdb file
 that contains the knowledge that powers the chronic illness demo of Dr.
 Eliza. To easily view it, just make sure that any version of MS Access is
 installed on your computer (it is in Access 97 format) and double-click on
 the file. From there, select the Tables tab, and click on whatever table
 interests you.

I looked at your file.  Would I be correct that, if I described a random health
problem to Dr. Eliza, it would suggest that my problem is due to one of:

- Low body temperature
- Fluorescent lights
- Consuming fructose in the winter
- Mercury poisoning from amalgam fillings and vaccines
- Aluminum cookware
- Hydrogenated vegetable oil
- Working a night shift
- Aspirin (causes macular degeneration)
- Or failure to accept divine intervention?

Is that it, or is there a complete medical database somewhere, or the
capability of acquiring this knowledge?  Do you have a medical background, or
have you consulted with doctors in building the database?

BTW, regarding processes that use 100% of the CPU in Windows: did you try
Ctrl-Alt-Del to bring up the Task Manager, then right-click on the process and
change its priority?




-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Comments from a lurker...

2008-04-14 Thread Mike Dougherty
On Mon, Apr 14, 2008 at 4:17 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
  You've merely been a *TROLL* and gotten the appropriate response.  Thanks
 for playing but we have no parting gifts for you.

 Who is the "we" you are referencing? Do you have a mouse in your pocket, or
 is that the Royal "we"?  YOU are the only snide asshole/troll whom I have
 had the displeasure of observing on this forum. Can you point to anyone ELSE
 here who acts as you do?

I don't want to participate in calling anyone a troll.  From what I have
observed of Matt's online presence, he was giving you an opportunity
to disprove the troll status rather than transparently ignoring you.
I'm guessing he'll simply give up soon.

I have little interest in downloading your software and tables and an
arcane how-to for making it all work.  In my opinion, you really can't
call your product AGI until I can converse with it directly, either
via its own email address or (for a 'real-time' Turing test) an IRC
channel.

How difficult would it be for you to extend the Dr Eliza interface
with an IRC bot frontend?

If it is as accurate as you claim, it might help a lot more people by
dispensing "see a REAL doctor to get X checked out" advice than as ... well,
whatever it is now.

Even with an accuracy rate that exceeds the average doctor's, I'll be as
likely to dismiss it as I would dismiss a real doctor - but the
machine doesn't need to play golf or drive expensive cars, so it can
devote the time that people can't (or won't).  [I had a doctor say,
"Your iron level is too low, eat more red meat," followed immediately
by, "Your cholesterol is too high, eat less red meat."  I was
thinking, "Your diagnosis is unusable, I want my co-pay back."]



Re: [agi] Logical Satisfiability...Get used to it.

2008-04-14 Thread Charles D Hixson

Jim Bromer wrote:

Ben G wrote: ...
...
Concerning beliefs and scientific rationalism: beliefs are the basis
of all thought.  To imply that religious belief might be automatically
different from rational beliefs is naïve.  However, I think there is
an advantage in defining what a rational thought is relative to AI
programming, and in how scientific rationalism differs from simple
rationalism.  I am going to write a few messages about this when I get
a chance.

By the way, I don't really see how a simple n^4 or n^3 SAT solver in
itself would be that useful for any immediate AGI project, but the
novel logical methods that the solver will reveal may be more
significant.

Jim Bromer
  
But religious beliefs *ARE* intrinsically different from rational 
beliefs.  They aren't the only such beliefs, but they are among them.  
Rational beliefs MUST be founded in other beliefs.  Rationalism does not 
provide a basis for generating beliefs ab initio, but only via reason, 
which requires both axioms and rules of inference.  (NARS may have 
eliminated the axioms, but I doubt it.  OTOH, I don't understand exactly 
how it works yet.)


Religion and other intrinsic beliefs are inherent in the construction of 
humans.  I suspect that every intelligent entity will require such 
beliefs.  Which particular religion is believed in isn't inherent, but 
situational.  (Other factors may enter in, but I would need a clear 
explication of how that happened before I would believe it.)  Note 
that another inherent belief is "People like me are better than people 
who are different."  The fact that a belief is inherent doesn't mean it 
can't be overcome (or at least subdued) by counter-programming, merely 
that one will need to continually guard against it, or it will re-assert 
itself even if you know that it's wrong.


Saying that a belief is non-rational isn't denigrating it.  It's merely 
a statement that it isn't derived from a built-in rule.  Even the persistence 
of forms doesn't seem to be totally built in, though there are definitely 
lots of mechanisms that will tend to create it.  So in that case what's 
built in is a tendency to perceive the persistence of objects.  In the 
case of religion it's a bit more difficult to perceive what the built-in 
process is.  Plausibly it's a combination of several tendencies to 
perceive patterns shaped like ... in the world that aren't 
intrinsically connected, but which have been connected by culture.  Or 
it might be something else.  (The "blame/attribute everything to the big 
alpha baboon" theory isn't totally silly, but I find it unsatisfactory.  
It's at most a partial answer.)





Re: [agi] Comments from a lurker...

2008-04-14 Thread Steve Richfield
Mike,

On 4/14/08, Mike Dougherty [EMAIL PROTECTED] wrote:

 I have little interest in downloading your software and tables and
 arcane howto for making it all work.  In my opinion, you really can't
 call your product AGI until I can converse with it directly - either
 via it's own email address or (for a 'real-time' Turing test) an IRC
 channel.


I will concede that AGI has little interest in Dr. Eliza, and I have little
interest in AGI as it seems to be individually defined here. Hence, I plead
no contest to this statement.



 How difficult would it be for you to extend the Dr Eliza interface
 with an IRC bot frontend?


I have looked extensively at this. There are a number of issues:

1.  It won't be widely useful without a LOT more knowledge. Remember my
choice of CHRONIC illness - those conditions that doctors can do
little or nothing for, yet which, with the advancement of various sorts of
alternative health care, often DO have effective interventions.
People having these conditions typically fall into particular social
categories:
a.  The elderly, many of whom won't talk with anyone who doesn't have an MD.
If they had only gone down to the nearest clinic and seen the naturopath on
duty, many of them wouldn't have their chronic condition.
b.  The poor, who can only qualify for mainstream MD care without paying for
it themselves, and who don't have the money for such risky investments.
c.  In any case, most people with chronic health conditions do NOT have
Internet access!

2.  Conversing with Dr. Eliza can be frustrating, because it insists on
talking about whatever it sees as pivotal and has no internal ability to
converse about whatever the patient thinks is important. More
often than not, some passing, indirect mention of a seemingly irrelevant
symptom will turn out to be the clue that puts it all together, so Dr. Eliza
may start asking about that symptom to make sure that it is real, since so
much depends on it. I really can't imagine Dr. Eliza ever competing for ANY
Turing-related prize, because it so completely lacks the personal touch.

3.  My present front-runner plan is to have it lurk on many health-related
sites, analyze every posting, and wait until it sees enough to be really sure
about saying something (i.e., enough to propose a complete cure), and then
post questions or replies as appropriate. Alternatively, it could service
emails, which encourage people to write carefully thought-out problem
statements.
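The lurk-and-reply behavior described above rests on keyword-rule matching over a problem statement. A rough sketch of that loop (the rules, tags, and follow-up questions here are invented for illustration, not taken from Dr. Eliza's actual knowledge base):

```python
# Minimal keyword-rule matcher in the spirit of the Dr. Eliza description:
# scan a problem statement, record which rule fired as accumulated evidence,
# and return that rule's follow-up question (or a default prompt).

import re

# Each rule: (keyword pattern, evidence tag, follow-up question) -- all invented.
RULES = [
    (re.compile(r"\bcold (hands|feet)\b", re.I), "low_body_temp",
     "How does your morning oral temperature compare to 98.6 F?"),
    (re.compile(r"\btired\b|\bfatigue\b", re.I), "fatigue",
     "Does the fatigue follow meals or a night shift?"),
]

def respond(statement, evidence):
    """Fire the first matching rule, accumulate its tag, ask its follow-up."""
    for pattern, tag, follow_up in RULES:
        if pattern.search(statement):
            evidence.add(tag)
            return follow_up
    return "Tell me more about your symptoms."

evidence = set()
print(respond("I am always tired and have cold feet", evidence))
print(evidence)
```

A real system would of course weigh many rules at once and only "say something" when the accumulated evidence crosses a confidence threshold, per the plan above.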

 If it is as accurate as you claim,


Obviously, it is no better than its knowledge base.



 it might help a lot more people by
 dispensing see a REAL doctor to get X checked out than as ... well,
 whatever it is now.


I agree.

An alternative plan that might be worth a LOT of money is to forge a
relationship with a nationwide medical provider like Group Health. Dr. Eliza
is pretty good at dragging out the details even if you don't look at its
opinions about them. If you like the advice and it requires medication, then
just click the button and show up at the Group Health pharmacy, show your
ID, and pick up your meds. If you reject its advice, at least your doctor
can read the health statement a LOT faster than he can listen to you talk.
No matter what happens, the provider would come out ahead.

There are a number of political pitfalls in this, but I am still looking for
just the right provider to do this with.

Even with an accuracy rate that exceeds average doctors, I'll be as
 likely to dismiss it as I would dismiss a real doctor - but the
 machine doesn't need to play golf or drive expensive cars so it can
 devote the time that people can't (or won't).


The whole thing hinges on *difficult* problems, *chronic* illnesses,
etc. If your doctor can fix a problem, then you don't need Dr. Eliza, though
the price is certainly right. However, when your doctor tells you to cancel
your magazine subscriptions, as mine once did, then at least some people
open their minds to alternative advice.

[I had a doctor say,
 Your iron level is too low, eat more red meat. followed immediately
 with, Your cholesterol is too high, eat less red meat.


Please excuse me for a moment while I change hats...

Iron (a pure free radical) levels are regulated by your central metabolic
control system to keep the total free-radical level where it wants it to be.
Most doctors offer such opinions without testing, and sometimes the levels
are low FOR A GOOD REASON. One fellow from Australia was downwind of some
British A-bomb tests, so he was full of free radicals from the fallout.
His iron levels were regulated to be low. He could (and did, for a while)
eat iron pills like candy, and his levels didn't move a bit.

Most of the iron hype is decades obsolete, and comes from old Geritol
ads. I presume that your doctor had his/her share of gray hair?

Drug companies have literally bought and paid for laboratories to lower the
normal range for cholesterol in order to sell more pills.