[agi] Tesla Journal Submission: Mentifex Mad Science

2010-08-07 Thread A. T. Murray
Mad Science Theory-Based Artificial Intelligence 

Abstract 

The patient insists that he has created an 
artificial Mind, a virtual entity capable of 
abstract thought and self-awareness. Further, 
his research is too dangerous to be published 
outside of the Tesla Journal, because Mentifex 
AI leads inexorably to an Extinction Level Event 
(ELE) known as the Technological Singularity. 
Crazies and mountebanks have flocked to the 
growing vanguard of self-styled Singularitarians, 
Transhumanists, Extropians, Netkooks, Lambda-
Calculoids, Associate Professors, Double-Baggers, 
AI Fellows, Earth-Firsters, Neats and Scruffies, 
Idiot Savants and Boulevardier Poseurs ad nauseam 
et ad infinitum. Various camps come together 
annually at the Rainbow Gathering, the 
Singularity Summit, and the Indianapolis 500. 

http://aicookbook.com/wiki/Main_Page 

http://lists.extropy.org/pipermail/extropy-chat/2010-August/ 

http://practicalai.org 

http://www.scn.org/~mentifex/tesla.html 

http://www.teslajournal.com 


Mentifex Mad Scientist
-- 
Mad people of comp.lang.lisp 
http://www.tfeb.org/lisp/mad-people.html


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] $35 ( 2GB RAM) it is

2010-08-07 Thread deepakjnath
This was done at a university in my city! :) That is our Education Minister
:)

cheers,
Deepak

On Sat, Aug 7, 2010 at 6:04 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

 http://shockedinvestor.blogspot.com/2010/07/new-35-laptop-unveiled.html






-- 
cheers,
Deepak





Re: [agi] $35 ( 2GB RAM) it is

2010-08-07 Thread Mike Tintner
Sounds like a great achievement - or not?


From: deepakjnath 
Sent: Saturday, August 07, 2010 2:55 PM
To: agi 
Subject: Re: [agi] $35 ( 2GB RAM) it is







Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Steve Richfield
John,

You brought up some interesting points...

On Fri, Aug 6, 2010 at 10:54 PM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
  On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com
  wrote:
  statements of stupidity - some of these are examples of cramming
  sophisticated thoughts into simplistic compressed text.
 
  Definitely, as even the thoughts of stupid people transcend our (present)
  ability to state what is happening behind their eyeballs. Most stupidity
  is probably beyond simple recognition. For the initial moment, I was just
  looking at the linguistic low-hanging fruit.

 You are talking about those phrases; some are clichés,


There seems to be no clear boundary between clichés and other stupid
statements, except maybe that clichés are quoted exactly, like "that's just
your opinion," while other statements are grammatically adapted to fit the
sentences and paragraphs that they inhabit.

Dr. Eliza already translates idioms before processing. I could add clichés
without changing a line of code; e.g., "that's just your opinion" might
translate into something like "I am too stupid to understand your
explanation."

Dr. Eliza has an extensive wildcard handler, so it should be able to handle
the majority of grammatically adapted statements in the same way, by simply
including appropriate wildcards in the pattern.
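That translate-before-processing pass could be sketched as a small pattern table. The patterns, wildcard style, and translations below are illustrative guesses, not Dr. Eliza's actual tables:

```python
import re

# Hypothetical cliche table: pattern -> presumed underlying meaning.
# "\w+" and "(.+)" play the role of wildcards in the pattern handler.
CLICHE_PATTERNS = [
    (r"that's just \w+ opinion",
     "I am too stupid to understand your explanation."),
    (r"i had no choice but to (.+)",
     r"I had no apparent rational choice but to \1."),
]

def translate_cliches(sentence):
    """Rewrite a known cliche into its presumed underlying meaning."""
    normalized = sentence.lower().strip(" .!")
    for pattern, replacement in CLICHE_PATTERNS:
        match = re.fullmatch(pattern, normalized)
        if match:
            return match.expand(replacement)
    return sentence  # unknown text passes through unchanged

print(translate_cliches("That's just your opinion."))
# -> I am too stupid to understand your explanation.
```

In the real program the wildcard handler is presumably richer; the point is only that each cliché can be rewritten before parsing without touching existing code paths.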

are like local K
 complexity minima, in a knowledge graph of partial linguistic structure,
 where neural computational energy is preserved, and the statements are
 patterns with isomorphisms to other experiential knowledge intra and inter
 agent.


That is, other illogical misunderstandings of the real world, which are
probably NOT shared with more intelligent agents. This presents a serious
problem for understanding by more intelligent agents.

More intelligent agents have ways of working more optimally with the
 neural computational energy, perhaps by using other more efficient patterns
 thus avoiding those particular detrimental pattern/statements.


... and this presents a communications problem with agents of radically
different intelligences, both greater and lesser.


 But the
 statements are catchy because they are common and allow some minimization
 of computational energy, as well as being like objects in a higher-level
 communication protocol. Storing them takes fewer bits, and transferring
 them takes fewer bits per second.


However, they have negative information content - if that is possible -
because they require a false model of the world to process, and they
produce completely erroneous results. Of course, despite these problems,
they DO somewhat accurately communicate the erroneous nature of the
thinking, so there IS some value there.
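The fewer-bits observation above can be made concrete with a toy substitution coder; the phrase table and symbols are invented for illustration:

```python
# Toy substitution coder: common stock phrases become one-token symbols,
# so both storage and transfer take fewer bits. Invented example table.
PHRASEBOOK = {
    "that's just your opinion": "§1",
    "i had no choice but to": "§2",
}

def compress(text):
    """Replace each known stock phrase with its short symbol."""
    for phrase, symbol in PHRASEBOOK.items():
        text = text.replace(phrase, symbol)
    return text

message = "i had no choice but to agree"
print(compress(message))  # -> §2 agree
print(len(message), len(compress(message)))  # the symbolized form is shorter
```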


 Their impact is maximal since they are isomorphic across
 knowledge and experience.


... the ultimate being: "Do, or do not. There is no try."


 At some point they may just become symbols due to
 their pre-calculated commonness.


Egad, symbols to display stupidity. Could linguistics have anything that is
WORSE?!


   Language is both intelligence enhancing and limiting. Human language is a
   protocol between agents. So there is minimalist data transfer; "I had no
   choice but to ..." is a compressed summary of potentially vastly complex
   issues.

   My point is that they could have left the country, killed their
   adversaries, taken on a new ID, or done any number of radical things that
   they probably never considered, other than taking whatever action they
   chose to take. A more accurate statement might be "I had no apparent
   rational choice but to ..."

 The other low probability choices are lossily compressed out of the
 expressed statement pattern. It's assumed that there were other choices,
 usually factored in during the communicational complexity related
 decompression, being situational. The onus at times is on the person
 listening to the stupid statement.


I see. This example was in reality gapped, an ellipsis, where reasonably
presumed words were omitted. These are always a challenge, except in common
cases like clichés, where the missing words can be automatically inserted.
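That automatic insertion could be sketched as a lookup against a hand-built table of known gapped clichés; the table here is an invented illustration, not anything from Dr. Eliza:

```python
# Hypothetical table of gapped cliches and their presumed full forms.
ELLIPSIS_EXPANSIONS = {
    "when in rome": "when in Rome, do as the Romans do",
    "if you can't beat them": "if you can't beat them, join them",
}

def expand_ellipsis(phrase):
    """Insert the reasonably presumed missing words for a known cliche."""
    key = phrase.lower().strip(" .!")
    return ELLIPSIS_EXPANSIONS.get(key, phrase)  # unknown gaps stay as-is

print(expand_ellipsis("When in Rome..."))
# -> when in Rome, do as the Romans do
```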

Thanks again for your thoughts.

Steve
=


  The mind gets hung-up sometimes on this language of ours. Better off at
  times to think less using English language and express oneself with a
 wider
  spectrum communiqué. Doing a dance and throwing paint in the air for
  example, as some *primitive* cultures actually do, conveys information
  also, and is a medium of expression rather than using a restrictive human
  chat protocol.
 
  You are saying that the problem is that our present communication permits
  statements of stupidity, so we shouldn't have our present system of
  communication? Scrap English?!!! I consider statements of stupidity as a
 sort
  of communications checksum, to see if real interchange of ideas is even
  possible. Often, it is 

Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Ian Parker
I wanted to see what other people's views were. My own view of the risks is
as follows. If the Turing Machine is built to be as isomorphic with humans
as possible, it would be incredibly dangerous. Indeed, I feel that the
biological model is far more dangerous than the mathematical one.

If, on the other hand, the TM were *not* isomorphic and made no attempt to
be, the dangers would be a lot less. Most Turing/Löbner entries are
chatterboxes that work on databases, the database being filled as you chat.
Clearly the system cannot go outside its database and is safe.
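A minimal sketch of such a database-backed chatterbox, assuming (as above) that its replies are limited to associations stored while chatting; all names here are illustrative:

```python
class Chatterbox:
    """Toy chatterbox whose database is filled as you chat."""

    def __init__(self):
        self.database = {}  # prompt -> reply, learned from conversation

    def teach(self, prompt, reply):
        self.database[prompt.lower()] = reply

    def respond(self, prompt):
        # The system cannot go outside its database: unknown prompts
        # get a stock deflection instead of any generated behavior.
        return self.database.get(prompt.lower(), "Tell me more.")

bot = Chatterbox()
bot.teach("Hello", "Hi there.")
print(bot.respond("hello"))      # -> Hi there.
print(bot.respond("plan an attack"))  # -> Tell me more.
```

The safety argument in the text corresponds to `respond` having no code path other than the dictionary lookup.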

There is in fact some use for such a chatterbox. Clearly a Turing machine
would be able to infiltrate militant groups however it was constructed. As
for it pretending to be stupid, it would have to know in what direction it
had to be stupid. Hence it would have to be a good psychologist.

Suppose it logged onto a jihadist website: as well as being able to pass
itself off as a true adherent, it could also look at the other members and
assess their level of commitment and knowledge. I think that the true
Turing/Löbner test is not working in a laboratory environment; rather, the
entries should log onto jihadist sites and see how well they can pass
themselves off. If one could do that, it really would have arrived.
Eventually it could pass itself off as a *pentito*, to use the Mafia term,
and produce arguments from the Qur'an against the militant position.

There would be quite a lot of contracts to be had if there were a realistic
prospect of doing this.


  - Ian Parker

On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

  Philosophical question 2 - Would passing the TT assume human stupidity
 and
  if so would a Turing machine be dangerous? Not necessarily, the Turing
  machine could talk about things like jihad without
 ultimately identifying with
  it.
 

 Humans without augmentation are only so intelligent. A Turing machine would
 be potentially dangerous, a really well built one. At some point we'd need
 to see some DNA as ID of another extended TT.

  Philosophical question 3 :- Would a TM be a psychologist? I think it
 would
  have to be. Could a TM become part of a population simulation that would
  give us political insights.
 

 You can have a relatively stupid TM or a sophisticated one just like
 humans.
 It might be easier to pass the TT by not exposing too much intelligence.

 John

  These 3 questions seem to me to be the really interesting ones.
 
 
- Ian Parker










Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Matt Mahoney
Wouldn't it depend on the other researcher's area of expertise?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Ben Goertzel b...@goertzel.org
To: agi agi@v2.listbox.com
Sent: Sat, August 7, 2010 9:10:23 PM
Subject: [agi] Help requested: Making a list of (non-robotic) AGI low hanging 
fruit apps

Hi,

A fellow AGI researcher sent me this request, so I figured I'd throw it
out to you guys


I'm putting together an AGI pitch for investors and thinking of low
hanging fruit applications to argue for. I'm intentionally not
involving any mechanics (robots, moving parts, etc.). I'm focusing on
voice (i.e. conversational agents) and perhaps vision-based systems.
Helen Keller AGI, if you will :)

Along those lines, I'd like any ideas you may have that would fall
under this description. I need to substantiate the case for such AGI
technology by making an argument for high-value apps. All ideas are
welcome.


All serious responses will be appreciated!!

Also, I would be grateful if we
could keep this thread closely focused on direct answers to this
question, rather than
digressive discussions on Helen Keller, the nature of AGI, the definition of AGI
versus narrow AI, the achievability or unachievability of AGI, etc.
etc.  If you think
the question is bad or meaningless or unclear or whatever, that's
fine, but please
start a new thread with a different subject line to make your point.

If the discussion is useful, my intention is to mine the answers into a compact
list to convey to him.

Thanks!
Ben G







Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Ben Goertzel
His request explicitly said he is focusing on voice and vision.  I think
that is enough specificity...

ben

On Sat, Aug 7, 2010 at 9:22 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Wouldn't it depend on the other researcher's area of expertise?


 -- Matt Mahoney, matmaho...@yahoo.com






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

"I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too." -- Fyodor Dostoevsky





Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Abram Demski
Ben,

-The oft-mentioned stock-market prediction;
-data mining, especially for corporate data such as customer behavior, sales
prediction, etc;
-decision support systems;
-personal assistants;
-chatbots (think: an iPod that talks to you when you are lonely);
-educational uses including human-like artificial teachers, but also
including smart presentation-of-material software which decides what
practice problem to ask you next, when to give tips, etc;
-industrial design (engineering);
...

Good luck to him!

--Abram

On Sat, Aug 7, 2010 at 9:10 PM, Ben Goertzel b...@goertzel.org wrote:





-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Mike Tintner

Why don't you kick it off with a suggestion of your own?

(I think there are only lower/basic *robotic* AGI apps - and suggest no one
will come up with any answers for you. Why don't you disprove me?)


--
From: Ben Goertzel b...@goertzel.org
Sent: Sunday, August 08, 2010 2:10 AM
To: agi agi@v2.listbox.com
Subject: [agi] Help requested: Making a list of (non-robotic) AGI low 
hanging fruit apps










Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread David Jones
Hey Ben,

Faster, cheaper, and more robust 3D modeling for the movie industry. The
modeling allows different sources of video content to be extracted from
scenes, manipulated and mixed with others.

The movie industry has the money and motivation to extract data from images.
Making it easier, more robust and cheaper could drive innovation and
progress.

Why is it AGI-related? Because AGI requires knowledge. Knowledge can be
extracted from facts about the world. Facts can be extracted from images in
a general way using a limited set of algorithms and concepts.

Some say that computer vision is AI-complete and requires knowledge to do.
But I have to disagree. Given sufficient data and good images from multiple
cameras or devices, very accurate 3D models can be extracted from
unambiguous data. If this were AI-complete and required knowledge, that
would not be possible.
Dave

On Sat, Aug 7, 2010 at 9:10 PM, Ben Goertzel b...@goertzel.org wrote:







Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Russell Wallace
If you can do better voice recognition, that's a significant
application in its own right, as well as having uses in other
applications e.g. automated first layer for call centers.

If you can do better image/video recognition, there are a great many
uses for that -- look at all the things people are trying to use image
recognition for at the moment.

If you can do both at the same time, that's going to have plenty of
uses for filtering, classifying and searching video. (Imagine being
able to search the Youtube archives like you can search the Web today.
I would guess Google would pay a few bob for technology that could do
that.)

On Sun, Aug 8, 2010 at 2:10 AM, Ben Goertzel b...@goertzel.org wrote:





Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Steve Richfield
Ben,

Dr. Eliza with the Gracie interface to Dragon NaturallySpeaking makes a
really spectacular speech I/O demo - when it works, which is ~50% of the
time. The other 50% of the time, it fails to recognize enough to run with,
misses something critical, etc., and just sounds stupid, kinda like most
doctors I know. Even when it fails, it still babbles on with domain-specific
comments.

Results are MUCH better when a person with speech I/O and chronic illness
experience operates it.

Note that Gracie handles interruptions and other violations of
conversational structure. Further, it speaks in 3 voices, one for the
expert, one for the assistant, and one for the environment and OS.

Note that the Microsoft standard speech I/O has a mouth control that moves
simultaneously with the sound and is pasted on an egghead face, so you can
watch it speak.

Note that the speech recognition works AMAZINGLY well, because the ONLY
things it is interested in are long technical words and relevant phrases,
and NOT the short connecting words that usually get messed up. When you
watch what was recognized during casual conversation, what you typically
see is gobbledygook between the important stuff, which comes shining
through.
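That keying-on-the-long-words behavior could be sketched as a simple filter over the raw recognizer output; the vocabulary and length threshold below are invented for illustration, not Dr. Eliza's actual term list:

```python
# Invented domain vocabulary and threshold for illustration only.
DOMAIN_TERMS = {"hypoglycemia", "chronic", "fatigue", "insulin"}
MIN_LENGTH = 8  # long words tend to be reliably recognized technical terms

def salient_words(raw_transcript):
    """Keep long/domain words; drop the short connecting gobbledygook."""
    words = [w.strip(".,!?").lower() for w in raw_transcript.split()]
    return [w for w in words if w in DOMAIN_TERMS or len(w) >= MIN_LENGTH]

raw = "uh the um patient gobble hypoglycemia dee gook chronic fatigue"
print(salient_words(raw))  # -> ['hypoglycemia', 'chronic', 'fatigue']
```

The misrecognized filler between the important terms simply never survives the filter, which is why the important stuff "comes shining through".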

There are plans to greatly enhance all this, but like everything else on
this forum, it suffers from inadequate resources. If someone is looking for
something that is demonstrable right now to throw even modest resources
into...

That program was then adapted to a web server by adding logic to sense when
it was on a server, whereupon some additional buttons appear to operate and
debug it in a server environment. That adapted program is now up and
running, without any of the speech I/O stuff, on http://www.DrEliza.com.

I know, it isn't AGI, but neither is anything else these days.

Any interest?

Steve

On Sat, Aug 7, 2010 at 6:10 PM, Ben Goertzel b...@goertzel.org wrote:







Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Steve Richfield
Ian,

I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites, and whenever a
meeting was set up, it would issue a message to its human owners to alert
the police to go and arrest the pedophiles at the arranged time and place.
No human interaction was needed between arrests.

I can imagine an adaptation, wherein a program claims to be manufacturing
explosives, and is looking for other people to deliver those explosives.
With such a story line, there should be no problem arranging deliveries, at
which time you would arrest the would-be bombers.

I wish I could tell you more about the British project, but they were VERY
secretive. I suspect that some serious Googling would yield much more.

Hopefully you will find this helpful.

Steve
=
On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker ianpark...@gmail.com wrote:

 I wanted to see what other people's views were.My own view of the risks is
 as follows. If the Turing Machine is built to be as isomorphic with humans
 as possible, it would be incredibly dangerous. Indeed I feel that the
 biological model is far more dangerous than the mathematical.

 If on the other hand the TM was *not* isomorphic and made no attempt to
 be, the dangers would be a lot less. Most Turing/Löbner entries are
 chatterboxes that work on a database which is filled as you chat.
 Clearly such a system cannot go outside its database and is safe.
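The database-bound chatterbox described above can be sketched in a few lines. This is a minimal illustration, not any actual Turing/Löbner entry: the class name, the similarity cutoff, and the canned fallback are all my own invention. The point it demonstrates is the safety property Ian describes: the bot can only ever echo back material that was previously typed into it.

```python
# Minimal sketch of a database-driven chatterbox: it learns which reply
# followed each utterance, and can answer only from that stored database.
import difflib


class Chatterbox:
    def __init__(self):
        self.db = {}              # maps a seen utterance -> the reply that followed it
        self.last_utterance = None

    def respond(self, utterance):
        # Learn: record what reply followed the previous utterance.
        if self.last_utterance is not None:
            self.db[self.last_utterance] = utterance
        self.last_utterance = utterance
        # Answer only from the database: find the closest stored utterance.
        match = difflib.get_close_matches(utterance, list(self.db), n=1, cutoff=0.5)
        if match:
            return self.db[match[0]]
        # Canned fallback: the system cannot go outside its database.
        return "Tell me more."


bot = Chatterbox()
print(bot.respond("Hello"))       # "Tell me more." - nothing learned yet
print(bot.respond("Hi there"))    # "Tell me more." - but "Hello" -> "Hi there" is now stored
print(bot.respond("Hello"))       # "Hi there" - retrieved from the database
```

Everything the bot says (beyond the fallback) is drawn from its conversation history, so its behaviour is bounded by what users have fed it.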

 There is in fact some use for such a chatterbox. Clearly a Turing machine
 would be able to infiltrate militant groups however it was constructed. As
 for it pretending to be stupid, it would have to know in what direction it
 had to be stupid. Hence it would have to be a good psychologist.

 Suppose it logged onto a jihadist website. As well as being able to pass
 itself off as a true adherent, it could also look at the other members and
 assess their level of commitment and knowledge. I think that the
 true Turing/Löbner test is not working in a laboratory environment; they
 should log onto jihadist sites and see how well they can pass themselves
 off. If it could do that, it really would have arrived. Eventually it could
 pass itself off as a *pentito*, to use the Mafia term, and produce
 arguments from the Qur'an against the militant position.

 There would be quite a lot of contracts to be had if there were a realistic
 prospect of doing this.


   - Ian Parker

 On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

  Philosophical question 2 - Would passing the TT assume human stupidity
  and if so would a Turing machine be dangerous? Not necessarily, the
  Turing machine could talk about things like jihad without ultimately
  identifying with it.
 

 Humans without augmentation are only so intelligent. A Turing machine, a
 really well-built one, would be potentially dangerous. At some point we'd
 need to see some DNA as ID of another extended TT.

  Philosophical question 3 :- Would a TM be a psychologist? I think it
  would have to be. Could a TM become part of a population simulation that
  would give us political insights?
 

 You can have a relatively stupid TM or a sophisticated one just like
 humans.
 It might be easier to pass the TT by not exposing too much intelligence.

 John

  These 3 questions seem to me to be the really interesting ones.
 
 
- Ian Parker










Re: [agi] How To Create General AI Draft2

2010-08-07 Thread David Jones
Abram,

Thanks for the comments.

I think probability is just one way to deal with uncertainty. Defeasible
reasoning is another, as are non-monotonic logics in their various
implementations.

I often think that probability is the wrong way to do some things regarding
AGI design.

Maybe things can't be known with super high confidence, but we still want as
high confidence as reasonably possible. Once we have that, we just have to
have working assumptions and working hypotheses. From there we need the
ability to update beliefs if we can find a reason to think the beliefs are
wrong...
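The working-assumption scheme Dave describes is essentially defeasible (non-monotonic) reasoning: hold a conclusion by default, then retract it when evidence against it arrives. A minimal sketch, using the classic birds/penguins example; the class and rule representation here are my own simplification, not Dave's design.

```python
# Minimal sketch of defeasible reasoning: default conclusions are held as
# working assumptions and retracted when a defeating fact is learned.

class BeliefBase:
    def __init__(self):
        self.facts = set()
        self.defaults = []   # list of (premise, conclusion, exception) rules

    def tell(self, fact):
        self.facts.add(fact)

    def add_default(self, premise, conclusion, exception):
        self.defaults.append((premise, conclusion, exception))

    def beliefs(self):
        # Non-monotonic: a default conclusion holds only while its
        # exception is absent, so adding a fact can REMOVE a belief.
        out = set(self.facts)
        for premise, conclusion, exception in self.defaults:
            if premise in out and exception not in out:
                out.add(conclusion)
        return out


kb = BeliefBase()
kb.add_default("bird(tweety)", "flies(tweety)", "penguin(tweety)")
kb.tell("bird(tweety)")
print("flies(tweety)" in kb.beliefs())   # True: held as a working assumption
kb.tell("penguin(tweety)")               # new evidence defeats the default
print("flies(tweety)" in kb.beliefs())   # False: the belief is retracted
```

Unlike probabilistic updating, nothing here carries a numeric confidence; the belief is simply held until a reason to doubt it appears, which is the behaviour Dave describes.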

Dave


On Fri, Aug 6, 2010 at 9:48 PM, Abram Demski abramdem...@gmail.com wrote:



 On Fri, Aug 6, 2010 at 8:22 PM, Abram Demski abramdem...@gmail.comwrote:


 (Without this sort of generality, your approach seems restricted to
 gathering knowledge about whatever events unfold in front of a limited
 quantity of high-quality camera systems which you set up. To be honest, the
 usefulness of that sort of knowledge is not obvious.)


 On second thought, this statement was a bit naive. You obviously intend the
 camera systems to be connected to robots or other systems which perform
 actual tasks in the world, providing a great variety of information
 including feedback from success/failure of actions to achieve results.

 What is unrealistic to me is not that this information could be useful, but
 that this level of real-world intelligence could be achieved with the
 super-high confidence bounds you are imagining. What I think is that
 probabilistic reasoning is needed. Once we have the object/location/texture
 information with those confidence bounds (which I do see as possible),
 gaining the sort of knowledge Cyc set out to contain seems inherently
 statistical.
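Abram's point about statistical world knowledge can be sketched with a simple Bayesian update: once perception delivers reliable object/location facts, confidence in a generic claim accumulates from counts of confirming and disconfirming observations. The claim and the counts below are invented for illustration.

```python
# Rough sketch: tracking confidence in one generic world fact with a
# Beta distribution, updated from True/False perceptual observations.

def beta_update(alpha, beta, observations):
    """Update a Beta(alpha, beta) belief with a list of True/False observations."""
    for obs in observations:
        if obs:
            alpha += 1
        else:
            beta += 1
    return alpha, beta


# Uninformative prior on the (made-up) claim "cups rest on tables".
alpha, beta = 1, 1
# Nine sightings of cups on tables, one elsewhere (invented data).
alpha, beta = beta_update(alpha, beta, [True] * 9 + [False])
confidence = alpha / (alpha + beta)   # posterior mean
print(round(confidence, 2))           # 0.83
```

The confidence never reaches 1.0, which is the crux of Abram's objection: knowledge of this kind is inherently statistical rather than bounded by the super-high certainties the paper aims for.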



 --Abram



 On Fri, Aug 6, 2010 at 4:44 PM, David Jones davidher...@gmail.comwrote:

 Hey Guys,

 I've been working on writing out my approach to create general AI to
 share and debate it with others in the field. I've attached my second draft
 of it in PDF format, if you guys are at all interested. It's still a work in
 progress and hasn't been fully edited. Please feel free to comment,
 positively or negatively, if you have a chance to read any of it. I'll be
 adding to and editing it over the next few days.

 I'll try to reply more professionally than I have been lately :) Sorry :S

 Cheers,

 Dave




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






Re: [agi] How To Create General AI Draft2

2010-08-07 Thread David Jones
Mike,

I took your comments into consideration and have been updating my paper to
make sure these problems are addressed.

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  1) You don't define the difference between narrow AI and AGI - or make
 clear why your approach is one and not the other


I removed this because my audience is AI researchers... this is AGI 101.
I think it's clear that my design defines general as being able to handle
the vast majority of things we want the AI to handle without requiring a
change in design.



 2) Learning about the world won't cut it -  vast nos. of progs. claim
 they can learn about the world - what's the difference between narrow AI and
 AGI learning?


The difference is in what you can or can't learn about and what tasks you
can or can't perform. If the AI is able to receive input about anything it
needs to know about in the same formats that it knows how to understand and
analyze, it can reason about anything it needs to.



 3) Breaking things down into generic components allows us to learn about
 and handle the vast majority of things we want to learn about. This is what
 makes it general!

 Wild assumption, unproven or at all demonstrated and untrue.


You are only right that I haven't demonstrated it. I will address this in
the next paper and continue adding details over the next few drafts.

As a simple argument against your counter argument...

If it were true that we could not understand the world using a limited set
of rules or concepts, how is it that a human baby, whose design is
predetermined by its DNA to interact with the world in a certain way, is able
to deal with unforeseen things that were not preprogrammed? That’s right,
the baby was born with a set of rules that robustly allows it to deal with
the unforeseen. It has a limited set of rules used to learn. That is
equivalent to a limited set of “concepts” (i.e. rules) that would allow a
computer to deal with the unforeseen.


 Interesting philosophically because it implicitly underlies AGI-ers'
 fantasies of take-off. You can compare it to the idea that all science can
 be reduced to physics. If it could, then an AGI could indeed take-off. But
 it's demonstrably not so.


No... it is equivalent to saying that the whole world can be modeled as if
everything was made up of matter. Oh, I forgot, that is the case :) It is a
limited set of concepts, yet it can create everything we know.



 You don't seem to understand that the problem of AGI is to deal with the
 NEW - the unfamiliar, that wh. cannot be broken down into familiar
 categories, - and then find ways of dealing with it ad hoc.


You don't seem to understand that even the things you think cannot be broken
down, can be.


Dave


