RE: [agi] Epiphany - Statements of Stupidity

2010-08-08 Thread John G. Rose
Well, these artificial identities need to complete a loop. Say the
artificial identity acquires an email address, a phone number, a physical
address, and a bank account, and logs onto Amazon to purchase things
automatically; it then needs to be able to put money into its bank account.
So let's say it has a low-profit scheme to scalp day-trading profits with its
stock trading account. That's the loop: it has to be able to make money to
make purchases. And then automatically file its taxes with the IRS. Then it's
really starting to look like a full, legally functioning identity. It could
persist in this fashion for years.
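
Purely as a toy illustration of the loop being described (every class,
method, and number below is an invented placeholder, not a real banking,
trading, or tax API), a minimal sketch might look like:

    # Toy sketch only: no real services are involved; every figure is made up.
    import random

    class ArtificialIdentity:
        def __init__(self, starting_balance=10000.0):
            self.balance = starting_balance
            self.days = 0

        def scalp_day_trades(self):
            # Stand-in for a low-margin trading scheme: tiny random gain or loss.
            return self.balance * random.uniform(-0.005, 0.01)

        def purchase_supplies(self):
            self.balance -= min(5.0, self.balance)          # automated spending

        def file_taxes(self):
            self.balance -= max(0.0, self.balance * 0.02)   # stand-in tax payment

        def live_one_day(self):
            self.balance += self.scalp_day_trades()          # earn
            self.purchase_supplies()                         # spend
            self.days += 1
            if self.days % 365 == 0:
                self.file_taxes()                            # stay legally plausible

    identity = ArtificialIdentity()
    for _ in range(365 * 3):   # persist "for years", as long as the balance holds out
        identity.live_one_day()
    print(round(identity.balance, 2))

The point of the sketch is only the closed loop: earn, deposit, spend, file,
repeat.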

 

I would bet that these identities already exist. What happens when there are
many, many of them? Would we even know? 

 

John

 

From: Steve Richfield [mailto:steve.richfi...@gmail.com] 
Sent: Saturday, August 07, 2010 8:17 PM
To: agi
Subject: Re: [agi] Epiphany - Statements of Stupidity

 

Ian,

I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites, and whenever a
meeting was set up, it would issue a message to its human owners to alert
the police to go and arrest the pedophiles at the arranged time and place.
No human interaction was needed between arrests.

I can imagine an adaptation, wherein a program claims to be manufacturing
explosives, and is looking for other people to deliver those explosives.
With such a story line, there should be no problem arranging deliveries, at
which time you would arrest the would-be bombers.

I wish I could tell you more about the British project, but they were VERY
secretive. I suspect that some serious Googling would yield much more.

Hopefully you will find this helpful.

Steve
=

On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker ianpark...@gmail.com wrote:

I wanted to see what other people's views were. My own view of the risks is
as follows. If the Turing Machine were built to be as isomorphic with humans
as possible, it would be incredibly dangerous. Indeed I feel that the
biological model is far more dangerous than the mathematical one.

 

If on the other hand the TM were not isomorphic and made no attempt to be,
the dangers would be a lot less. Most Turing/Löbner entries are chatterboxes
that work on databases, the database being filled as you chat. Clearly such a
system cannot go outside its database and is safe.
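
A minimal sketch of what "works on a database" can mean here, purely
illustrative and not modelled on any actual Loebner entry (the class name
and replies are made up):

    # Toy database-backed chatterbox: it can only return associations it has
    # already been given, so it never goes outside its own data.
    class Chatterbox:
        def __init__(self):
            self.db = {}                     # the "database", filled as you chat

        def teach(self, line, reply):
            self.db[line] = reply            # chatting fills the database

        def chat(self, line):
            return self.db.get(line, "Tell me more.")

    bot = Chatterbox()
    bot.teach("Hello", "Hi there!")
    print(bot.chat("Hello"))            # known input -> stored reply
    print(bot.chat("Anything else?"))   # unknown input -> canned fallback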

 

There is in fact some use for such a chatterbox. Clearly a Turing machine
would be able to infiltrate militant groups however it was constructed. As
for it pretending to be stupid, it would have to know in what direction it
had to be stupid. Hence it would have to be a good psychologist.

 

Suppose it logged onto a jihadist website; as well as being able to pass
itself off as a true adherent, it could also look at the other members and
assess their level of commitment and knowledge. I think that the true
Turing/Löbner test is not working in a laboratory environment; rather, entries
should log onto jihadist sites and see how well they can pass themselves
off. If a program could do that, it really would have arrived. Eventually it
could pass itself off as a pentito, to use the Mafia term, and produce
arguments from the Qur'an against the militant position.

 

There would be quite a lot of contracts to be had if there were a realistic
prospect of doing this.

 

 

  - Ian Parker 

On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

 Philosophical question 2 - Would passing the TT assume human stupidity and,
 if so, would a Turing machine be dangerous? Not necessarily, the Turing
 machine could talk about things like jihad without ultimately identifying
 with it.


Humans without augmentation are only so intelligent. A really well-built
Turing machine would be potentially dangerous. At some point we'd need to see
some DNA as ID - another, extended TT.


 Philosophical question 3 :- Would a TM be a psychologist? I think it would
 have to be. Could a TM become part of a population simulation that would
 give us political insights?


You can have a relatively stupid TM or a sophisticated one, just like humans.
It might be easier to pass the TT by not exposing too much intelligence.

John


 These 3 questions seem to me to be the really interesting ones.


   - Ian Parker






Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread wannabe
I don't know if it's low-hanging fruit, but it certainly seems like it
would require AGI to have a system that could, given some picture or video
input, say what some object is.  And, along those lines, accept verbal
instruction as to what it is if it's wrong in what it thinks.  I bring
that up because I'm trying to get through a book on formal semantics
called _What is Meaning?_, and I've been really struck that there clearly
is some ability to call things however we call them, but I surely don't
see how it's done.  It does not seem like it could be a simple thing.  And
we do call things by different names according to context and need.
andi


On Sat, August 7, 2010 9:10 pm, Ben Goertzel wrote:
 Hi,

 A fellow AGI researcher sent me this request, so I figured I'd throw it
 out to you guys

 
 I'm putting together an AGI pitch for investors and thinking of low
 hanging fruit applications to argue for. I'm intentionally not
 involving any mechanics (robots, moving parts, etc.). I'm focusing on
 voice (i.e. conversational agents) and perhaps vision-based systems.
 Helen Keller AGI, if you will :)

 Along those lines, I'd like any ideas you may have that would fall
 under this description. I need to substantiate the case for such AGI
 technology by making an argument for high-value apps. All ideas are
 welcome.
 

 All serious responses will be appreciated!!

 Also, I would be grateful if we could keep this thread closely focused on
 direct answers to this question, rather than digressive discussions on Helen
 Keller, the nature of AGI, the definition of AGI versus narrow AI, the
 achievability or unachievability of AGI, etc. etc.  If you think the
 question is bad or meaningless or unclear or whatever, that's fine, but
 please start a new thread with a different subject line to make your point.

 If the discussion is useful, my intention is to mine the answers into a
 compact list to convey to him.

 Thanks!
 Ben G




Re: [agi] Epiphany - Statements of Stupidity

2010-08-08 Thread Ian Parker
If you have a *physical* address, an avatar needs to *physically* be there -
Roxxy lives here with her friend Miss Al-Fasaq, the belly dancer.

Chat lines as Steve describes are not too difficult. In fact the girls
(real) on a chat site have a sheet in front of them that gives the
appropriate response to a variety of questions. The WI (Women's Institute)
did an investigation of the sex industry, and one volunteer actually became
a *chatterbox*.

Do such entities exist? Probably not in the sex industry, at least not yet.
Why do I believe this? Basically because if the sex industry were moving in
this direction it would without a doubt be looking at some metric of brain
activity to give the customer the best erotic experience. You don't ask Are
you gay? You have men making love to men, women-men and women-women. Find
out what gives the customer the biggest kick. You set the story of the porn
video.

In terms of security I am impressed by the fact that large numbers of bombs
have been constructed that don't work and could not work. Hydrogen peroxide
(http://en.wikipedia.org/wiki/Hydrogen_peroxide) can only be prepared in the
pure state by chemical reactions. It is unlikely (see the notes on vapour
pressure at 50°C) that anything viable could be produced by distillation on a
kitchen stove.

Is this due to deliberately misleading information? Have I given the game
away? Certainly misleading information is being sent out. However it
is probably not being sent out by robotic entities. After all nothing has
yet achieved Turing status.

In the case of sex it may not be necessary for the client to believe that he
is confronted by a *real woman*. A top-of-the-range masturbator/sex aid may
not have to pretend to be anything else.


  - Ian Parker

Re: [agi] How To Create General AI Draft2

2010-08-08 Thread Mike Tintner
Dave: No... it is equivalent to saying that the whole world can be modeled as if 
everything was made up of matter

And matter is... ?  Huh?

You clearly don't realise that your thinking is seriously woolly - and you will 
pay a heavy price in lost time.

What are your basic world/visual-world analytic units  wh. you are claiming 
to exist?  

You thought - perhaps think still - that *concepts* wh. are pretty fundamental 
intellectual units of analysis at a certain level, could be expressed as, or 
indeed, were patterns. IOW there's a fundamental pattern for chair or 
table. Absolute nonsense. And a radical failure to understand the basic 
nature of concepts which is that they are *freeform* schemas, incapable of 
being expressed either as patterns or programs.

You had merely assumed that concepts could be expressed as patterns, but had 
never seriously, visually analysed it. Similarly you are merely assuming that 
the world can be analysed into some kind of visual units - but you haven't 
actually done the analysis, have you? You don't have any of these basic units 
to hand, do you? If you do, I suggest, reply instantly, naming a few. You won't 
be able to do it. They don't exist.

Your whole approach to AGI is based on variations of what we can call 
fundamental analysis - and it's wrong. God/Evolution hasn't built the world 
with any kind of geometric, or other consistent, bricks. He/It is a freeform 
designer. You have to start thinking outside the box/brick/fundamental unit.


From: David Jones 
Sent: Sunday, August 08, 2010 5:12 AM
To: agi 
Subject: Re: [agi] How To Create General AI Draft2


Mike,

I took your comments into consideration and have been updating my paper to make 
sure these problems are addressed. 

See more comments below.


On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  1) You don't define the difference between narrow AI and AGI - or make clear 
why your approach is one and not the other

I removed this because my audience is AI researchers... this is AGI 101. I 
think it's clear that my design defines general as being able to handle the 
vast majority of things we want the AI to handle without requiring a change in 
design.
 


  2) Learning about the world won't cut it -  vast nos. of progs. claim they 
can learn about the world - what's the difference between narrow AI and AGI 
learning?

The difference is in what you can or can't learn about and what tasks you can 
or can't perform. If the AI is able to receive input about anything it needs to 
know about in the same formats that it knows how to understand and analyze, it 
can reason about anything it needs to.
 

  3) Breaking things down into generic components allows us to learn about and 
handle the vast majority of things we want to learn about. This is what makes 
it general!

  Wild assumption, unproven or at all demonstrated and untrue.

You are only right that I haven't demonstrated it. I will address this in the 
next paper and continue adding details over the next few drafts.

As a simple argument against your counter argument... 

If it were true that we could not understand the world using a limited set of 
rules or concepts, how is it that a human baby, with a design that is 
predetermined to interact with the world a certain way by its DNA, is able to 
deal with unforeseen things that were not preprogrammed? That’s right, the baby 
was born with a set of rules that robustly allows it to deal with the 
unforeseen. It has a limited set of rules used to learn. That is equivalent to 
a limited set of “concepts” (i.e. rules) that would allow a computer to deal 
with the unforeseen. 
 
  Interesting philosophically because it implicitly underlies AGI-ers' 
fantasies of take-off. You can compare it to the idea that all science can be 
reduced to physics. If it could, then an AGI could indeed take-off. But it's 
demonstrably not so.

No... it is equivalent to saying that the whole world can be modeled as if 
everything was made up of matter. Oh, I forgot, that is the case :) It is a 
limited set of concepts, yet it can create everything we know.
 

  You don't seem to understand that the problem of AGI is to deal with the NEW 
- the unfamiliar, that wh. cannot be broken down into familiar categories, - 
and then find ways of dealing with it ad hoc.

You don't seem to understand that even the things you think cannot be broken 
down, can be.


Dave



Re: [agi] How To Create General AI Draft2

2010-08-08 Thread David Jones
Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to offer is your example problems
that *you* don't understand how to solve. Just because *you* cannot solve
them doesn't mean they cannot be solved at all using a certain methodology.
So, who is really making wild assumptions?

The mere fact that you can refer to a chair means that it is a
recognizable pattern. LOL. The fact that you don't realize this is quite
funny.

Dave


Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread deepakjnath
1. Basic object recognition can be used in camera phones to identify people
or objects in front of the user. This can help blind people navigate their
environment better.

2. AGI expert systems can be used to diagnose diseases.

thanks,
Deepak






-- 
cheers,
Deepak





Re: [agi] How To Create General AI Draft2

2010-08-08 Thread David Jones
:) what you don't realize is that patterns don't have to be strictly limited
to the actual physical structure.

In fact, the chair patterns you refer to are not strictly physical
patterns. The pattern is based on how the objects can be used, what their
intended uses probably are, and what their most common effective uses are.

So, chairs are objects that are used to sit on. You can identify objects
whose most likely use is for sitting based on experience.
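
A minimal sketch of that idea, with entirely invented objects and usage
counts (a toy illustration of classifying by likely use, not a claim about
how a real recognizer would be built):

    # Toy sketch: label an object by its dominant observed use, not its shape.
    from collections import Counter

    # Hypothetical experience: how often each object has been seen used each way.
    experience = {
        "kitchen_stool": Counter(sat_on=40, stood_on=5),
        "coffee_table":  Counter(placed_things_on=60, sat_on=3),
        "beanbag":       Counter(sat_on=25, slept_on=10),
    }

    def most_likely_use(obj):
        return experience[obj].most_common(1)[0][0]

    def counts_as_a_chair(obj):
        # "Chair" here just means: the dominant observed use is sitting.
        return most_likely_use(obj) == "sat_on"

    for obj in experience:
        print(obj, counts_as_a_chair(obj))

So the kitchen stool and the beanbag end up counting as chairs and the coffee
table does not, purely from usage experience rather than geometry.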

If you think this is not a sufficient refutation of your argument, then
please don't argue with me regarding it anymore. I know your opinion and
respectfully disagree. If you don't accept my counter-argument, there is no
point in continuing this back and forth ad infinitum.

Dave

On Aug 8, 2010 9:29 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

 You're waffling.

You say there's a pattern for chair - DRAW IT. Attached should help you.

Analyse the chairs given in terms of basic visual units. Or show how any
basic units can be applied to them. Draw one or two.

You haven't identified any basic visual units  - you don't have any. Do you?
Yes/no.

No. That's not funny, that's a waste. And woolly and imprecise through
and through.



Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread Ian Parker
Just one point about Forex, your first entry. This is purely a time series
analysis as I understand it. It is narrow AI in fact. With AGI you would
expect interviews with the executives of listed companies, just as the big
investment houses do.

AGI would be data mining of everything about a company as well as time
series analysis.
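
For concreteness, the kind of narrow time-series analysis being contrasted
with AGI here can be as simple as a least-squares fit on lagged prices. The
price series below is invented, and this is only a toy baseline:

    # Toy AR(1)-style forecast: predict the next price from a linear fit on lags.
    import numpy as np

    prices = np.array([1.310, 1.312, 1.309, 1.315, 1.320, 1.318, 1.322, 1.325])

    x = prices[:-1]              # price at time t
    y = prices[1:]               # price at time t+1
    a, b = np.polyfit(x, y, 1)   # least-squares fit: y ~ a*x + b

    next_price = a * prices[-1] + b
    print(f"forecast for next step: {next_price:.4f}")

Nothing in that sketch looks at executives, news, or anything else about the
company, which is the gap being pointed out.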


  - Ian Parker

On 8 August 2010 02:35, Abram Demski abramdem...@gmail.com wrote:

 Ben,

 -The oft-mentioned stock-market prediction;
 -data mining, especially for corporate data such as customer behavior,
 sales prediction, etc;
 -decision support systems;
 -personal assistants;
 -chatbots (think, an ipod that talks to you when you are lonely);
 -educational uses including human-like artificial teachers, but also
 including smart presentation-of-material software which decides what
 practice problem to ask you next, when to give tips, etc;
 -industrial design (engineering);
 ...

 Good luck to him!

 --Abram







Stocks; was Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread Abram Demski
Ian,

Be courteous - Ben asked specifically that any arguments about which things
are narrow AI should start a separate topic.

Yea, I did not intend to rule out any possible sources of information for
the stock market prediction task. Ben has worked on a system which looked on
the web for chatter about specific companies, for example.

Even if it were just stock data being used, it wouldn't be just time-series
analysis. It would at least involve planning as well. Really, though, it
includes acting with the behavior of potential adversaries in mind (like
game-playing).

Even if it *were* just time-series analysis, though, I think it would be a
decent AGI application. That is because I think AGI technology should be
good at time-series analysis! In my opinion, a good AGI learning algorithm
should be useful for such tasks.

So, yes, many of my examples could be attacked via narrow AI; but I think
they would be handled *better* by AGI. That's why they are low-hanging
fruit-- they are (hopefully) on the border.

--Abram





-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread Steve Richfield
Ben

On Sat, Aug 7, 2010 at 6:10 PM, Ben Goertzel b...@goertzel.org wrote:

 I need to substantiate the case for such AGI
 technology by making an argument for high-value apps.


There is interesting hidden value in some stuff. In the case of Dr. Eliza,
it provides a communication pathway to sick people, which is EXACTLY what a
research institution needs to support itself.

I think you may be on to something here - looking for high-value.

Steve





Re: [agi] How To Create General AI Draft2

2010-08-08 Thread Mike Tintner

There is nothing visual or physical or geometric or quasi-geometric about
what you're saying - no shapes or forms whatsoever to your idea of patterns
or chair or sitting. Given an opportunity to discuss physical concretes - and
what actually, physically constitutes a chair, or any other
concept/class-of-forms, is fascinating and central to AGI - you retreat into
vague abstractions while claiming to be interested in visual AGI.

Fine, let's leave it there.


Re: Stocks; was Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-08 Thread Ian Parker
OK, Ben is then one step ahead of Forex. The point is that time series
analysis, although it is narrow AI, can be extremely powerful. The situation
with *sentiment* is different from that of poker, where there is a single
adversary bluffing. A time series analysis encompasses the *ensemble* of
different opinions. Statistical programs can model this accurately.

Ben presumably has techniques for mining the data about companies. The
difficulty, as I see it, of translating this into a stock exchange
prediction is the weighting of different factors. What in fact you will need
to complete the task is something like conjoint analysis.

We need, for example, to get an index for innovation. We can see how
important this is, and how important other factors are, by doing something
like conjoint analysis. Management will affect long-term stock values. Forex
is concerned with day-to-day fluctuations, where management performance
(except in terms of the manipulation of shares) is not important.

Conjoint analysis has been used by managements to indicate how they should
be managing. Ben should be able to tell managements how they can optimise
the value of their company based on historical data.
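
As a rough illustration of what "something like conjoint analysis" could look
like here: regress an outcome on scored attributes to estimate how much
weight each factor carries. All of the scores and valuations below are
invented, and the factor names are only examples:

    # Toy conjoint-style weighting: least-squares estimate of how much each
    # factor (innovation, management, debt) contributes to a valuation index.
    import numpy as np

    # Columns: innovation score, management score, debt score (each 0-10).
    factors = np.array([
        [8, 6, 3],
        [5, 9, 4],
        [7, 7, 7],
        [3, 4, 2],
        [9, 5, 6],
    ], dtype=float)

    valuations = np.array([140.0, 120.0, 115.0, 60.0, 135.0])

    weights, *_ = np.linalg.lstsq(factors, valuations, rcond=None)
    for name, w in zip(["innovation", "management", "debt"], weights):
        print(f"{name}: {w:+.2f}")

The estimated weights are the "index" for each factor; a management could, in
principle, read them as which levers move the valuation most.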

This is real AGI, and there is a close tie-up between prediction and how a
company should be managed. We know as a matter of historical record, for
example, that where you have to reduce a budget deficit you do it with 2
parts reduction in public expenditure and 1 part rise in taxation. The
Con/Lib Dem coalition is going for a 3:1 ratio. There will no doubt be
other things that will come out of data-mining.

Sorry, no disrespect intended.
