- Original Message -
From: Stephen Reed
To: agi@v2.listbox.com
Sent: Friday, March 28, 2008 4:30 AM
Subject: Re: [agi] Microsoft Launches Singularity
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, March 28, 2008 8:07:26 AM
Subject: Re: [agi] Microsoft Launches Singularity
Steve,
You raise huge issues. I broadly agree with the
direction you're going with your multilevelled approach to physically
Charles: I don't think a General Intelligence could be built entirely out of
narrow AI components, but it might well be a relatively trivial add-on.
Just consider how much of human intelligence is demonstrably narrow AI
(well, not artificial, but you know what I mean). Object recognition, e.g.
John,
I'm developing this argument more fully elsewhere, so I'll just give a
partial gist. What I'm saying - and I stand to be corrected - is that I
suspect that literally no one in AI and AGI (and perhaps philosophy) present
or past understands the nature of the tools they are using.
So if I tell you to handle an object, or a piece of business - like, say,
removing a chair from the house - that word 'handle' is open-ended and
gives you vast freedom within certain parameters as to how to apply your
hand(s) to that object. Your hands can be applied to move a given box, for
Ben: It's not just that we can CHOOSE the meanings of concepts from a fixed
menu of possibilities ... we CREATE the meanings of concepts as we use them ...
this is how and why concept-meanings continually change over time in
individual minds and in cultures...
Yes. Good point.
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
[Warning: a random blurb on the word theme.]
Words and similar things are marvelous high-level training tools. They
provide a uniform interface that allows access to high-level concepts
through low-level standard input. They allow supervised training to be
performed without special 'label signals'.
On 27/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
3. While philosophically, intellectually, most people dealing with this
area may expect words to have precise meanings, they know practically and
intuitively that this is impossible and work on the basis that words can
have different
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27, 2008 5:30:12 PM
Subject: Re: [agi] Microsoft Launches Singularity
Steve,
Some odd thoughts in reply. Thanks BTW for the article.
1. You don't seem to get what's
Mentifex called; it wants its ASCII diagrams back.
-Chris
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
My take on this is completely different.
When I say Narrow AI I am specifically referring to something that is
so limited that it has virtually no chance of becoming a general
intelligence. There is more to general intelligence than just
Subject: Re: Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches Singularity
On 25/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
You're thinking too small. The AGI will distribute itself. And money is
likely to be:
- rapidly deflated,
- then replaced with a new, alternate currency that truly values
talent and effort (rather than just playing with the
My thinking is not too small - any more than any other person's on this
distribution list. But that is not why I'm responding. My response is to
clarify what I meant. I'm not disagreeing - nor was I trying to sound
brilliant.
I'm certainly not suggesting that I will be the one to invent
Now, let me ask you a question: Do you believe that all AI / AGI
researchers are toiling over all this for the challenge, or purely out of
interest? I doubt that as well. Surely there are those elements as drivers
- BUT SO IS MONEY.
Aki, you don't seem to understand the psychology of the
On 25/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote:
You can call future currency whatever you like. Yes, it is likely to change
form - but certainly not purpose. And Marxism, where maybe AGI or the real
deal will deflate currency, is an unlikely aftermath of the advent of AGI.
I think the
I see the pattern as much more of the same. You now have Microsoft SQL
Server, Microsoft Internet Information Server, Microsoft Exchange Server and
then you'll have Microsoft Intelligence Server or Microsoft Cognitive
Server. It'll be limited by licenses, resources and features. The cool part
Ben - you're absolutely correct. I don't have a good grasp of the psychology
of the AGI researcher. This is because, at this point, I'm not an AGI
researcher. My only viewpoint is currently from the business side.
However, and despite not being trained in science, I have been a professional
Hi Aki,
Even as a pure scientist, you can accomplish more in research by producing
wealth than by depending on gov't grants. I say gov't grants because private
investment is probably years away. The topic of financing got a lot of
attention at AGI 08.
Well, if you're an AGI
To: agi@v2.listbox.com
Sent: Monday, March 24, 2008 11:42 PM
Subject: Re: [agi] Microsoft Launches Singularity
I agree with Mark.
The reason the readers of this forum should seek to control AGI development
is to ensure friendly behavior, rather than leaving this responsibility to an
Evil Company
Agreed. Thankfully - despite the different weights on motivators - we're
all motivated to create an AGI. And the why is much more important than
the how.
For the record, I believe that OpenCog is a great idea - and it may possibly
work. If not directly - certainly any offshoots from it would
Bob Mottram wrote:
On 25/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
You're thinking too small. The AGI will distribute itself. And money is
likely to be:
- rapidly deflated,
- then replaced with a new, alternate currency that truly values talent
and effort
My thinking is not too small.
My apologies. I should have said "Your thinking looks/appears too small (to
me :-)". I have a bad habit of shortening that to "Your thinking is too
small" and assuming that the recipient would unpack it.
So, the creators of the first several AGIs will be kings
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
However, I think you are right that there could be an intermediate
period when proto-AGI systems are a nuisance. However, these
proto-AGI systems will really only be souped-up Narrow-AI systems, so I
believe their potential for mischief
Three factors will govern how the first AGI will behave. First, there will
be a strong incentive to build the first AGI as a non-aggressive,
non-selfish creature.
Absolutely, positively not!
Try the following Friendliness implementation on yourself.
1. The absolute hardest part
Mark Waser wrote:
Three factors will govern how the first AGI will behave. First,
there will be a strong incentive to build the first AGI as a
non-aggressive, non-selfish creature.
Absolutely, positively not!
I'm sorry, Mark, but I am completely baffled by this.
Perhaps it is because I
http://www.codeplex.com/singularity
Ben - your email scared me. I thought the evil empire (I can say that since
I worked for them for a few years) had achieved *some* level of cognition /
AGI ... even the most rudimentary signs of intelligence / learned behavior -
a prediction machine.
Whew! It's not that at all! I know they are
A more likely scenario is that someone else creates an AGI and then
Microsoft copies it some time later. But seriously, if someone does manage
to produce a working AGI it's probably game over for software engineering
and software companies as we know them today.
On 24/03/2008, Aki Iskandar
I agree with your statement, "if someone does manage to produce a working
AGI it's probably game over for software engineering and software companies
as we know them today." But another equally likely scenario is that
Microsoft will buy it - and not reverse engineer it. Perhaps they can't