RE: **JUNK** Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Peter Voss
no

 

From: Joseph Henry [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 15, 2008 1:56 PM
To: agi@v2.listbox.com
Subject: **JUNK** Re: [agi] META: A possible re-focusing of this list

 

Peter, do you think they would be less overwhelmed if they were given the
option of looking at the same content through the use of a forum?

I think it would be far easier to wade through...

- Joseph






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Peter Voss
Not a single one of our current investors (more than a dozen) or potential
investors has used AGI lists to evaluate our project (or the competition)

 

Peter Voss

a2i2

 

From: Terren Suydam [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 15, 2008 1:25 PM
To: agi@v2.listbox.com
Subject: Re: [agi] META: A possible re-focusing of this list

 



This is a publicly accessible forum with searchable archives... you don't
necessarily have to be subscribed and inundated to find those nuggets. I
don't know any funding decision makers myself, but if I were in control of a
budget I'd be using every resource at my disposal to clarify my decision. If
I were considering Novamente for example I'd be looking for exactly the kind
of exchanges you and Richard Loosemore (for example) have had on the list,
to gain a better understanding of possible criticism, and because others may
be able to articulate such criticism far better than I can. Obviously the
same goes for anyone else on the list who is looking for funding... I'd want
to see you defend your ideas, especially in the absence of peer-reviewed
journals (something JAGI, obviously, hopes to remedy).

Terren

--- On Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 3:37 PM


Terren,

I know a good number of VC's and government and private funding decision
makers... and believe me, **none** of them has remotely enough extra time to
wade through the amount of text that flows on this list, to find the nuggets
of real intellectual interest!!!

-- Ben G

On Wed, Oct 15, 2008 at 12:07 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:



One other important point... if I were a potential venture capitalist or
some other sort of funding decision-maker, I would be on this list and
watching the debate. I'd be looking for intelligent defense of (hopefully)
intelligent criticism to increase my confidence about the decision to fund.
This kind of forum also allows you to sort of advertise your approach to
those who are new to the game, particularly young folks who might one day be
valuable contributors, although I suppose that's possible in the more
tightly-focused forum as well.

--- On Wed, 10/15/08, Terren Suydam <[EMAIL PROTECTED]> wrote:

From: Terren Suydam <[EMAIL PROTECTED]>
Subject: Re: [agi] META: A possible re-focusing of this list


To: agi@v2.listbox.com

Date: Wednesday, October 15, 2008, 11:29 AM

 



Hi Ben,

I think that the current focus has its pros and cons and the more narrowed
focus you suggest would have *its* pros and cons. As you said, the con of
the current focus is the boring repetition of various anti positions. But
the pro of allowing that stuff is for those of us who use the conflict among
competing viewpoints to clarify our own positions and gain insight. Since
you seem to be fairly clear about your own viewpoint, it is for you a
situation of diminishing returns (although I will point out that a recent
blog post of yours on the subject of play was inspired, I think, by a point
made by Mike Tintner, who is probably the most obvious target of your
frustration).

For myself, I have found tremendous value here in the debate (which probably
says a lot about the crudeness of my philosophy). I have had many new
insights and discovered some false assumptions. If you narrowed the focus, I
would probably leave (I am not offering that as a reason not to do it! :-)
I would be disappointed, but I would understand if that's the decision you
made.

Finally, although there hasn't been much novelty in the debate (from your
perspective, anyway), there is always the possibility that there will be.
This seems to be the only public forum for AGI discussion out there (are
there others, anyone?), so presumably there's a good chance it would show up
here, and that is good for you and others actively involved in AGI research.

Best,
Terren


--- On Wed, 10/15/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 11:01 AM


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current
computers, according to designs that can feasibly be implemented by
moderately-sized groups of people

2)
Discussions about whether the above is even possible -- or whether it is
impossible because of weird physics, or poorly-defined special
characteristics of human creativity, or the so-called "complex systems
problem", or because AGI intrinsically requires billions of people and
quadrillions of dollars, or whatever

Personally I am pretty b

RE: [agi] Question, career related

2008-08-21 Thread Peter Voss
Perhaps: http://adaptiveai.com/company/opportunities.htm 

 

From: Valentina Poletti [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 21, 2008 10:23 AM
To: agi@v2.listbox.com
Subject: [agi] Question, career related

 

Dear AGIers,

I am looking for a research opportunity in AGI or related neurophysiology. I
won prizes in maths, physics, computer science and general science when I
was younger and have a keen interest in those fields. I'm a pretty good
programmer, and have taught myself neurophysiology and some cognitive
science. I have an inclination towards math and logic. I was wondering if
anyone knows of any such open positions, or could give me advice or
references to people I might speak with.

Thanks.


 






RE: [agi] More Info Please

2008-05-23 Thread Peter Voss
Thanks, Ben.

The technical details of our design and our business plan are indeed
confidential. All I can really say publicly is that we are confident that we
have a pretty direct path to high-level AGI from where we are, and that we
have an extremely viable business plan to make this happen. Initial
commercialization next year will utilize the current 'low-grade' version of
our AGI engine, which will be able to perform certain tasks that are quite
dumb (in human terms) but still commercially valuable. Our AGI 'brain' can
potentially be utilized in many different kinds of systems/applications.

More details will probably become available late this year.

Peter

PS. I also have *some* doubts about the ultimate capabilities of our AGI
engine, but probably no greater than yours about NM  :)


-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 23, 2008 2:56 PM
To: agi@v2.listbox.com
Subject: Re: [agi] More Info Please

Peter has some technical info on his overall (adaptive neural net)-based
approach to AI on his company website; it is based on a paper he wrote for
the AGI volume Cassio and I edited for Springer (written 2002, published
2006).

However, he has kept his specific commercial product direction tightly under
wraps.

I believe Peter's ideas are interesting but I have my doubts that his
approach is really AGI-capable. However, I don't feel comfortable going
into great detail on my reasons, because Peter seems to value secrecy
regarding his approach... I've had a mild amount of insider info regarding
the approach (e.g. due to visiting his site a few years ago, etc.) and don't
want to blab stuff on this list that he'd want me to keep secret...

Ben


On Fri, May 23, 2008 at 5:40 PM, Mike Tintner <[EMAIL PROTECTED]>
wrote:
> ... on this:
>
> http://www.adaptiveai.com/news/index.htm
>
> "  Towards Commercialization
>
> It's been a while. We've been busy. A good kind of busy.
>
> At the end of March we completed an important milestone: a demo system 
> consolidating our prior 10 months' work. This was followed by my 
> annual pilgrimage to our investors in Australia. The upshot of all 
> this is that we now have some additional seed funding to launch our 
> commercialization phase late this year.
>
> On the technical side we still have a lot of hard work ahead of us.
> Fortunately we have a very strong and highly motivated team, so that 
> over the next 6 months we expect to make as much additional progress 
> as we have over the past 12. Our next technical milestone is around 
> early October by which time we'll want our 'proto AGI' to be pretty 
> much ready to start earning a living.
>
> By the end of 2008 we should be ready to actively pursue 
> commercialization in addition to our ongoing R&D efforts. At that time 
> we'll be looking for a high-powered CEO to head up our business 
> division which we expect to grow to many hundreds of employees over a few
years.
>
> Early in 2009 we plan to raise capital for this commercial venture, 
> and if things go according to plan we'll have a team of around 50 by 
> the middle of the year.
>
> Well, exciting future plans, but now back to work.
>
> Peter "



--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they will
surely become worms."
-- Henry Miller







[agi] a2i2 is looking for Entry-Level AI Psychologist

2008-02-20 Thread Peter Voss
Adaptive A.I. Inc is looking for an entry-level AI Psychologist.

 

As all of our current staff members (now up to 17) are quite experienced
and highly productive, we are again looking to fill a (Los Angeles based)
full-time, entry-level position.

 

We want someone smart, and highly motivated to work on AGI. Attitude and
work habits are more important than deep technical skills. 

 

For more details see: http://adaptiveai.com/company/opportunities.htm

At this stage the work will entail quite a bit of system training and
testing, as well as sys admin.

 

Because we are expecting rapid expansion of our project and team over the
next few years (we expect to more than triple our staff over the next year),
this position provides excellent advancement opportunities.

 

Our project has definite near-term commercial objectives, and we offer
competitive compensation in addition to the option of equity participation
in our company.

 

Please pass this on to anyone who might fit the bill.

 

Thanks.

 

Peter

 



RE: [agi] a2i2 looking for another AI Psychologist

2007-11-01 Thread Peter Voss
Yes, I did.

 

From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 01, 2007 9:54 AM
To: agi@v2.listbox.com
Subject: Re: [agi] a2i2 looking for another AI Psychologist

 

 

PV: We are looking for another AI Psychologist 

 

Could someone explain what an AI Psychologist is? Peter Voss seems to have
invented a job title.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=59959449-71fd60

[agi] a2i2 looking for another AI Psychologist

2007-11-01 Thread Peter Voss
We are looking for another AI Psychologist to join our team:
http://adaptiveai.com/company/team_oct07.JPG 

 

Details: http://adaptiveai.com/company/opportunities.htm 

 

- Los Angeles based

- Full-time

- Entry-level to experienced: $35k to $70k package incl. some stock options

 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=59954018-83da2f

RE: [agi] a2i2 news update

2007-07-26 Thread Peter Voss
Just a quick note: While Rand's writings helped me a lot to clarify/solve a
number of crucial philosophical/moral questions, I certainly don't subscribe
to either all of her fictional characters' actions, or that of many of her
'true blue' followers. In fact, I don't think that Rand herself was a good
Objectivist! Still, I owe her a lot for numerous crucial insights.

 

What Rand meant by 'selfishness' is really rational, principled, long-term
self-interest. In my book this definitely includes having good EQ, and
caring about the welfare of others.

 

Altruism means selflessness. The logical, though unconventional, conclusion
is that it refers to actions taken irrespective of the effects they have on
you. In fact, actions that are detrimental to you are seen as more
desirable. I do think that this is very harmful.

 

(The seeming paradox of 'psychological altruism', that even altruists are
selfish, has been well explored - e.g. see Nathaniel Branden.)  

 

More in my essay: http://www.optimal.org/peter/rational_ethics.htm 

 

I'll try to address these issues a bit in my upcoming talk:
http://www.singinst.org/summit2007/ 

 

That's about all I have time for now..

 

Back to building brains.

 

  _  

From: Robert Wensman [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 26, 2007 4:12 AM
To: agi@v2.listbox.com
Subject: Re: [agi] a2i2 news update

 

What worries me is that the founder of this company subscribes to the
philosophy of Objectivism, and the implications this might have for the
company's possibility at achieving friendly AI. I do not know about the rest
of their team, but some of them use the word "rational" a lot, which could
be a hint. 

 

I am well aware that Ayn Rand, the founder of Objectivism, uses slightly
non-standard meanings for words like "selfishness" and "altruism", but
her main point is that altruism is the source of all evil in the world, and
selfishness ought to be the main virtue of all mankind. Instead of altruism
she often also uses the word "selflessness" which better explains her
seemingly odd position. What she essentially means is that all evil of the
world stems from people who "give up their values, and their self" and
thereby become mindless evildoers that respect others as little as they
respect themselves. While this psychological statement in isolation could be
worth noting, and might help understand some collective madness, especially
from the last century, I still feel her philosophy is dangerous because she
mixes up her very specific concept of "selflessness" with the commonly
understood concept of altruism, in the sense of valuing the well being and
happiness of others. Is this mix-up accidental or intended? In her novel The
Fountainhead you even get the impression that she doesn't think it is
possible to combine altruism with creativity and originality, as all
"altruistic" characters of her book are incompetent copycats who just
imitate others. 

 

Her view of the world also seems to completely ignore another category of
potential evil-doers: selfish people who just do not see any problem with
using whatever means they see fit, including violence, to achieve their
goals. People who just do not see "any problem" in killing or torturing
others. Does she ignore this group of people because she does not think
they exist?

 

My personal opinion is that Objectivism is a case of what could be called
"the werewolf fallacy". For example, I could make a case for the following
philosophy: "Werewolves as described in literature would be bad for
humanity, and if we encounter werewolves, we should try to fight them with
whatever means we see fit!". This statement is in itself completely true and
coherent, and it would be possible to write books on the subject that could
seem to make sense. The only problem is of course that there are no
werewolves, and there are other much more important things to do than to go
around preparing to fight werewolves! Similarly, I do not think that all
these "selfless people" whom Ayn Rand describes exist in any large numbers,
or at least they are certainly not the main source of evil in the world.

 

How Objectivism could feel like "home" I cannot understand personally. If a
person is less capable of understanding other people, I suppose it could
make some sense. Social life could be hard for such a person; they would
often hurt other people by mistake, make others annoyed or angry, and
frequently bring enemies upon themselves. Ayn Rand gives them a very
comfortable answer, namely that it is ok, even virtuous, not to understand
others as long as you are not physically aggressive. An agenda for peaceful
psychopathy, if you like. So far so good; I don't expect everyone to be
empathetic, and motivating the need for respect rationally by the benefits
of cooperation seems like a reasonable trade-off. But Ayn Rand goes a step
too far when she outright attacks altruism and people who value the well
being o

[agi] a2i2 news update

2007-07-25 Thread Peter Voss
Update from a2i2 -- http://adaptiveai.com/news 

 

Peter Voss

 

Towards Increased Intelligence!

 


RE: [agi] about AGI designers

2007-06-06 Thread Peter Voss
'fraid not. We have to look after our investors' interests. (And, like Ben,
I'm not keen for AGI technology to be generally available.)

 

  _  

From: Kingma, D.P. [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 06, 2007 1:28 PM
To: agi@v2.listbox.com
Subject: Re: [agi] about AGI designers

 

On 6/6/07, Peter Voss <[EMAIL PROTECTED]> wrote:

...
Our goal is to create full AGI, but our business plan is to commercialize an
intermediate-level AGI engine via some highly lucrative applications. Our
target date to commence commercialization is the end of next year. 

Peter Voss
a2i2


The latest news item on your website says "this time our focus is on a May
2007 milestone, when we expect to showcase our progress to various
investors."


Is there any chance that we (non-investors) will get to see a glimpse of
A2I2 progress?

D.P. Kingma



RE: [agi] about AGI designers

2007-06-06 Thread Peter Voss
We are always looking for brilliant, dedicated AI psychologists and
programmers -- and potentially people with other skills. We have the funds
to pay market-related salaries, plus we offer shares in our company.

However, we have found that part-time and/or telecommuting does not work for
us (for various reasons) - people have to live near Playa del Rey (near
LAX). We cannot help with visas (except perhaps for Canadians).

Ours is a commercial project, with maximal practical IP protection. The
company owns all IP.

We use C# on .NET. If you want open-source, or hate MS, then we're the wrong
team.

Our goal is to create full AGI, but our business plan is to commercialize an
intermediate-level AGI engine via some highly lucrative applications. Our
target date to commence commercialization is the end of next year.

Peter Voss
a2i2

http://adaptiveai.com/ 
http://adaptiveai.com/news/index.htm 
http://adaptiveai.com/company/opportunities.htm 



[agi] AGI project goals (was: The role of incertainty)

2007-05-01 Thread Peter Voss
Roughly -

AGI *research* is about developing, testing and exploring theories and
approaches.

AGI *development* is about building practical systems. This requires having
a workable theory/ approach (or an awful lot of luck!).

Both benefit from having well-defined plans/goals; however, a development
project will almost certainly fail totally without clear *practical*
goals/milestones.

Pei does research (great stuff, I might add). I personally think it a pity
that his approach is not part of any development project.

My company, a2i2, spent many years in a research phase. Two years ago we
transitioned into a development company. We obviously still do research, but
it is now targeted at specific sub-problems; our overall AGI theory is in
place.

Lastly, in theory you could come up with a complete AGI design that was
quite agnostic about specific applications: plug in senses and actuators and
train the system (and/or let it learn from its environment). Practically,
such an open-ended approach will suffer far too many inefficiencies, and is
unnecessarily hard. You really want the crucial feedback of practical
performance (or non-performance, as the case may be) to help guide any R&D.

Peter Voss


-Original Message-
Subject: Re: [agi] The role of incertainty

On 5/1/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Define the type of problems it addresses which might be [for all I know]
> *understanding and precis-ing a set of newspaper stories 

Pei Wang replied:
>If one of the above problem is solved by an AGI system, it should be
>the result of learning of the system, rather than an innate capability
>built into the system. ...



RE: [agi] GI as substrate for memetic evolution

2007-04-25 Thread Peter Voss
The essential difference between animal and human intelligence lies in our
ability to form and deal with *abstract* concepts. This enables
self-awareness, volition, etc. Some chains of abstractions are memes
(compact ideas that transmit well in a given environment). 

Lower-level concepts (perceptually grounded) form the basis of all adaptive
intelligence -- for recognition, generalization, differentiation, learning,
surprise, etc.

Our AGI approach is based on these insights (among others).
http://adaptiveai.com/research/index.htm 

Peter Voss


-Original Message-
From: DEREK ZAHN [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 25, 2007 7:25 AM
To: agi@v2.listbox.com
Subject: [agi] GI as substrate for memetic evolution

Nothing particularly original here, but I think
it's kind of interesting.

Suppose that at some point, basically by accident,
the brains of our ancestors became capable of supporting
the evolution of "memes".

Biological evolution started with a LOOONG period of
low complexity creatures, during which time the basic
pieces were discovered, the fruitful building blocks
on which diversity would flourish.  Similarly, our
ancestors were stupid for a long time because they only
hosted simple memes, churning away in their brains.
Eventually the core building blocks for complex memes
were discovered, leading to faster and faster progress
as the size of the substrate increased through population,
the ability for fitter memes to spread increased through
written language and internets, and the memes themselves
are more complex and diverse.

Looking at it this way, we can define the general
intelligence of an individual as the extent to which it
can function as part of this substrate for memetic
evolution -- the ability to host, communicate, and
perform genetic operations on memes as they exist
in human culture.

This definition also suffers from a lack of intermediate
progress points, but it suggests that studying memes
may be an interesting alternative (or precursor) to
studying reasoning, perception, memory, architectures,
etc.

Are any prospective AGI builders working from a viewpoint
similar to this?
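
The definition above (general intelligence as the ability to host, communicate, and perform genetic operations on memes) is easy to play with in simulation. The sketch below is a toy illustration only, not anyone's actual AGI approach: the bitstring encoding of a "meme", the transmissibility score, and every function name are invented for this example. Hosts imitate the more transmissible of two memes, and copying introduces errors, so fitter memes spread through the population:

```python
import random

def mutate(meme, rate=0.05, rng=random):
    # Copy a meme (a tuple of bits) with occasional transcription errors.
    return tuple(b ^ 1 if rng.random() < rate else b for b in meme)

def fitness(meme):
    # Toy "transmissibility": memes with more 1-bits spread better.
    return sum(meme)

def step(hosts, rng):
    # Each host compares its meme with a randomly chosen neighbour's,
    # adopts the more transmissible one, and copies it imperfectly.
    new = []
    for meme in hosts:
        other = rng.choice(hosts)
        winner = max(meme, other, key=fitness)
        new.append(mutate(winner, rng=rng))
    return new

def evolve(n_hosts=50, meme_len=16, steps=100, seed=0):
    rng = random.Random(seed)
    hosts = [tuple(rng.randint(0, 1) for _ in range(meme_len))
             for _ in range(n_hosts)]
    for _ in range(steps):
        hosts = step(hosts, rng)
    return hosts

# After a few generations the population is dominated by highly
# transmissible memes, despite no host "designing" them.
hosts = evolve()
print(max(fitness(m) for m in hosts), "of", len(hosts[0]), "bits set")
```

Under this reading, a host's "general intelligence" would correspond to how faithfully and selectively it copies, mutates, and recombines memes, which is the knob the `mutate` and `step` functions crudely stand in for.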





[agi] Cyc a failure?

2007-04-17 Thread Peter Voss
In these forums I often hear that Cyc is a failure. Somebody recently
mentioned it again.

 

My question: What specifically is Cyc unable to do? What are the tests, and
how did it fail?

 

I'd be interested in specifics.

 

Peter

 

PS. My own view is that Cyc does not have what it takes to be(come) an AGI -
I've written about that elsewhere (mainly theoretical reasons).

 

PPS. More generally, I plan to ask the same questions about other 'AGI
failures'.


RE: [agi] Low I.Q. AGI

2007-04-15 Thread Peter Voss
Pei, what are yours?

-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 15, 2007 8:32 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Low I.Q. AGI

On 4/15/07, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> There is an easy assumption of most writers on this board that once the
> AGI exists, its route to becoming a singularity is a sure thing.

I'm not sure whether this assumption is really shared by most of the
people here. At least I don't think AGI leads to singularity, though
my reason is not the same as yours.

Pei

> Why is that?
> In humans there is a wide range of "smartness" in the population. People
> face intellectual thresholds that they cannot cross because they just do
> not have enough of this smartness thing. Although as a physicist I
> understand General Relativity, I really doubt that if it had been left up
> to me it would ever have been discovered - no matter how much time I was
> given. Do neuroscientists know where this talent difference comes from in
> terms of brain structure? Where in the designs for other AGIs (Ben's for
> example) is the smartness of the AGI designed in? I can see how an
> awareness may bubble up from a design, but this doesn't mean a system
> smart enough to move itself towards being a singularity. Even if you feed
> the system all the information in the world, it would know a lot but not
> be any smarter, or even know how to make itself smarter. How many years
> of training will we give a brand new AGI before we decide it's retarded?



[agi] a2i2 news update

2007-04-04 Thread Peter Voss
Update from a2i2 -- http://adaptiveai.com/news 

 

Peter

 

Towards Increased Intelligence!

 

 



RE: [agi] My proposal for an AGI agenda

2007-03-25 Thread Peter Voss
Chuck is exactly right.

A successful AGI project (or anything else large & difficult) depends on
someone's vision and leadership: technical (design), motivational
(psychological and goal-defining), and execution (engineering and
management).

Peter

(Chuck, as for the million dollars: join our project...)


-Original Message-
From: Chuck Esterbrook [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 25, 2007 12:37 AM
To: agi@v2.listbox.com
Subject: Re: [agi] My proposal for an AGI agenda

> On 3/24/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
> > On 3/25/07, rooftop8000 <[EMAIL PROTECTED]> wrote:
...
> Simply voting on individual features cannot work because all the features
> of an AGI are inter-related; they have to work together synergistically.

A committee approach to architecture has historically failed
repeatedly, especially where breaking new ground is concerned. Someone
needs to be the leader/visionary. Usually the one that forms the
group...

> I'd make a bronze statue of anyone who can solve this problem!!

Can I get a million dollars instead?  :-)

-Chuck




RE: [agi] My proposal for an AGI agenda

2007-03-20 Thread Peter Voss
Yes, David, some good ideas.

We are well into our AGI prototype using C# and are quite happy with it.
However, fully integrated reflection, DB support, etc. would be nice.

I designed and implemented a very comprehensive language (called One) in the
'80s and used it to code a large commercial application: that was heaven.
But getting other people to work on it (a bad career move), and trying to
develop development tools as good as or better than commercial ones, proved
to be its undoing.

I see a specialized language for AGI as a (huge) distraction. C#/.NET gives
you a fighting chance to overcome the most serious limitations.

I must say it *would* be nice to have a z# (i.e. C# plus the full wishlist).

Peter Voss


-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 20, 2007 9:57 AM
To: agi@v2.listbox.com
Subject: Re: [agi] My proposal for an AGI agenda


>  For people who might be interested in influencing some of the 
> features of this system, I would appreciate them looking at my 
> documentation at www.rccconsulting.com/hal.htm 
> <http://www.rccconsulting.com/hal.htm>  Although my system isn't quite 
> ready for alpha distribution yet, I expect that it will be within a 
> few months.  People that help with the alpha and beta testing will be 
> given consideration on the use of the system in the future even if 
> they don't participate in the AGI development.
>  
> When this project goes ahead, I think even Ben (who has a huge 
> intellectual and financial investment in his Novamente project) will 
> be interested in the experiments and results a system like I am 
> proposing will have, even if he never interfaces his program with it.
>  
>

Your programming language looks interesting, and well designed in many 
ways, but I have never been convinced that the inadequacies of current 
programming languages are a significant cause of the slow progress we 
have seen toward AGI.

If you were introducing a radically new programming paradigm for AGI, I 
would be more interested... Not that I think this is necessary to 
achieve AGI, but I would find it more intellectually stimulating ;-)

Regarding languages, I personally am a big fan of both Ruby and 
Haskell.  But, for Novamente we use C++ for reasons of scalability.

-- Ben




RE: [agi] Logical representation

2007-03-12 Thread Peter Voss
Evolutionary approaches are what you use when you run out of engineering
ideas... (and run out of statistical approaches)

The last game in town.

Some of us are making good progress towards AGI via engineering.

Peter

-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 12, 2007 1:43 PM
To: Russell Wallace; agi@v2.listbox.com
Subject: Re: [agi] Logical representation

On Mon, Mar 12, 2007 at 07:47:26PM +, Russell Wallace wrote:

...

The first and biggest step is to get your system to learn how to evolve.
I understand many do not yet see this as a problem at all.

>shot at that route, let me know if you want a summary of conclusions
>and ideas I got to before I moved away from it.

I don't understand why you moved away from it (it's the only game
in town), but if you have a document of your conclusions to share,
fire away.

-- 
Eugen* Leitl http://leitl.org



RE: [agi] Development Environments for AI

2007-02-21 Thread Peter Voss
Mark Waser >>Could you educate us some?

Not really. Many SQL problems have solutions/workarounds, but we let
cost-benefit analysis decide. We found that for most of our requirements our
own solutions were more effective than using SQL. We did invest several
man-months on these issues.

Peter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 21, 2007 2:03 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Development Environments for AI 

> PS. Regarding Databases, in our own work we use SqlServer for some things
> (it's an integral part of our overall system), but found it quite useless
> for central knowledge representation.

My personal WAG (wild-assed guess) is that you can/want to use SqlServer for
two purposes:  a) similar to long-term memory and b) context/swap space.  I 
certainly wouldn't want to use it for working memory (which seemed more of 
your thrust than what I would call central knowledge representation).

> There were many specific
> problems we encountered in trying to make more use of SQL.

Could you educate us some?


- Original Message - 
From: "Peter Voss" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, February 21, 2007 4:05 PM
Subject: [agi] Development Environments for AI


>A few comments on this debate: It's good to keep in mind the different
> premises that people hold --
>
> There are those who believe that no-one knows how to build AGI at this
> stage. For them, discussing reverse-engineering the brain or building new
> experimental/exploratory development environments may make sense.
>
> Then there are those who believe that current hardware has inappropriate
> architecture and/or technology, or is *way* underpowered (many orders of
> magnitude). Not much hope there, unless you believe that you can design &
> build such new systems AND you know what AGI will require!
>
> Lastly, there are those of us who believe that there are indeed some 
> people
> who know how to build AGI *now*. Hopefully, those would concentrate on
> finding the best approaches, and making it happen ASAP.
>
> Peter Voss
> http://adaptiveai.com/
>
>
> PS. Regarding Databases, in our own work we use SqlServer for some things
> (it's an integral part of our overall system), but found it quite useless
> for central knowledge representation. The three main reasons are: many
> real-time inserts (they are slow), the need for very large numbers of 
> simple
> queries that depend on previous results (single query overhead kills
> performance), and the need for specialized data requirements (very sparse
> tables, highly dynamic table creation, etc.). There were many specific
> problems we encountered in trying to make more use of SQL.
>
>
>
>




[agi] Development Environments for AI

2007-02-21 Thread Peter Voss
A few comments on this debate: It's good to keep in mind the different
premises that people hold --

There are those who believe that no-one knows how to build AGI at this
stage. For them, discussing reverse-engineering the brain or building new
experimental/exploratory development environments may make sense.

Then there are those who believe that current hardware has inappropriate
architecture and/or technology, or is *way* underpowered (many orders of
magnitude). Not much hope there, unless you believe that you can design &
build such new systems AND you know what AGI will require!

Lastly, there are those of us who believe that there are indeed some people
who know how to build AGI *now*. Hopefully, those would concentrate on
finding the best approaches, and making it happen ASAP.

Peter Voss
http://adaptiveai.com/ 


PS. Regarding Databases, in our own work we use SqlServer for some things
(it's an integral part of our overall system), but found it quite useless
for central knowledge representation. The three main reasons are: many
real-time inserts (they are slow), the need for very large numbers of simple
queries that depend on previous results (single query overhead kills
performance), and the need for specialized data requirements (very sparse
tables, highly dynamic table creation, etc.). There were many specific
problems we encountered in trying to make more use of SQL.
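Peter's second complaint, that per-query overhead kills performance when many small inserts or dependent queries are issued one at a time, can be illustrated with a small, self-contained sketch. This uses Python's bundled sqlite3 purely as a stand-in for SqlServer (an assumption for illustration, not anything from the thread): committing each insert separately pays the per-statement overhead he describes, while batching into one transaction amortizes it.

```python
import sqlite3
import time

def timed_inserts(rows, batch):
    """Insert rows into an in-memory table; one implicit transaction per row
    when batch is False, a single explicit transaction when batch is True.
    Returns (elapsed seconds, row count)."""
    conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
    conn.execute("CREATE TABLE kb (subj TEXT, pred TEXT, obj TEXT)")
    t0 = time.perf_counter()
    if batch:
        conn.execute("BEGIN")
        conn.executemany("INSERT INTO kb VALUES (?, ?, ?)", rows)
        conn.execute("COMMIT")
    else:
        for r in rows:
            # each statement is its own transaction: per-statement overhead
            conn.execute("INSERT INTO kb VALUES (?, ?, ?)", r)
    elapsed = time.perf_counter() - t0
    count = conn.execute("SELECT COUNT(*) FROM kb").fetchone()[0]
    conn.close()
    return elapsed, count

rows = [(f"node{i}", "isa", "concept") for i in range(20000)]
one_by_one, n1 = timed_inserts(rows, batch=False)
batched, n2 = timed_inserts(rows, batch=True)
print(f"one-by-one: {one_by_one:.3f}s  batched: {batched:.3f}s  rows: {n1}, {n2}")
```

The batched run is typically much faster; the gap only widens with a networked server, where every round-trip adds latency on top of the transaction cost.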






RE: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Peter Voss
Dynamic code generation is not a major aspect of our AGI.

To clarify: While I agree that many AI apps require massively parallel
number-crunching, in our AGI approach neither massive parallelism nor
number-crunching is a major requirement. 'Number crunching' is of course
part of any serious AI/AGI implementation, but we find that (software)
design is by far the more important bottleneck.


-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 
Sent: Saturday, February 17, 2007 8:50 AM
To: agi@v2.listbox.com
Subject: Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]

On Sat, Feb 17, 2007 at 08:46:17AM -0800, Peter Voss wrote:

> We use .net/ c#, and are very happy with our choice. Very productive.

I don't know much about those. Bytecode, JIT at runtime? Might not be
too slow. If you use code generation, do you do it at source or at
bytecode level?
 
> Eugen>>(Of course AI is a massively parallel number-crunching
application...
> 
> Disagree.

That it is massively parallel, or number-crunching? Or neither
massively-parallel,
nor number-crunching?

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820  http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



RE: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Peter Voss
We use .net/ c#, and are very happy with our choice. Very productive.

Eugen>>(Of course AI is a massively parallel number-crunching application...

Disagree.

Peter Voss

http://adaptiveai.com/ 


-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 
Sent: Saturday, February 17, 2007 8:35 AM
To: agi@v2.listbox.com
Subject: Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]

On Sat, Feb 17, 2007 at 08:24:21AM -0800, Chuck Esterbrook wrote:

> What is the nature of your language and development environment? Is it
> in the same neighborhood as imperative OO languages such as Python and
> Java? Or something "different" like Prolog?

There are some very good Lisp systems (SBCL) with excellent compilers,
rivalling C and Fortran in code quality (if you avoid common pitfalls
like consing). Together with code and data being represented by
the same data structure and good support of code generation by code
(more so than any other language I've heard of) makes Lisp an evergreen
for classical AI domains. (Of course AI is a massively parallel
number-crunching application, so Lisp isn't all that helpful here).
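Eugen's point about code generating code is not Lisp-specific in principle; any language with runtime compilation can do a crude version of it. A minimal sketch of source-level code generation (illustrative only, not from the thread; the polynomial example is an invented one):

```python
def make_poly(coeffs):
    """Generate a function evaluating the polynomial with the given
    coefficients, with the sum unrolled into straight-line source text,
    then compiled and loaded at runtime."""
    terms = " + ".join(f"{c} * x**{i}" for i, c in enumerate(coeffs))
    src = f"def poly(x):\n    return {terms}"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["poly"]

poly = make_poly([1, 0, 2])   # 1 + 2*x^2
print(poly(3))                # prints 19
```

What Lisp adds, per Eugen's point, is that generated code is ordinary list data rather than strings, so it can be inspected and transformed with the same tools as any other data.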

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820  http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




[agi] hard-coded inductive biases

2007-02-14 Thread Peter Voss
>>... various comments ...

It's more fundamental than that: the design of your 'senses' - what feature
extraction, sampling and encoding you provide - lays a primary foundation
for induction.
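Peter's point can be made concrete with a toy sketch (illustrative, not from the thread): the same learner, on the same data, succeeds or fails depending purely on the encoding its 'senses' provide. A simple perceptron cannot induce XOR from the raw two-bit encoding, but succeeds once the senses also extract the product feature x1*x2.

```python
def perceptron(samples, epochs=200):
    """Train a plain perceptron on (feature-tuple, label) pairs;
    return accuracy on the training set."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:                      # standard perceptron update
                for i in range(n):
                    w[i] += (y - pred) * x[i]
                b += (y - pred)
    correct = sum(
        (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
        for x, y in samples)
    return correct / len(samples)

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
raw_acc = perceptron(xor)                              # raw two-bit encoding
eng = [((x1, x2, x1 * x2), y) for (x1, x2), y in xor]  # engineered 'sense'
eng_acc = perceptron(eng)
print(raw_acc, eng_acc)
```

With the product feature the data becomes linearly separable, so the perceptron converges to perfect accuracy; on the raw encoding no linear threshold can, which is the inductive bias the encoding imposes.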

Peter




RE: [agi] (video)The Future of Cognitive Computing

2007-01-21 Thread Peter Voss
Yes, then we disagree. I'm convinced that in hindsight, AGI will turn out to
be simpler than we think now (all of us). Even without dedicated AGI work,
many sub-problems of AGI will be solved in the normal course of development.


We will have much better common-knowledge databases to work with, and many
more practical examples of powerful and useful narrow AI applications (more
clearly identifying their limitations).

Putting the remaining pieces together will become much easier.

(Even ignoring our own project) I predict this will happen way before 2030. 

Peter


-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Sunday, January 21, 2007 8:57 AM
To: agi@v2.listbox.com
Subject: Re: [agi] (video)The Future of Cognitive Computing

Peter wrote:
> > Ben, your comment seems to reflect your frustration at lack of funding
> > rather than a realistic assessment of the situation. Even if no
*dedicated*
> > AGI engineering project is first to achieve AGI, people in the
software/AI
> > community will "stumble on" a solution long before reverse engineering
> > becomes feasible. Don't you agree?

Actually, I don't quite agree with your penultimate sentence...

I don't think that people in the software/AI community are likely to
"stumble on" a solution to AGI in the next few decades  Definitely
not before the 2030-or-so time-point that Kurzweil speculates for the
human-brain-emulation approach.

Narrow-AI and AGI are pretty different.  If we have to rely on the
narrow-AI folks -- even smart ones like the Google folks -- to get us
to AGI, then we're in pretty bad luck, and the brain-emulators may win
the race...

But my projection is that explicit AGI research is going to zoom to
prominence vividly and excitingly sometime in the first half of the
next decade.

Of course, what triggers this may be when word of our amazing AGI
successes with Novamente ... or yours with A2I2 ... begin to leak out
;-)   [Not that we have had amazing AGI successes yet with
Novamente, except on the theory-and-design level -- we've had some fun
small-scale practical successes, but our practical work is still at
the level of a very sophisticated cognitive-mechanism-toolkit being
used to drive a pretty primitive baby-AI  -- but I'm projecting into
the future...!].  Then we will see just how fast the takeoff is, and
how big is the vaunted first-mover advantage after all ;=)

ben



RE: [agi] (video)The Future of Cognitive Computing

2007-01-21 Thread Peter Voss
Historically, all approaches to AGI have failed so far.

> Trying to reverse-engineer a known working system might do less for one's
ego,

I really don't think ego is the issue here.

IMO, creating AGI via emulating the human brain **or** by creating a
novel architecture, would be mighty ego-gratifying for nearly
anybody!!!

> but it's the only game in town, as far as I can see.

It is not the only game in town; there are plenty of us taking the
computer-science approach to AGI as well.

Your point of view seems even more extreme than Kurzweil's, as he e.g.
mentioned Novamente favorably in "The Singularity is Near."

I find it frustrating that your attitude is at least approximately
shared by so many others, including Kurzweil and some of the high
muck-a-mucks at IBM.  However, I also find it gratifying that there
are many other smart folks such as Pei and Peter Voss and others on
this list who have a more open-minded attitude regarding various AGI
approaches.

-- Ben G



RE: [agi] (video)The Future of Cognitive Computing

2007-01-21 Thread Peter Voss
Eugen> IBM is smart. They know what they're doing.

Yeah! What an impressive argument.


Eugen> There are shorter paths, but nobody knows where they are

More is known about the "shorter paths" than about the actual functioning of
the human mind/brain.

* All current useful robots are engineered, not reverse-engineered.
* All AI successes so far are engineered solutions, not copies of wetware
(Deep Blue, Darpa Challenge, Google, etc.)
* Planes have been flying for >100 years, yet we haven't even
reverse-engineered a sparrow's fart...


Ben> ... the prophecy that human brain emulation will be the initial path to
AGI could become a self-fulfilling one.

Ben, your comment seems to reflect your frustration at lack of funding
rather than a realistic assessment of the situation. Even if no *dedicated*
AGI engineering project is first to achieve AGI, people in the software/AI
community will "stumble on" a solution long before reverse engineering
becomes feasible. Don't you agree?

Peter Voss
http://adaptiveai.com/ 



-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 

On Sun, Jan 21, 2007 at 10:03:52AM -0500, Benjamin Goertzel wrote:

> One thing I find interesting is that IBM is focusing their AGI-ish
> efforts so tightly on human-brain-emulation-related approaches.

IBM is smart. They know what they're doing.
 
> Kurzweil, as is well known, has forecast that human brain emulation is
> the most viable path to follow to get to AGI.  I agree that it is a
> viable path, but I don't think it is anywhere near the shortest path.

There are shorter paths, but nobody knows where they are. That's
the key point of it: the world is complicated. Dealing with the
world takes lots of machinery. There's a strange cognitive bias in
people, AIlers specifically, to think that AI is based on some
simple generic method, and they just know what it is. No validation
or further evidence required; it's all obvious. Whomever
you ask, they all know it, but all their answers differ. Historically,
this approach has failed abysmally. Trying to reverse-engineer
a known working system might do less for one's ego, but it's the only
game in town, as far as I can see.

> However, I think it's possible (though not extremely likely) that if
> all the pundits and funding sources (like IBM) continue to harp on the
> brain-emulation approach to the exclusion of other approaches, the
> prophecy that human brain emulation will be the initial path to AGI
> could become a self-fulfilling one ;-p ...

In this race, there are no second places.

-- 
Eugen* Leitl http://leitl.org
__



[agi] a2i2 news update

2007-01-02 Thread Peter Voss
Here's our latest update: http://adaptiveai.com/news

Adaptive A.I. Inc currently has two openings:
http://adaptiveai.com/company/opportunities.htm

Peter



[agi] AGI meeting in Austin on Sunday Dec 10th?

2006-12-04 Thread Peter Voss
I'll be in Austin next Sunday. 

If anyone there would like to meet to talk about AGI (and other things
extropian), please contact me privately at [EMAIL PROTECTED] 

Peter Voss



RE: [agi] new paper: What Do You Mean by "AI"?

2006-11-17 Thread Peter Voss
That'll be when you join our project...

... or buy our product...   :)


-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED] 

Peter,

Thanks!

I look forward to the day when you can tell us more about a2i2. :-)

Pei

On 11/17/06, Peter Voss <[EMAIL PROTECTED]> wrote:
> Hi Pei,
>
> Just finished reading your "Rigid Flexibility" book; it's a nice summary of
> your approach.
>
> I can recommend it to anyone interested in AGI: If you agree with Pei's
> general approach it provides quite a bit of detail; if you disagree, it
> provides a coherent reference point.
>
> Peter



RE: [agi] new paper: What Do You Mean by "AI"?

2006-11-17 Thread Peter Voss
Hi Pei,

Just finished reading your "Rigid Flexibility" book; it's a nice summary of
your approach.

I can recommend it to anyone interested in AGI: If you agree with Pei's
general approach it provides quite a bit of detail; if you disagree, it
provides a coherent reference point.

Peter



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 17, 2006 8:52 AM
To: agi@v2.listbox.com
Subject: [agi] new paper: What Do You Mean by "AI"?

Hi,

A new paper of mine is put on-line for comment. English corrections
are also welcome. You can either post to this mailing list or send me
private emails.

Thanks in advance.

Pei

---

TITLE: What Do You Mean by "AI"?

ABSTRACT: Many problems in AI study can be traced back to the
confusion of different research goals.  In this paper, five typical
ways to define AI are clarified, analyzed, and compared. It is argued
that though they are all legitimate research goals, they lead the
research in very different directions. Furthermore, most of them have
trouble giving AI a proper identity.

URL: http://nars.wang.googlepages.com/wang.AI_Definitions.pdf



[agi] a2i2 Looking for Entry-Level AI Psychologist

2006-10-25 Thread Peter Voss








Adaptive A.I. Inc is looking for an entry-level AI Psychologist.

As all of our current staff members are now quite experienced and highly
productive, we are again looking to fill a (Los Angeles based) entry-level
position.

We want someone smart, and highly motivated to work on AGI. Attitude and
work ethic are more important than deep technical skills.

For more details see: http://adaptiveai.com/company/opportunities.htm
At this stage the work will entail quite a bit of system training and
testing.

Because we are expecting rapid expansion of our project and team over the
next 2 years, this position provides excellent scope for advancement.

Our project has definite commercial objectives, and we offer competitive
compensation in addition to the option of equity participation in our
company.

Please pass this on to anyone who might fit the bill.

Thanks.

Peter

PS. We are also still looking for a highly experienced programmer, as
well as two managers (see opportunities)









[agi] SOTA

2006-10-18 Thread Peter Voss
I'm often asked about state-of-the-art in AI, and would like to get some
opinions.

What do you regard, or what is generally regarded as SOTA in the various AI
aspects that may be, or may be seen to be relevant to AGI?

For example: 

- Comprehensive (common-sense) knowledge-bases and/or ontologies
- Inference engines, etc.
- Adaptive expert systems
- Question answering systems
- NLP components such as parsers, translators, grammar-checkers
- Interactive robotics systems (sensing/ actuation) - physical or virtual
- Vision, voice, pattern recognition, etc.
- Interactive learning systems
- Integrated intelligent systems
... whatever ...

I'm looking for the best functionality -- irrespective of proprietary,
open-source, or academic.



Peter



RE: [agi] Failure scenarios

2006-09-25 Thread Peter Voss








Looking at past and current (likely) failures –

- trying to solve the wrong problem in the first place, or
- not having good enough theory/ approaches to solving the right problems, or
- poor implementation

However, even though you specifically restricted your question to technical
matters, by far the most important reasons are 'managerial' – i.e. staying
focused on general intelligence, funding, project management, etc.

Peter

http://adaptiveai.com/faq/index.htm#little_progress









From: Joshua Fox [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 25, 2006 5:53 AM
To: agi@v2.listbox.com
Subject: [agi] Failure scenarios



 



I hope this question isn't too forward, but it would certainly help clarify
the possibilities for AGI.

To those doing AGI development: If, at the end of the development stage of
your project -- say, after approximately five years -- you find that it has
failed technically to the point that it is not salvageable, what do you
think is most likely to have caused it? Let's exclude financial and
management considerations from this discussion; and let's take for granted
that a failure is just a learning opportunity for the next step.

Answers can be oriented to functionality or implementation. Some examples:
True general intelligence in some areas, but so unintelligent in others as
to be useless; Super-intelligence in principle but severely limited by
hardware capacity to the point of uselessness. But of course, I'm
interested in _your_ answers.

Thanks,

Joshua





RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss
AGI ideas that are well developed can be quite concrete, as well as having
payoffs in the near future. Our project's business plan aims to do both.

Peter


-Original Message-
From: Eliezer S. Yudkowsky [mailto:[EMAIL PROTECTED] 

Additional factor:  AGI ideas are often vague or analogical.  Even the 
ideas with mathematically describable internals are often vague in the 
explanation of what they are supposed to do, or why they are supposed to 
be "intelligent".  It would be harder to cooperate on a project like 
that, than on developing a faster sorting algorithm.  Fuzzy beliefs are 
harder to communicate; communication is the essence of cooperation.


-Original Message-
From: Neil H. [mailto:[EMAIL PROTECTED] 

Of course, one might also argue that they simply didn't venture far
enough to see the proverbial "light at the end of the tunnel." I
suppose one of the downsides about AGI is that, unlike more focused AI
research (vision, NLP, etc), there really aren't any intermediate
payoffs between now and the "holy grail."




RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss








Yes, an important point. For our project we invented a new profession: AI
psychologist.

It is very hard to find computer scientists who are comfortable thinking
about a program (AGI) in terms of teaching, training and psychology.
Conversely, developmental and cognitive psychologists usually don’t have
an interest in computers/ programming.

Peter

PS. http://adaptiveai.com/company/opportunities.htm


 











From: Andrew Babian [mailto:[EMAIL PROTECTED] 

On Wed, 13 Sep 2006 18:04:31 +0300, Joshua Fox wrote
> I'd like to raise a FAQ: Why is so little AGI research and development
> being done?

I think this is a very good question.  Maybe the problem has just been
daunting.  It seems like only recently have there really started to be
some good theoretical models, and maybe people just haven't realized that it
may have just become reasonable.  So maybe some is inertia.  I'm in
town here with Stan Franklin, who is one of those working on a general model,
though I don't work with his group.  He's had a relationship with the
cognitive science people at the university here, and is glad to be able to do
"real science".  And it does seem like the computer people and
psychologists really are in separate worlds and are not that into reaching
out.  I remember talking to a cog psych graduate student who seemed to
have interests in understanding how minds might work.  But I'm from an
engineering background, and talking to her, it seemed like she came out and
said she was only interested in how people work, and had no interest in how to
get a machine to do it.  A matter of priorities and interest, then,
perhaps.  As for the principles, I also seem to remember that they had
some trouble getting the primary cognitive psychologist to get that interested
in helping with the theoretical psychology because he had so many other things
he was working on.  My exposure to that group was very limited though, but
I remember getting that feeling.  And, they have a cog sci seminar where
really try to get the computer people to work with the psychologists, but a
semester is too short.  I suppose I need to find out if there are any
deeper collaborations going on. …










RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss








I considered and researched this issue thoroughly a few years ago.

For a summary: http://adaptiveai.com/faq/index.htm#few_researchers
For detail: http://adaptiveai.com/research/index.htm (section 8)

In addition to asking researchers you also need to look at psychological
and hidden motives, as well as the dynamics of funding sources (DARPA,
etc), business and academia.

Peter

 

 











From: Joshua Fox [mailto:[EMAIL PROTECTED]




I'd like to raise a FAQ: Why is so little AGI research and development
being done?...










[agi] a2i2 is ready to expand team once again

2006-08-04 Thread Peter Voss








Our latest news flash: http://adaptiveai.com/news/

News Flash

Our project is progressing well, and we are once again looking to expand
our team.

This is an opportunity for a select few individuals to become members of
our core team, and to significantly contribute to the creation of real
AGI.

Various Los Angeles based full-time positions are currently open. All
offer competitive cash compensation.

Please see http://adaptiveai.com/company/opportunities.htm for details,
and apply to [EMAIL PROTECTED]

Peter Voss

Towards Increased Intelligence!
 




To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]





[agi] a2i2 News Update

2006-05-12 Thread Peter Voss
Here's our latest news update: http://adaptiveai.com/news/index.htm 

Peter



[agi] Anyone for a pre Singularity Summit meeting this Friday?

2006-05-08 Thread Peter Voss
I'll be in Palo Alto from Friday afternoon for the Singularity Summit:
http://sss.stanford.edu/ 

Anyone interested in meeting Friday afternoon or evening to socialize and
talk about Futurism/ Extropy/ AGI ?

You can contact me at [EMAIL PROTECTED] 

Peter Voss

http://www.optimal.org/peter/peter.htm 





[agi] 4 positions open at a2i2

2006-03-07 Thread Peter Voss








A quick update from a2i2 –

* We have another two key investors in our company, and now don’t expect
funding to be a bottleneck for the remainder of our development project.

* Work on our prototype is progressing well – we are on schedule to
achieve robust human-level learning and cognition within 2 years.

* We have four Los Angeles based full-time positions open. All are
offering competitive cash compensation.

- Entry-level AI Psychologist/ programmer
- Experienced, and highly competent programmer
- Software engineering team-leader / CTO candidate
- Senior AI psychologist to head up our AGI training/ testing/ knowledge
  acquisition effort

These are opportunities to become a member of our core team, and to
significantly contribute to the creation of real AGI.

Please see: http://adaptiveai.com/company/opportunities.htm and apply to
[EMAIL PROTECTED]

Peter Voss

Adaptive A.I. Inc

 









[agi] a2i2 news update: still looking for additional talent

2005-11-11 Thread Peter Voss




a2i2 is still looking for additional team members.

http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter Voss

Adaptive A.I. Inc






[agi] a2i2 opportunities

2005-09-07 Thread Peter Voss
We are now well into the implementation phase of our AGI project, but have
not yet found suitable candidates for two key positions -

1) Senior Programmer/Team Leader (CTO candidate)

2) Chief AI Psychologist/Team Leader

http://adaptiveai.com/company/opportunities.htm

Please pass this along to anyone you think may be suitable.

Peter Voss

Adaptive A.I. Inc.



[agi] a2i2 news update - We're Hiring!

2005-07-08 Thread Peter Voss




All Systems Go for Project Aigo - We're Hiring!

Please spread the word. Help us find additional talent.

http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter Voss

a2i2 - Adaptive A.I. Inc.






[agi] a2i2 - news update (seeking CTO)

2005-04-09 Thread Peter Voss




Funding progress!

News update: http://adaptiveai.com/news/

Please help us find a CTO: http://adaptiveai.com/company/opportunities.htm

(Also seeking LA-based programmer)

Towards Increased Intelligence!

Peter






[agi] a2i2 news update

2005-03-04 Thread Peter Voss



A news update: http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter






[agi] a2i2 news: The Next Phase

2005-01-11 Thread Peter Voss
Our project is ready to make the transition from doing internal,
proof-of-concept research to developing a fully-functioning, high-level
working model.

We are seeking suggestions for one or two additional advisors to our
project.

http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter



[agi] a2i2 news update

2004-10-25 Thread Peter Voss
a2i2 news update: http://adaptiveai.com/news/index.htm
 
Peter Voss

 
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.782 / Virus Database: 528 - Release Date: 10/22/2004



[agi] Accelerating Change Conference (Early Bird Registration Deadline: Sept 30th)

2004-09-29 Thread Peter Voss
Accelerating Change Conference - Physical Space, Virtual Space, and
Interface
Stanford University, Palo Alto CA
November 5 - 7, 2004


In case you guys didn't know about this cool futurist conference:
http://www.accelerating.org/ac2004/

Five of us from a2i2 will be there!

Hope to see some of you there.

Best,

Peter


PS. Note: Early bird registration deadline is Sept 30th



RE: [agi] Psychometric AI

2004-09-17 Thread Peter Voss
I can't find it in the archives. Can you give me a link?

Thanks,

Peter


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of J. W. Johnston

...As AGI testing and validation goes, some might recall in my IVI
Architecture posted here about a year ago, I specified testing to proceed
from Mental Status Tests (basic orientation, attention, memory, etc. tests
like a human neurologist would administer) ...



RE: [agi] SOTA 50 GB memory for Windows

2004-08-21 Thread Peter Voss
Visual Studio (beta) with 64-bit (c#) compilation is available now:

http://lab.msdn.microsoft.com/vs2005/productinfo/productline/

as is Windows XP 64-bit for testing.

That's all one needs for development.

Peter




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Shane
Sent: Friday, August 20, 2004 9:57 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] SOTA 50 GB memory for Windows



Well I guess I have become skeptical about when they will
release such a thing as they have been saying that they will
put out a 64 bit version for Intel for years now but then
always pushing back the release date.  No doubt it's related
to the difficulties Intel has been having with Itanium.
If Intel deliver their AMD compatible 64 bit chips reasonably
soon then surely a 64 bit Windows release can't be too far
away.

The other thing is that when something as big as this changes
in the OS it can take a while for various things to straighten
themselves out.  Things like development tools, devices drivers
and so on.  I guess for you the key thing is when they will
deliver a 64 bit version of C# and associated tools.

Curiously, you could get 64 bit Windows for Alpha CPUs about
8 years ago!  A friend of mine used to develop things for it
way back then, but that version of Windows was eventually
killed off by Microsoft.

Shane

Peter Voss wrote:
> Microsoft Updates 64-bit Windows XP Preview Editions
>
> http://www.eweek.com/article2/0,1759,1637471,00.asp
>
> Peter



RE: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
Microsoft Updates 64-bit Windows XP Preview Editions

http://www.eweek.com/article2/0,1759,1637471,00.asp

Peter



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
Shane

. However there isn't a 64 bit version of Windows on the market nor will
there be for some time.  Thus your only option is to run something like
Linux if you want to have all this data being accessed by code directly in
RAM





RE: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
Thanks Andrew.

I didn't realize that RAID cache doesn't help on reads (like RAM disks do).
Just how expensive is a high-performance 50GB RAM disk system?

Off hand, anyone know progress/ETA on Intel EM64T for .net languages (C#)?

Also, what Windows compatible machines offer the most RAM? (Dell seems to
max out at 8 GB)

Peter



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of J. Andrew Rogers
Sent: Friday, August 20, 2004 2:12 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] SOTA 50 GB memory for Windows


> I'm looking for price & performance (access time) for:
>
> 1) Cached RAID


This will be useless for runtime VM or pseudo-VM purposes.  RAID cache
isolates the application from write burst bottlenecks when syncing disks
(e.g. checkpointing transaction logs), but that's about it.  For flatter
I/O patterns, you'll lose 3-4 orders of magnitude access time over
non-cached main memory and it won't be appreciably faster than raw
spindle.  Wrong tool for the application.


> 2) RAM disks


Functionally workable, but very expensive.  It is much cheaper per GB to
buy the biggest RAM chips you can find and put them on the motherboard.
 The primary advantage is that you can scale it to very large sizes
while only losing somewhere around an order of magnitude versus main
core if done well.


> 3) Internal RAM (using 64 bit architecture?)


The best performing, and relatively cheap too.  You can slap 32 GB of
RAM in an off-the-shelf Opteron system for not much money.  The biggest
problem is finding motherboards with loads of memory slots and the fact
that there is a hard upper bound on how much memory a given system will
support.


> 4) other


Nothing I can think of that will work with Windows.  There are other
performant and cost-effective options for Linux/Unix systems.


A compromise might be to max out system RAM within reason (e.g. using
2GB DIMMs), and then using RAM disks on a fast HBA to get the rest of
your capacity.  All of this will require a 64-bit OS to be efficient.


j. andrew rogers




[agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
What are the best options for large amounts of fast memory for Windows-based
systems?

I'm looking for price & performance (access time) for:

1) Cached RAID
2) RAM disks
3) Internal RAM (using 64 bit architecture?)
4) other

Thanks for any info.

Peter



RE: [agi] Experiential interactive learning and Novamente

2004-08-01 Thread Peter Voss
Ben, have you found/ settled on a virtual world system for your AGI-SIM?

Peter



[agi] a2i2 news update (& seeking AI Psychologists)

2004-07-08 Thread Peter Voss



A news update: http://adaptiveai.com/news/index.htm

Please also see our ad for AI Psychologists: http://adaptiveai.com/company/opportunities.htm

 
Towards Increased Intelligence!






RE: [agi] AGI research consortium

2004-06-17 Thread Peter Voss
YKY>we can form a research consortium to better capture market value and
so everyone will get a slice of the pie. This will also facilitate
communication and external knowledge sharing between AGI groups (such as
sharing a virtual sensory environment testbed). ..

It could make sense to share virtual & test environments, and test setups -
however, they need to be (made) compatible. That may not be all that easy;
e.g. we use .net/C#.



YKY>...Maybe we can start a discussion on how to divide future AGI revenues
among different groups.

That's potentially 'easy' for any for-profits: just pay for services & tools
in shares. For example we would pay you in shares for anything of yours we
used -- and vice versa. If we can't agree on value, then we don't deal. The
main problem I see with this is that people tend to overvalue their own
work/ shares.

Peter




RE: [agi] Dogs can learn language...

2004-06-10 Thread Peter Voss
Quite a bit less than rocket science: negative reinforcement on concept
labeling will do the trick (overlaid with a 'fetch toy' goal). Behaviorism
gets you quite far. Also, you may be surprised how long it takes to train
these animals: Alex took more than 10 years!
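For illustration only, that reinforcement idea can be sketched as a toy update rule in Python (all names and numbers here are invented; this is not a2i2's or anyone's actual code):

```python
def train_label(strengths, label, obj, reward, lr=0.5):
    """Behaviorist-style update: nudge a label->object association
    toward the reward signal (positive = praise, negative = correction)."""
    strengths.setdefault((label, obj), 0.0)
    strengths[(label, obj)] += lr * (reward - strengths[(label, obj)])

def respond(strengths, label, objects):
    # 'Fetch toy' goal overlay: pick the object with the strongest association.
    return max(objects, key=lambda o: strengths.get((label, o), 0.0))

s = {}
for _ in range(5):
    train_label(s, "ball", "red_ball", reward=1.0)   # reinforced
    train_label(s, "ball", "slipper", reward=-1.0)   # corrected
print(respond(s, "ball", ["red_ball", "slipper"]))   # prints red_ball
```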


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Ben Goertzel
Sent: Thursday, June 10, 2004 11:20 AM
To: [EMAIL PROTECTED]
Subject: RE: [agi] Dogs can learn language...


Peter,

I agree that simply learning mappings between names and objects is easy,
and doesn't even require a system as sophisticated as yours is.

The slightly more interesting example of reasoning mentioned in this
article was:

When there were N toys in a room, and the dog knew the names of N-1 of
them, and was then asked to fetch "blah", where "blah" was an unfamiliar
name -- the dog figured out to fetch the toy whose name it didn't know,
figuring that must be a "blah."

This isn't rocket science, but it's reasonably sophisticated speculative
inference [which could lead to error sometimes, in cases of objects with
multiple names, obviously]

-- Ben
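A minimal sketch of that exclusion inference ("fast mapping"), in hypothetical Python with invented names; it also shows where the inference can err when objects carry multiple names:

```python
def fetch_by_exclusion(toys, vocabulary, name):
    """Return the toy for `name`; for an unfamiliar name, infer by
    exclusion: it must be the one toy whose name is not yet known."""
    if name in vocabulary:
        return vocabulary[name]
    unknown = [t for t in toys if t not in vocabulary.values()]
    if len(unknown) == 1:
        vocabulary[name] = unknown[0]  # fast-mapped; fallible if toys
        return unknown[0]              # can have multiple names
    return None  # zero or several unfamiliar toys: no safe inference

vocab = {"ball": "red_ball", "rope": "tug_rope"}
print(fetch_by_exclusion(["red_ball", "tug_rope", "squeaky"], vocab, "blah"))
# prints squeaky, and vocab now maps "blah" to it
```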

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Peter Voss
> Sent: Thursday, June 10, 2004 3:08 PM
> To: [EMAIL PROTECTED]
> Subject: RE: [agi] Dogs can learn language...
>
>
> We are not going for 'dog-level intelligence' per se --
> rather, roughly the best cognitive abilities of various animals &
> human infants.
>
> Actually our current 'phase3' specs already include Alex's
> (The Parrot) abilities. Also, I don't see any particular
> difficulty with our current design/ system learning hundreds
> of different concept or goal references.
>
> Peter
>
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Ben Goertzel
> Sent: Thursday, June 10, 2004 10:49 AM
> To: [EMAIL PROTECTED]
> Subject: [agi] Dogs can learn language...
>
>
> >From the New York Times today ... This is perhaps pertinent to Peter
> Voss's notion of "dog-level intelligence" ;-)
>
> ben
>
>



RE: [agi] Dogs can learn language...

2004-06-10 Thread Peter Voss
We are not going for 'dog-level intelligence' per se -- rather, roughly the best
cognitive abilities of various animals & human infants.

Actually our current 'phase3' specs already include Alex's (The Parrot)
abilities. Also, I don't see any particular difficulty with our current
design/ system learning hundreds of different concept or goal references.

Peter


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Ben Goertzel
Sent: Thursday, June 10, 2004 10:49 AM
To: [EMAIL PROTECTED]
Subject: [agi] Dogs can learn language...


>From the New York Times today ... This is perhaps pertinent to Peter
Voss's notion of "dog-level intelligence" ;-)

ben




RE: [agi] Robot Brain Project

2004-06-07 Thread Peter Voss
Thanks, Ben - a real find.

May be able to use parts of their algorithms.

Peter



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Ben Goertzel
Sent: Sunday, June 06, 2004 7:46 PM
To: [EMAIL PROTECTED]
Subject: [agi] Robot Brain Project


Check it out -- interesting project!

http://ed-02.ams.eng.osaka-u.ac.jp/~kfm/




[agi] a2i2 news update (seeking another 'AI Psychologist' to join team)

2004-03-15 Thread Peter Voss



A brief news update: http://adaptiveai.com/news/index.htm






[agi] Down Under

2004-02-06 Thread Peter Voss
I'll be visiting Sydney, Brisbane, Perth and Auckland later this month.

If anyone Down Under wants to meet me to chat about the Singularity, AGI,
etc., drop me a line:  mailto:[EMAIL PROTECTED]

Peter






[agi] Umnick

2004-02-02 Thread Peter Voss
Anyone here have any real information on
http://www.umnick.com/Eng/DigitalBrain.asp?DigitalBrain_001.asp ?



[agi] a2i2 Project Review/Update

2004-01-06 Thread Peter Voss
Here is a review and status report on our project:
http://adaptiveai.com/news/index.htm

We are again actively looking for (at least) two additional LA-based team
members. Contact me for details.  mailto:[EMAIL PROTECTED]

Towards Increased Intelligence!

Peter



[agi] RE: dog-level AI

2003-11-12 Thread Peter Voss
Hi Ben,

Yes, it was good meeting with you guys - pity we can't do this more
often.

I entirely agree with your concerns!

I think that by emphasizing how impoverished sense acuity & dexterity are in
our proof-of-concept prototype, I must have mistakenly given the impression
that interactive perception-action learning is not high on our list of
priorities. Quite the contrary, it is the main focus of our current work,
and central to our model. It is just that we believe that it is
counterproductive to try to achieve anywhere near real animal competitive
ability at this stage.

Hope this clarifies.

Best,

Peter



-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 07, 2004 5:43 AM
To: [EMAIL PROTECTED]
Subject: dog-level AI


hi Peter,

It was great to talk to you in DC -- and, I look forward to having a deeper
chat on AI/experiential learning stuff next time around ;-)

We talked about your project and approach afterwards, and I thought I'd
email you with a more crystallized version of the complaint that was
bothering several of us after our chat.

Basically it's as follows.  You say one can achieve human-level intelligence
by taking a dog-level-intelligent system and applying its abilities to
self-analysis  rather than the external world.  But we're not at all sure
this principle applies to a system whose dog-level intelligence doesn't
include subsystems created for robust perception and action oriented
intelligence.  Maybe it's in reasonably-large-part the perception and action
processing stuff (as integrated with dog-level cognition) that enables the
dog-level mind to start the hard task of recognizing patterns in its own
dynamics.

-- Ben




RE: [agi] Dog-Level Intelligence

2003-03-25 Thread Peter Voss
Hi Ben, I think there are some important points pertaining to the early
achievement of AGI here -


Ben> My feeling on dog-level intelligence is that the *cognition* aspects of
dog-level intelligence are really easy, but the perception and action
components are significantly difficult and subtle.

1.) I thought it was well established by now that perception & action
are *integral* aspects of human and/or dog-level intelligence (as
appropriate to AGI). You can dramatically reduce the 'resolution'/ capacity
(especially for proof-of-concept prototypes), but AGI without them seems
misguided.

2.) If 'dog-level intelligence' is so simple, why has no-one come near to
achieving it? I see it as a crucial sub-set of higher-level intelligence
(and thus our shared AGI ambitions).


Ben> In other words, once a dog's brain has produced abstract patterns not
tied to particular environmental stimuli, the stuff it does with these
patterns is probably not all that fancy.  But the dog's brain is really good
at recognizing and enacting complex patterns, and doing this recognizing &
enacting in a coordinated way.

Producing abstract patterns from stimuli and being 'really good at
recognizing and enacting complex patterns' is a core AGI requirement - the
basis for all higher-level ability.


Ben> Peter Voss's (www.adaptiveai.com) approach to AI aims to emulate
biological evolution on Earth, in the sense that it wants to start with a
dog-level brain (very roughly speaking) and then incrementally build more
cognition on top of this.  This is a reasonable approach, to be sure.

I would not characterize our approach as 'emulating biological evolution'. I
believe that roughly dog-level intelligence is the right level to aim at,
because it includes much of the fundamental cognition needed for AGI while
eliminating the 'distractions' of language, abstract thinking, and formal
logic. (I see these as inappropriate problems to focus on at this stage -
especially if they are *not* perception based. The cart before the horse - for
numerous reasons.)


Ben> But if I had to make a guess, I'd say this approach should probably
begin with robotics, with real sensors and actuators and a system embodied
in a real physical environment.  I am skeptical that simplistic simulated
worlds
provide enough richness to support development of robust dog-level
intelligence... as perception and action oriented as dog intelligence is...

I don't see any problem with using virtual environments for testing &
proving basic abilities - one can get an enormous amount of complexity out
of them these days. But in any case, our framework (and actual testing)
seamlessly integrates virtual and real-world perception/ action.

Peter



[agi] a2i2 project: Once again, looking for additional team members, & other news

2003-03-05 Thread Peter Voss




Here is our latest news bulletin: http://www.adaptiveai.com/news/index.htm

The project is going well, and once again, we are looking for additional team members.

Towards Increased Intelligence!

Peter

PS. To receive a2i2 news automatically (less than 10 posts per year) you can subscribe to http://groups.yahoo.com/group/AdaptiveAI


[agi] Building a safe AI

2003-02-20 Thread Peter Voss


http://www.optimal.org/peter/siai_guidelines.htm

Peter




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
Behalf Of Ben Goertzel

I would recommend Eliezer's excellent writings on this topic if you don't
know them, chiefly www.singinst.org/CFAI.html .  Also, I have a brief
informal essay on the topic, www.goertzel.org/dynapsyc/2002/AIMorality.htm ,
although my thoughts on the topic have progressed a fair bit since I wrote
that.  Note that I don't fully agree with Eliezer on this stuff, but I do
think he's thought about it more thoroughly than anyone else (including me).

It's a matter of creating an initial condition so that the trajectory of the
evolving AI system (with a potentially evolving goal system) will have a
very high probability of staying in a favorable region of state space ;-)




RE: [agi] Reinforcement learning

2003-02-13 Thread Peter Voss



Thanks, Ben. Looks really interesting. Hadn't seen it before.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel

Hi all,

As a digression from the recent threads on the Friendliness or otherwise of certain uncomputable, unimplementable AI systems, I thought I'd post something on some fascinating practical AI algorithms. These are narrow-AI at present, but they definitely have some AGI relevance.

Moshe Looks recently pointed out some very exciting work to me, by a guy named Eric Baum. ... Put simply, this guy seems to have actually made reinforcement learning work.


[agi] Seeking additional team members for AGI project

2002-12-12 Thread Peter Voss



Adaptive AI Inc (a2i2) is looking for additional team members for its AGI (Artificial General Intelligence) project.

Our development has progressed to a point where we require one or two full-time (Los Angeles based) people to help design, implement and analyze general intelligence training/ testing tasks for our AGI system.

Note that compensation is primarily in the form of equity and satisfaction, with only a small cash component.

Please see http://adaptiveai.com/news/index.htm for details.

Towards Increased Intelligence!

Peter Voss


RE: [agi] Grounding

2002-12-09 Thread Peter Voss
True. The more fundamental point is that symbols representing entities and
concepts need to be grounded with (scalar) attributes of some sort.

How this is *implemented* is a practical matter. One important consideration
for AGI is that data is easily retrievable by vector distance (similarity)
and that new patterns can be learned (unlearned) incrementally.
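As a toy illustration of those two properties (distance-based retrieval plus incremental learn/unlearn), in Python with invented names; this is not a2i2's actual implementation:

```python
import math

class GroundedMemory:
    """Symbols grounded as attribute vectors; nearest-neighbour
    retrieval plus incremental learn/unlearn (sketch only)."""
    def __init__(self):
        self.patterns = {}  # symbol -> attribute vector

    def learn(self, symbol, vector):
        self.patterns[symbol] = list(vector)

    def unlearn(self, symbol):
        self.patterns.pop(symbol, None)

    def nearest(self, query):
        # Euclidean distance over the grounded scalar attributes.
        def dist(v):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(query, v)))
        return min(self.patterns, key=lambda s: dist(self.patterns[s]))

m = GroundedMemory()
m.learn("circle", [1.0, 0.0])   # e.g. roundness, angularity
m.learn("square", [0.0, 1.0])
print(m.nearest([0.9, 0.2]))    # prints circle
```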

Peter

http://adaptiveai.com/



-Original Message- Behalf Of Ben Goertzel

Well, the fact that clustering requires vectors for A2I2, is a property of
your particular AI algorithms...

Our Novamente clustering MindAgent is based on the Bioclust clustering
algorithm, which does not act on vectors:

...

Translating textual experience directly into weighted graphs is often more
natural than translating it into vectors.  A lot of NLP frameworks use graph
representations




[agi] Grounding

2002-12-09 Thread Peter Voss
I think it's more than a matter of 'pragmatics': In order to do unsupervised
learning (clustering) of grounded entities and concepts, they *must* be
derived from vector-encodable input data. Obviously, not all inputs need to
represent continuous attributes/ features, but foundational ones do.
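The dependence on vector encoding shows up in any distance-based clusterer; here is a toy k-means pass (illustrative Python only, not any project's actual algorithm):

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Toy k-means: clustering works here only because every input is
    a vector, so similarity between items is a well-defined distance."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # recompute each centre as the member mean
                centers[i] = [sum(xs) / len(members) for xs in zip(*members)]
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]
print(sorted(kmeans(pts, 2)))  # one centre near (0, 0), one near (5, 5)
```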

Peter

http://adaptiveai.com/




-Original Message-
Behalf Of Ben Goertzel

Kevin,

I'm sure you're right in a theoretical sense, but in practice, I have a
strong feeling it will be a lot easier to teach an AGI stuff if one has a
nonlinguistic world to communicate to it about.

Rather than just communicating in math and English, I think teaching will be
much easier if the system can at least perceive 2D pixel patterns.  It'll be
a lot nicer to be able to tell it "There's a circle" when there's a circle
on the screen [that you and it both see] -- to tell it "the circle is moving
fast", "You stopped the circle", etc. etc.  Then to have it see a whole lot
of circles so that, in an unsupervised way, it gets used to perceiving
them

This is not a matter of principle, it's a matter of pragmatics  I think
that a perceptual-motor domain in which a variety of cognitively simple
patterns are simply expressed, will make world-grounded early language
learning much easier...

-- Ben




[agi] Tony's 2d World

2002-12-09 Thread Peter Voss
Hey Tony - are you on this list? How are you doing? Can we have a look at
your world (or spec)? Perhaps we can co-ordinate our efforts somehow.


Peter

http://adaptiveai.com/



-Original Message-
Behalf Of Ben Goertzel


... [Although, in fact, Tony Lofthouse is coding up a simple 2D
training-world right now, just to test
some of the current Novamente cognitive functions in isolation, even though
the system is not yet ready for real experiential learning]




[agi] Early AGI apps

2002-11-09 Thread Peter Voss
[I started this thread on the SL4 list - decided that it belongs here. My
original question & Ben's reply are at the bottom.]

I agree with Ben about the difficulty of developing products based on
(early) AGI: In most cases you will have to do all of the engineering (and
research and marketing) of a conventional application *plus* developing &
integrating your AGI engine.

Once we develop practical applications, I hope to mitigate this difficulty
by:
a) applying the AGI engine as a layer on top of existing applications;
b) having some very powerful adaptive (real-time, incremental) learning
algorithms to give an edge;
c) selecting aspects of applications that do not require 100% accuracy,
reliability, predictability.


The various data mining (text, genomics/ proteomics, data etc.) applications
suggested are far from what our AGI project is all about - and from what I
think is important for general intelligence. I know that Ben disagrees (to
some extent).

I believe that crucial AGI abilities center around online, interactive
selection-perception-action learning (of static and temporal knowledge and
skills). It needs to combine unsupervised, self-supervised and supervised
methods. Large statistical number crunching, language, and symbolic logic
are distractions in my opinion - especially if data is preselected,
essentially static, and batch processed.
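[Editor's note: as a minimal sketch of the kind of online, incremental selection-perception-action loop described above. The agent, linear policy, and reward-driven update here are all illustrative assumptions, not the a2i2 design; the point is only that learning happens one percept at a time, with no batch processing.]

```python
class OnlineAgent:
    """Toy online learner: perceive a feature vector, select an action
    from a linear policy, update weights incrementally from feedback.
    Hypothetical names; for illustration only."""

    def __init__(self, n_features, n_actions, lr=0.1):
        self.lr = lr
        # one weight vector per action, updated incrementally (no batch step)
        self.weights = [[0.0] * n_features for _ in range(n_actions)]

    def act(self, percept):
        # selection: pick the action with the highest linear score
        scores = [sum(w * x for w, x in zip(ws, percept))
                  for ws in self.weights]
        return max(range(len(scores)), key=scores.__getitem__)

    def learn(self, percept, action, reward):
        # supervised-style incremental update from a scalar feedback signal
        for i, x in enumerate(percept):
            self.weights[action][i] += self.lr * reward * x

# One step of the loop: perceive -> act -> receive feedback -> update.
agent = OnlineAgent(n_features=3, n_actions=2)
percept = [1.0, 0.0, 1.0]
a = agent.act(percept)
agent.learn(percept, a, reward=1.0)
```

The loop runs continuously as data arrives, which is the contrast being drawn with preselected, batch-processed datasets.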

Our early applications are likely to be at the level of animal (say, dog)
cognition, but initially of course at extremely low data/ resolution rates
(for both perception and action/ dexterity).

Notwithstanding this animal reference, I do think that adaptively learning a
PC user's habits (by monitoring mouse, screen, etc.), plus being taught
specific 'tricks' could be an early AGI application.
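[Editor's note: a hedged sketch of what the habit-learning idea above might look like at its simplest. A first-order frequency model over a stream of user events, updated one event at a time, stands in for the monitoring-based learner the email imagines; the class and event names are hypothetical.]

```python
from collections import defaultdict

class HabitPredictor:
    """Illustrative sketch: learn a user's habits from an event stream
    (e.g. application switches) with first-order transition counts,
    updated incrementally as each event is observed."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, event):
        # incremental update: count the transition prev -> event
        if self.prev is not None:
            self.counts[self.prev][event] += 1
        self.prev = event

    def predict(self):
        # predict the most frequent follow-up to the last observed event
        followers = self.counts.get(self.prev)
        if not followers:
            return None
        return max(followers, key=followers.get)

p = HabitPredictor()
for e in ["email", "browser", "email", "browser", "email", "editor"]:
    p.observe(e)
p.observe("email")
print(p.predict())  # prints "browser"
```

Real monitoring of mouse and screen events would of course need far richer state, but the incremental-update shape would be the same.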

The (difficult) trick is to choose applications that leverage inherent
strengths of artificial systems without shifting focus from core AGI
requirements.

The other problem, that I know that Ben is well aware of, is that for *any
given* application it is almost always easier to hard-code or custom
engineer a solution, rather than using general intelligence abilities and
letting the system learn.

Peter



- Ben's post from SL4 

> I'm interested in any and all potential early applications for AGI - both
to evaluate the performance of our a2i2 system, and for possible
implementation.
>
> Any ideas?
>
> Peter

First, I have a general observation about early AGI applications.  Then I'll
try to answer your question...

An advanced AGI will be able to observe a new application area, and figure
out how to apply itself to that area, and connect itself to other
appropriate software programs, etc.

Until we're at that stage, setting up a concrete application of a proto-AGI
software system is a *lot of work*.

There are two factors here:

1) Setting up any software application is a lot of work.

2) Creating an AGI-based application can actually be more work than just
setting up a narrow-AI-based software application, even without counting the
work involved in creating the AGI system itself.

The reason for 2 is as follows.  AGI systems are created for autonomy and
flexibility and nondeterminism, whereas in a software application context,
one often needs different virtues instead: repeatable efficient behavior in
particular contexts.  There is of course no intrinsic reason a software
system can't have both AGI virtues and software application virtues... but
in practice, early-stage AGI systems often aren't created with software
application virtues in mind.  We specifically architected Novamente so that
it could support both the autonomy & flexibility & nondeterminism required
for AGI, AND also so that one could create highly constrained & efficient
Novamente-based software apps.  But not all AGIs will be this way; Webmind
AI Engine, for instance, did not have this property at all, and so building
practical apps on top of it was like pulling teeth (and we wound up
primarily pulling objects out of it to use in narrow-AI apps, rather than
actually using Webmind AI Engine in practical apps).

So, my experience is, it's possible to make a simple prototype application
of an AGI system in a particular area by having a strong programmer work on
it for 6 months or so.  But to actually make an AGI-based product that can
be sold or used in some domain is a huge amount of work, even more than
building an analogous product without AGI involved.

Another bit of general wisdom, which you surely know already: the most
important thing is to pick an app area where you have a lot of domain
expertise.

Now, having gotten that blather out of the way, what are good app areas for
AGI systems?

We're now working in bioinformatics.  There are loads of subfields here...
we're working mostly on genomics & proteomics data analysis.

At Webmind Inc., we worked in computational finance and information
retrieval (document categorization, document retrieval).  Big uses for AGI
here...