RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Peter Voss
Not a single one of our current investors (a dozen) or potential investors
has used AGI lists to evaluate our project (or the competition)

 

Peter Voss

a2i2

 

From: Terren Suydam [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 15, 2008 1:25 PM
To: agi@v2.listbox.com
Subject: Re: [agi] META: A possible re-focusing of this list

 



This is a publicly accessible forum with searchable archives... you don't
necessarily have to be subscribed and inundated to find those nuggets. I
don't know any funding decision makers myself, but if I were in control of a
budget I'd be using every resource at my disposal to clarify my decision. If
I were considering Novamente for example I'd be looking for exactly the kind
of exchanges you and Richard Loosemore (for example) have had on the list,
to gain a better understanding of possible criticism, and because others may
be able to articulate such criticism far better than me.  Obviously the same
goes for anyone else on the list who would look for funding... I'd want to
see you defend your ideas, especially in the absence of peer-reviewed
journals (something the JAGI hopes to remedy obv).

Terren

--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:

From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 3:37 PM


Terren,

I know a good number of VC's and government and private funding decision
makers... and believe me, **none** of them has remotely enough extra time to
wade through the amount of text that flows on this list, to find the nuggets
of real intellectual interest!!!

-- Ben G

On Wed, Oct 15, 2008 at 12:07 PM, Terren Suydam [EMAIL PROTECTED] wrote:



One other important point... if I were a potential venture capitalist or
some other sort of funding decision-maker, I would be on this list and
watching the debate. I'd be looking for intelligent defense of (hopefully)
intelligent criticism to increase my confidence about the decision to fund.
This kind of forum also allows you to sort of advertise your approach to
those who are new to the game, particularly young folks who might one day be
valuable contributors, although I suppose that's possible in the more
tightly-focused forum as well.

--- On Wed, 10/15/08, Terren Suydam [EMAIL PROTECTED] wrote:

From: Terren Suydam [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list


To: agi@v2.listbox.com

Date: Wednesday, October 15, 2008, 11:29 AM

 



Hi Ben,

I think that the current focus has its pros and cons and the more narrowed
focus you suggest would have *its* pros and cons. As you said, the con of
the current focus is the boring repetition of various anti positions. But
the pro of allowing that stuff is for those of us who use the conflict among
competing viewpoints to clarify our own positions and gain insight. Since
you seem to be fairly clear about your own viewpoint, it is for you a
situation of diminishing returns (although I will point out that a recent
blog post of yours on the subject of play was inspired, I think, by a point
made by Mike Tintner, who is probably the most obvious target of your
frustration).

For myself, I have found tremendous value here in the debate (which probably
says a lot about the crudeness of my philosophy). I have had many new
insights and discovered some false assumptions. If you narrowed the focus, I
would probably leave (I am not offering that as a reason not to do it! :-)
I would be disappointed, but I would understand if that's the decision you
made.

Finally, although there hasn't been much novelty in the debate (from your
perspective, anyway), there is always the possibility that there will be.
This seems to be the only public forum for AGI discussion out there (are
there others, anyone?), so presumably there's a good chance it would show up
here, and that is good for you and others actively involved in AGI research.

Best,
Terren


--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:

From: Ben Goertzel [EMAIL PROTECTED]
Subject: [agi] META: A possible re-focusing of this list
To: agi@v2.listbox.com
Date: Wednesday, October 15, 2008, 11:01 AM


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current
computers, according to designs that can feasibly be implemented by
moderately-sized groups of people

2)
Discussions about whether the above is even possible -- or whether it is
impossible because of weird physics, or poorly-defined special
characteristics of human creativity, or the so-called complex systems
problem, or because AGI intrinsically requires billions of people and
quadrillions of dollars, or whatever

Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ...
certainly

RE: **JUNK** Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Peter Voss
no

 

From: Joseph Henry [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 15, 2008 1:56 PM
To: agi@v2.listbox.com
Subject: **JUNK** Re: [agi] META: A possible re-focusing of this list

 

Peter, do you think they would be less overwhelmed if they were given the
option of looking at the same content through the use of a forum?

I think it would be far easier to wade through...

- Joseph



RE: [agi] Question, career related

2008-08-21 Thread Peter Voss
Perhaps: http://adaptiveai.com/company/opportunities.htm 

 

From: Valentina Poletti [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 21, 2008 10:23 AM
To: agi@v2.listbox.com
Subject: [agi] Question, career related

 

Dear AGIers,

I am looking for a research opportunity in AGI or related neurophysiology. I
won prizes in maths, physics, computer science and general science when I
was younger and have a keen interest in those fields. I'm a pretty good
programmer, and have taught myself neurophysiology and some cognitive
science. I have an inclination towards math and logic. I was wondering if
anyone knows of any such open positions, or could give me advice or
references to people I might speak with.

Thanks.



RE: [agi] More Info Please

2008-05-23 Thread Peter Voss
Thanks, Ben.

The technical details of our design and business plan details are indeed
confidential. All I can really say publicly is that we are confident that we
have a pretty direct path to high-level AGI from where we are, and that we
have an extremely viable business plan to make this happen. Initial
commercialization next year will utilize the current 'low-grade' version of
our AGI engine that will be able to perform certain tasks that are quite
dumb (in human terms) but still commercially valuable. Our AGI 'brain' can
potentially be utilized in many different kinds of systems/ applications.

More details will probably become available late this year.

Peter

PS. I also have *some* doubts about the ultimate capabilities of our AGI
engine, but probably no greater than yours about NM  :)


-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 23, 2008 2:56 PM
To: agi@v2.listbox.com
Subject: Re: [agi] More Info Please

Peter has some technical info on his overall (adaptive neural net) based
approach to AI, on his company website, which is based on a paper he wrote
in the AGI volume Cassio and I edited for Springer (written 2002, published
2006).

However, he has kept his specific commercial product direction tightly under
wraps.

I believe Peter's ideas are interesting but I have my doubts that his
approach is really AGI-capable.  However, I don't feel comfortable going
into great detail on my reasons, because Peter seems to value secrecy
regarding his approach... I've had a mild amount of insider info regarding
the approach (e.g. due to visiting his site a few years ago, etc.) and don't
want to blab stuff on this list that he'd want me to keep secret...

Ben


On Fri, May 23, 2008 at 5:40 PM, Mike Tintner [EMAIL PROTECTED]
wrote:
 ... on this:

 http://www.adaptiveai.com/news/index.htm

   Towards Commercialization

 It's been a while. We've been busy. A good kind of busy.

 At the end of March we completed an important milestone: a demo system 
 consolidating our prior 10 months' work. This was followed by my 
 annual pilgrimage to our investors in Australia. The upshot of all 
 this is that we now have some additional seed funding to launch our 
 commercialization phase late this year.

 On the technical side we still have a lot of hard work ahead of us.
 Fortunately we have a very strong and highly motivated team, so that 
 over the next 6 months we expect to make as much additional progress 
 as we have over the past 12. Our next technical milestone is around 
 early October by which time we'll want our 'proto AGI' to be pretty 
 much ready to start earning a living.

 By the end of 2008 we should be ready to actively pursue 
 commercialization in addition to our ongoing R&D efforts. At that time 
 we'll be looking for a high-powered CEO to head up our business 
 division which we expect to grow to many hundreds of employees over a few
years.

 Early in 2009 we plan to raise capital for this commercial venture, 
 and if things go according to plan we'll have a team of around 50 by 
 the middle of the year.

 Well, exciting future plans, but now back to work.

 Peter 







--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they will
surely become worms.
-- Henry Miller







[agi] a2i2 is looking for Entry-Level AI Psychologist

2008-02-20 Thread Peter Voss
Adaptive A.I. Inc is looking for Entry-Level AI Psychologist

 

As all of our current staff members (now up to 17) are now quite experienced
and highly productive, we are again looking to fill a (Los Angeles based)
full-time, entry-level position.

 

We want someone smart, and highly motivated to work on AGI. Attitude and
work habits are more important than deep technical skills. 

 

For more details see: http://adaptiveai.com/company/opportunities.htm

At this stage the work will entail quite a bit of system training and testing,
as well as sys admin.

 

Because we are expecting rapid expansion of our project and team over the
next few years (we expect to more than triple our staff over the next year), this
position provides excellent advancement opportunities.

 

Our project has definite near-term commercial objectives, and we offer
competitive compensation in addition to the option of equity participation
in our company.

 

Please pass this on to anyone who might fit the bill.

 

Thanks.

 

Peter

 



[agi] a2i2 looking for another AI Psychologist

2007-11-01 Thread Peter Voss
We are looking for another AI Psychologist to join our team:
http://adaptiveai.com/company/team_oct07.JPG 

 

Details: http://adaptiveai.com/company/opportunities.htm 

 

- Los Angeles based
- Full-time
- Entry-level to experienced: 35 to 70k package incl. some stock options

 


RE: [agi] a2i2 news update

2007-07-26 Thread Peter Voss
Just a quick note: While Rand's writings helped me a lot to clarify/solve a
number of crucial philosophical/moral questions, I certainly don't subscribe
to all of her fictional characters' actions, or to those of many of her
'true blue' followers. In fact, I don't think that Rand herself was a good
Objectivist! Still, I owe her a lot for numerous crucial insights.

 

What Rand meant by 'selfishness' is really rational, principled, long-term
self-interest. In my book this definitely includes having good EQ, and
caring about the welfare of others.

 

Altruism means selflessness. The logical, though unconventional, conclusion
is that it refers to actions taken irrespective of the effects they have on
you. In fact, actions that are detrimental to you are seen as more
desirable. I do think that this is very harmful. 

 

(The seeming paradox of 'psychological altruism', that even altruists are
selfish, has been well explored - e.g. see Nathaniel Branden.)  

 

More in my essay: http://www.optimal.org/peter/rational_ethics.htm 

 

I'll try to address these issues a bit in my upcoming talk:
http://www.singinst.org/summit2007/ 

 

That's about all I have time for now..

 

Back to building brains.

 


From: Robert Wensman [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 26, 2007 4:12 AM
To: agi@v2.listbox.com
Subject: Re: [agi] a2i2 news update

 

What worries me is that the founder of this company subscribes to the
philosophy of Objectivism, and the implications this might have for the
company's possibility at achieving friendly AI. I do not know about the rest
of their team, but some of them use the word "rational" a lot, which could
be a hint. 

 

I am well aware that Ayn Rand, the founder of Objectivism, uses slightly
non-standard meanings for words like "selfishness" and "altruism", but
her main point is that altruism is the source of all evil in the world, and
selfishness ought to be the main virtue of all mankind. Instead of "altruism"
she often also uses the word "selflessness", which better explains her
seemingly odd position. What she essentially means is that all evil of the
world stems from people who give up their values, and their self and
thereby become mindless evildoers that respect others as little as they
respect themselves. While this psychological statement in isolation could be
worth noting, and might help understand some collective madness, especially
from the last century, I still feel her philosophy is dangerous because she
mixes up her very specific concept of selflessness with the commonly
understood concept of altruism, in the sense of valuing the well being and
happiness of others. Is this mix-up accidental or intended? In her novel The
Fountainhead you even get the impression that she doesn't think it is
possible to combine altruism with creativity and originality, as all
altruistic characters of her book are incompetent copycats who just
imitate others. 

 

Her view of the world also seems to completely ignore another category of
potential evil-doers: Selfish people who just do not see any problem with
using whatever means they see fit, including violence, to achieve their
goals. People who just do not see there is any problem in killing or
torturing others. Why does she ignore this group of people, because she does
not think they exist? 

 

My personal opinion is that Objectivism is a case of what could be called
the "werewolf fallacy". For example, I could make a case for the following
philosophy: "Werewolves as described in literature would be bad for
humanity, and if we encounter werewolves, we should try to fight them with
whatever means we see fit!" This statement is in itself completely true and
coherent, and it would be possible to write books on the subject that could
seem to make sense. The only problem is of course that there are no
werewolves, and there are other much more important things to do than to go
around preparing to fight werewolves! Similarly I do not think that all
these selfless people who Ayn Rand describe exist in any large numbers, or
at least they are certainly not the main source of evil in the world. 

 

How Objectivism could feel like home I cannot understand personally. If a
person is less capable of understanding other people, I guess it could make
some sense. I guess social life could be hard for such a person; they would
often hurt other people by mistake, make others annoyed or angry and
frequently bring enemies upon themselves. Ayn Rand gives them a very
comfortable answer, namely that it is OK, even virtuous, to not understand
others as long as you are not physically aggressive. An agenda for peaceful
psychopathy, if you like. So far so good; I don't expect everyone to be
empathetic, and motivating the need for respect rationally by the benefits
of cooperation seems like a reasonable trade-off. But Ayn Rand goes a step
too far when she outright attacks altruism and people who value the
well-being of others! She 

[agi] a2i2 news update

2007-07-25 Thread Peter Voss
Update from a2i2 -- http://adaptiveai.com/news 

 

Peter Voss

 

Towards Increased Intelligence!

 


RE: [agi] about AGI designers

2007-06-06 Thread Peter Voss
'fraid not. Have to look after our investors' interests. (and, like Ben, I'm
not keen for AGI technology to be generally available)

 


From: Kingma, D.P. [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 06, 2007 1:28 PM
To: agi@v2.listbox.com
Subject: Re: [agi] about AGI designers

 

On 6/6/07, Peter Voss [EMAIL PROTECTED] wrote:

...
Our goal is to create full AGI, but our business plan is to commercialize an
intermediate-level AGI engine via some highly lucrative applications. Our
target date to commence commercialization is the end of next year. 

Peter Voss
a2i2


The latest news item on your website says "this time our focus is on a May
2007 milestone, when we expect to showcase our progress to various
investors."


Is there any chance that we (non-investors) will get to see a glimpse of
A2I2 progress?

D.P. Kingma




RE: [agi] GI as substrate for memetic evolution

2007-04-25 Thread Peter Voss
The essential difference between animal and human intelligence lies in our
ability to form and deal with *abstract* concepts. This enables
self-awareness, volition, etc. Some chains of abstractions are memes
(compact ideas that transmit well in a given environment). 

Lower-level concepts (perceptually grounded) form the basis of all adaptive
intelligence -- for recognition, generalization, differentiation, learning,
surprise, etc.

Our AGI approach is based on these insights (among others).
http://adaptiveai.com/research/index.htm 

Peter Voss


-Original Message-
From: DEREK ZAHN [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 25, 2007 7:25 AM
To: agi@v2.listbox.com
Subject: [agi] GI as substrate for memetic evolution

Nothing particularly original here, but I think
it's kind of interesting.

Suppose that at some point, basically by accident,
the brains of our ancestors became capable of supporting
the evolution of memes.

Biological evolution started with a LOOONG period of
low complexity creatures, during which time the basic
pieces were discovered, the fruitful building blocks
on which diversity would flourish.  Similarly, our
ancestors were stupid for a long time because they only
hosted simple memes, churning away in their brains.
Eventually the core building blocks for complex memes
were discovered, leading to faster and faster progress
as the size of the substrate increased through population,
the ability for fitter memes to spread increased through
written language and internets, and the memes themselves
are more complex and diverse.

Looking at it this way, we can define the general
intelligence of an individual as the extent to which it
can function as part of this substrate for memetic
evolution -- the ability to host, communicate, and
perform genetic operations on memes as they exist
in human culture.

This definition also suffers from a lack of intermediate
progress points, but it suggests that studying memes
may be an interesting alternative (or precursor) to
studying reasoning, perception, memory, architectures,
etc.

Are any prospective AGI builders working from a viewpoint
similar to this?
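
As a rough, purely illustrative sketch of the "genetic operations on memes"
idea above (in C#; the string representation, the target phrase standing in
for "transmits well", and all parameters are invented for the example):

// Toy sketch only: memes as fixed-length strings, copied between hosts with
// mutation and recombination, and a selection bias toward memes that
// "transmit well". Everything here is invented for illustration.
using System;
using System.Collections.Generic;
using System.Linq;

class MemeticToy
{
    static readonly Random Rng = new Random(42);
    const string Symbols = "abcdefghijklmnopqrstuvwxyz ";
    const string Target = "use the tool";   // stands in for "transmits well"

    // Fitness: how many characters match the well-transmitting meme.
    static int Fitness(string meme) =>
        meme.Zip(Target, (c, t) => c == t ? 1 : 0).Sum();

    static string RandomMeme() =>
        new string(Enumerable.Range(0, Target.Length)
            .Select(_ => Symbols[Rng.Next(Symbols.Length)]).ToArray());

    static string Mutate(string meme)
    {
        var chars = meme.ToCharArray();
        chars[Rng.Next(chars.Length)] = Symbols[Rng.Next(Symbols.Length)];
        return new string(chars);
    }

    static string Recombine(string a, string b)
    {
        int cut = Rng.Next(a.Length);               // single-point crossover
        return a.Substring(0, cut) + b.Substring(cut);
    }

    static string Tournament(List<string> pool)
    {
        string a = pool[Rng.Next(pool.Count)], b = pool[Rng.Next(pool.Count)];
        return Fitness(a) >= Fitness(b) ? a : b;    // fitter meme gets copied
    }

    static void Main()
    {
        var pool = Enumerable.Range(0, 100).Select(_ => RandomMeme()).ToList();

        for (int generation = 0; generation < 300; generation++)
        {
            var next = new List<string>();
            for (int i = 0; i < pool.Count; i++)
            {
                string child = Recombine(Tournament(pool), Tournament(pool));
                if (Rng.NextDouble() < 0.2) child = Mutate(child);
                next.Add(child);
            }
            pool = next;
        }

        Console.WriteLine(pool.OrderByDescending(Fitness).First());
    }
}

Nothing in the sketch is specific to AGI; the point is only that hosting,
copying and recombining memes is mechanically an evolutionary search, which
is what the definition above amounts to.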




[agi] Cyc a failure?

2007-04-17 Thread Peter Voss
In these forums I often hear that Cyc is a failure. Somebody recently
mentioned it again.

 

My question: What specifically is Cyc unable to do? What are the tests, and
how did it fail?

 

I'd be interested in specifics.

 

Peter

 

PS. My own view is that Cyc does not have what it takes to be(come) an AGI -
I've written about that elsewhere (mainly theoretical reasons).

 

PPS. More generally, I plan to ask the same questions about other 'AGI
failures'.


RE: [agi] My proposal for an AGI agenda

2007-03-25 Thread Peter Voss
Chuck is exactly right.

A successful AGI project (or anything else large & difficult) depends on
someone's vision and leadership: technical (design), motivational
(psychological and goal-defining), and execution (engineering and
management).

Peter

(Chuck, as for the million dollars: join our project...)


-Original Message-
From: Chuck Esterbrook [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 25, 2007 12:37 AM
To: agi@v2.listbox.com
Subject: Re: [agi] My proposal for an AGI agenda

 On 3/24/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
  On 3/25/07, rooftop8000 [EMAIL PROTECTED] wrote:
...
 Simply voting on individual features cannot work because all the features
 of an AGI are inter-related; they have to work together synergistically.

A committee approach to architecture has historically failed
repeatedly, especially where breaking new ground is concerned. Someone
needs to be the leader/visionary. Usually the one that forms the
group...

 I'd make a bronze statue of anyone who can solve this problem!!

Can I get a million dollars instead?  :-)

-Chuck




RE: [agi] My proposal for an AGI agenda

2007-03-20 Thread Peter Voss
Yes, David, some good ideas.

We are well into our AGI prototype using c# and are quite happy with it.
However, fully integrated reflection, DB support, etc. would be nice.

I designed and implemented a very comprehensive language (called One) in the
'80s and used it to code a large commercial application: that was heaven.
But. Getting other people to work on it (bad career move), and trying to
develop various development tools as good or better than commercial ones
proved to be its undoing.

I see a specialized language for AGI as a (huge) distraction. C#/.net gives
you a fighting chance to overcome most serious limitations. 

I must say it *would* be nice to have a z# (ie. C# plus full wishlist).

Peter Voss


-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 20, 2007 9:57 AM
To: agi@v2.listbox.com
Subject: Re: [agi] My proposal for an AGI agenda


  For people who might be interested in influencing some of the 
 features of this system, I would appreciate them looking at my 
 documentation at www.rccconsulting.com/hal.htm  Although my system isn't quite 
 ready for alpha distribution yet, I expect that it will be within a 
 few months.  People that help with the alpha and beta testing will be 
 given consideration on the use of the system in the future even if 
 they don't participate in the AGI development.
  
 When this project goes ahead, I think even Ben (who has a huge 
 intellectual and financial investment in his Novamente project) will 
 be interested in the experiments and results a system like I am 
 proposing will have, even if he never interfaces his program with it.
  


Your programming language looks interesting, and well designed in many 
ways, but I have never been convinced that the inadequacies of current 
programming languages are a significant cause of the slow progress we 
have seen toward AGI.

If you were introducing a radically new programming paradigm for AGI, I 
would be more interested... Not that I think this is necessary to 
achieve AGI, but I would find it more intellectually stimulating ;-)

Regarding languages, I personally am a big fan of both Ruby and 
Haskell.  But, for Novamente we use C++ for reasons of scalability.

-- Ben




RE: [agi] Logical representation

2007-03-12 Thread Peter Voss
Evolutionary approaches are what you use when you run out of engineering
ideas... (and run out of statistical approaches)

The last game in town.

Some of us are making good progress towards AGI via engineering.

Peter

-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 12, 2007 1:43 PM
To: Russell Wallace; agi@v2.listbox.com
Subject: Re: [agi] Logical representation

On Mon, Mar 12, 2007 at 07:47:26PM +, Russell Wallace wrote:

...

The first and biggest step is to get your system to learn how to evolve.
I understand many do not yet see this as a problem at all.

shot at that route, let me know if you want a summary of conclusions
and ideas I got to before I moved away from it.

I don't understand why you moved away from it (it's the only game
in town), but if you have a document of your conclusions to share,
fire away.

-- 
Eugen* Leitl  http://leitl.org



RE: Languages for AGI [WAS Re: [agi] Priors and indefinite probabilities]

2007-02-17 Thread Peter Voss
Dynamic code generation is not a major aspect of our AGI.

To clarify: While I agree that many AI apps require massively parallel
number-crunching, in our AGI approach neither is a major requirement.
'Number crunching' is of course part of any serious AI/AGI implementation,
but we find that (software) design is by far the more important bottleneck.
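
For context, a minimal sketch of what dynamic code generation in .NET looks
like (generic System.Reflection.Emit usage; illustrative only and unrelated
to any particular AGI engine):

// Illustrative only: build a tiny Add(int, int) method as IL at runtime,
// JIT-compile it, and call it through a delegate.
using System;
using System.Reflection.Emit;

class DynamicCodeGenSketch
{
    static void Main()
    {
        DynamicMethod add = new DynamicMethod(
            "Add", typeof(int), new[] { typeof(int), typeof(int) });

        ILGenerator il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push first argument
        il.Emit(OpCodes.Ldarg_1);   // push second argument
        il.Emit(OpCodes.Add);       // add them
        il.Emit(OpCodes.Ret);       // return the result

        var adder = (Func<int, int, int>)add.CreateDelegate(
            typeof(Func<int, int, int>));

        Console.WriteLine(adder(2, 3));   // prints 5
    }
}

Generation at the source level (compiling generated C# text) is the other
route; the IL emission above is the "bytecode level" option Eugen asks about
below.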


-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 
Sent: Saturday, February 17, 2007 8:50 AM
To: agi@v2.listbox.com
Subject: Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]

On Sat, Feb 17, 2007 at 08:46:17AM -0800, Peter Voss wrote:

 We use .net/ c#, and are very happy with our choice. Very productive.

I don't know much about those. Bytecode, JIT at runtime? Might not be
too slow. If you use code generation, do you do it at source or at bytecode
level?
 
 Eugen: (Of course AI is a massively parallel number-crunching
 application...

 Disagree.

That it is massively parallel, or number-crunching? Or neither
massively-parallel,
nor number-crunching?

-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



[agi] hard-coded inductive biases

2007-02-14 Thread Peter Voss
... various comments ...

It's more fundamental than that: the design of your 'senses' - what feature
extraction, sampling and encoding you provide - lays a primary foundation for
induction.

Peter
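
A tiny, invented illustration of that point in C#: the same raw signal pushed
through two different "sense" encodings exposes different features, so
whatever learner sits downstream is biased toward different generalizations
before any induction happens.

// Invented example: one raw signal, two sense encodings, two inductive biases.
using System;
using System.Linq;

class SenseEncodingDemo
{
    static void Main()
    {
        double[] raw = { 0.1, 0.4, 0.5, 0.3, 0.9, 1.2, 1.1 };

        // Sense A: fine-grained numeric samples (biases learning toward
        // magnitude-based rules).
        string senseA = string.Join(", ", raw.Select(v => v.ToString("0.0")));

        // Sense B: only the direction of change (biases learning toward
        // trend-based rules; absolute magnitudes are unlearnable here).
        string senseB = string.Join(" ", raw.Skip(1)
            .Zip(raw, (cur, prev) => cur > prev ? "up" : "down"));

        Console.WriteLine("Sense A: " + senseA);
        Console.WriteLine("Sense B: " + senseB);
    }
}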




RE: [agi] (video)The Future of Cognitive Computing

2007-01-21 Thread Peter Voss
Eugen: IBM is smart. They know what they're doing.

Yeah! What an impressive argument.


Eugen: There are shorter paths, but nobody knows where they are

There is more known about the shorter paths than about the actual functioning
of the human mind/brain.

* All current useful robots are engineered, not reverse-engineered.
* All AI successes so far are engineered solutions, not copies of wetware
(Deep Blue, Darpa Challenge, Google, etc.)
* Planes have been flying for 100 years, yet we haven't even
reverse-engineered a sparrow's fart...


Ben: ... the prophecy that human brain emulation will be the initial path to
AGI could become a self-fulfilling one.

Ben, your comment seems to reflect your frustration at lack of funding
rather than a realistic assessment of the situation. Even if no *dedicated*
AGI engineering project is first to achieve AGI, people in the software/AI
community will stumble on a solution long before reverse engineering
becomes feasible. Don't you agree?

Peter Voss
http://adaptiveai.com/ 



-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 

On Sun, Jan 21, 2007 at 10:03:52AM -0500, Benjamin Goertzel wrote:

 One thing I find interesting is that IBM is focusing their AGI-ish
 efforts so tightly on human-brain-emulation-related approaches.

IBM is smart. They know what they're doing.
 
 Kurzweil, as is well known, has forecast that human brain emulation is
 the most viable path to follow to get to AGI.  I agree that it is a
 viable path, but I don't think it is anywhere near the shortest path.

There are shorter paths, but nobody knows where they are. That's
the key point of it: the world is complicated. Dealing with the
world takes lost of machinery. There's a strange cognitive bias in
people, AIlers specifically, to think that AI is based on some
simple generic method, and they just know what it is. No validation
or further evidence required; it's all obvious. Whomever
you ask, they all know it, but all their answers differ. Historically,
this approach has failed abysmally. Trying to reverse-engineer
a known working system might do less for one's ego, but it's the only
game in town, as far as I can see.

 However, I think it's possible (though not extremely likely) that if
 all the pundits and funding sources (like IBM) continue to harp on the
 brain-emulation approach to the exclusion of other approaches, the
 prophecy that human brain emulation will be the initial path to AGI
 could become a self-fulfilling one ;-p ...

In this race, there are no second places.

-- 
Eugen* Leitl  http://leitl.org
__



[agi] AGI meeting in Austin on Sunday Dec 10th?

2006-12-04 Thread Peter Voss
I'll be in Austin next Sunday. 

If anyone there would like to meet to talk about AGI (and other things
extropian), please contact me privately at [EMAIL PROTECTED] 

Peter Voss



RE: [agi] new paper: What Do You Mean by AI?

2006-11-17 Thread Peter Voss
Hi Pei,

Just finished reading your Rigid Flexibility book; it's a nice summary of
your approach.

I can recommend it to anyone interested in AGI: If you agree with Pei's
general approach it provides quite a bit of detail; if you disagree, it
provides a coherent reference point.

Peter



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 17, 2006 8:52 AM
To: agi@v2.listbox.com
Subject: [agi] new paper: What Do You Mean by AI?

Hi,

A new paper of mine is put on-line for comment. English corrections
are also welcome. You can either post to this mailing list or send me
private emails.

Thanks in advance.

Pei

---

TITLE: What Do You Mean by AI?

ABSTRACT: Many problems in AI study can be traced back to the
confusion of different research goals.  In this paper, five typical
ways to define AI are clarified, analyzed, and compared. It is argued
that though they are all legitimate research goals, they lead the
research to very different directions. Furthermore, most of them have
trouble to give AI a proper identity.

URL: http://nars.wang.googlepages.com/wang.AI_Definitions.pdf



RE: [agi] new paper: What Do You Mean by AI?

2006-11-17 Thread Peter Voss
That'll be when you join our project...

... or buy our product...   :)


-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED] 

Peter,

Thanks!

I look forward to the day when you can tell us more about a2i2. :-)

Pei

On 11/17/06, Peter Voss [EMAIL PROTECTED] wrote:
 Hi Pei,

 Just finished reading your Rigid Flexibility book; it's a nice summary of
 your approach.

 I can recommend it to anyone interested in AGI: If you agree with Pei's
 general approach it provides quite a bit of detail; if you disagree, it
 provides a coherent reference point.

 Peter



[agi] SOTA

2006-10-18 Thread Peter Voss
I'm often asked about state-of-the-art in AI, and would like to get some
opinions.

What do you regard, or what is generally regarded as SOTA in the various AI
aspects that may be, or may be seen to be relevant to AGI?

For example: 

- Comprehensive (common-sense) knowledge-bases and/or ontologies
- Inference engines, etc.
- Adaptive expert systems
- Question answering systems
- NLP components such as parsers, translators, grammar-checkers
- Interactive robotics systems (sensing/ actuation) - physical or virtual
- Vision, voice, pattern recognition, etc.
- Interactive learning systems
- Integrated intelligent systems
... whatever ...

I'm looking for the best functionality -- irrespective of proprietary,
open-source, or academic.



Peter



RE: [agi] Failure scenarios

2006-09-25 Thread Peter Voss
Looking at past and current (likely) failures: trying to solve the wrong
problem in the first place, or not having good enough theory/ approaches to
solving the right problems, or poor implementation.

However, even though you specifically restricted your question to technical
matters, by far the most important reasons are managerial -- i.e. staying
focused on general intelligence, funding, project management, etc.

Peter

http://adaptiveai.com/faq/index.htm#little_progress


From: Joshua Fox [mailto:[EMAIL PROTECTED] 
Sent: Monday, September 25, 2006 5:53 AM
To: agi@v2.listbox.com
Subject: [agi] Failure scenarios

I hope this question isn't too forward, but it would certainly help clarify
the possibilities for AGI.

To those doing AGI development: If, at the end of the development stage of
your project -- say, after approximately five years -- you find that it has
failed technically to the point that it is not salvageable, what do you
think is most likely to have caused it? Let's exclude financial and
management considerations from this discussion; and let's take for granted
that a failure is just a learning opportunity for the next step.

Answers can be oriented to functionality or implementation. Some examples:
True general intelligence in some areas, but so unintelligent in others as
to be useless; Super-intelligence in principle but severely limited by
hardware capacity to the point of uselessness. But of course, I'm interested
in _your_ answers.

Thanks,

Joshua
















RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss
I considered and researched this issue thoroughly a few years ago.

For a summary: http://adaptiveai.com/faq/index.htm#few_researchers

For detail: http://adaptiveai.com/research/index.htm (section 8)

In addition to asking researchers you also need to look at psychological and
hidden motives, as well as the dynamics of funding sources (DARPA, etc.),
business and academia.

Peter


From: Joshua Fox [mailto:[EMAIL PROTECTED]

I'd like to raise a FAQ: Why is so little AGI research and development
being done?...










RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss
Yes, an important point. For our project we invented a new profession: AI
psychologist.

It is very hard to find computer scientists who are comfortable thinking
about a program (AGI) in terms of teaching, training and psychology.
Conversely, developmental and cognitive psychologists usually don't have an
interest in computers/ programming.

Peter

PS. http://adaptiveai.com/company/opportunities.htm


From: Andrew Babian [mailto:[EMAIL PROTECTED]

On Wed, 13 Sep 2006 18:04:31 +0300, Joshua Fox wrote
 I'd like to raise a FAQ: Why is so little AGI research and development
being done?

I think this is a very good question. Maybe the problem has just been
daunting. It seems like only recently have there really started to be some
good theoretical models, and maybe people just haven't realized that it may
have just become reasonable. So maybe some of it is inertia. I'm in town
here with Stan Franklin, who is one of those working on a general model,
though I don't work with his group. He's had a relationship with the
cognitive science people at the university here, and is glad to be able to
do real science. And it does seem like the computer people and psychologists
really are in separate worlds and are not that into reaching out. I remember
talking to a cog psych graduate student who seemed to have interests in
understanding how minds might work. But I'm from an engineering background,
and talking to her, it seemed like she came out and said she was only
interested in how people work, and had no interest in how to get a machine
to do it. A matter of priorities and interest, then, perhaps. As for the
principles, I also seem to remember that they had some trouble getting the
primary cognitive psychologist to get that interested in helping with the
theoretical psychology because he had so many other things he was working
on. My exposure to that group was very limited though, but I remember
getting that feeling. And, they have a cog sci seminar where they really try
to get the computer people to work with the psychologists, but a semester is
too short. I suppose I need to find out if there are any deeper
collaborations going on.










RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss
AGI ideas that are well developed can be quite concrete, as well as having
payoffs in the near future. Our project's business plan aims to do both.

Peter


-Original Message-
From: Eliezer S. Yudkowsky [mailto:[EMAIL PROTECTED] 

Additional factor:  AGI ideas are often vague or analogical.  Even the 
ideas with mathematically describable internals are often vague in the 
explanation of what they are supposed to do, or why they are supposed to 
be intelligent.  It would be harder to cooperate on a project like 
that, than on developing a faster sorting algorithm.  Fuzzy beliefs are 
harder to communicate; communication is the essence of cooperation.


-Original Message-
From: Neil H. [mailto:[EMAIL PROTECTED] 

Of course, one might also argue that they simply didn't venture far
enough to see the proverbial light at the end of the tunnel. I
suppose one of the downsides about AGI is that, unlike more focused AI
research (vision, NLP, etc), there really aren't any intermediate
payoffs between now and the holy grail.




[agi] a2i2 is ready to expand team once again

2006-08-04 Thread Peter Voss
Our latest news flash: http://adaptiveai.com/news/

News Flash

Our project is progressing well, and we are once again looking to expand our
team.

This is an opportunity for a select few individuals to become members of our
core team, and to significantly contribute to the creation of real AGI.

Various Los Angeles based full-time positions are currently open. All offer
competitive cash compensation.

Please see http://adaptiveai.com/company/opportunities.htm for details, and
apply to [EMAIL PROTECTED]com

Peter Voss

Towards Increased Intelligence!











[agi] Anyone for a pre Singularity Summit meeting this Friday?

2006-05-08 Thread Peter Voss
I'll be in Palo Alto from Friday afternoon for the Singularity Summit:
http://sss.stanford.edu/ 

Anyone interested in meeting Friday afternoon or evening to socialize and
talk about Futurism/ Extropy/ AGI ?

You can contact me at [EMAIL PROTECTED] 

Peter Voss

http://www.optimal.org/peter/peter.htm 





[agi] 4 positions open at a2i2

2006-03-07 Thread Peter Voss
A quick update from a2i2:

* We have another two key investors in our company, and now don't expect
funding to be a bottleneck for the remainder of our development project.

* Work on our prototype is progressing well -- we are on schedule to achieve
robust human-level learning and cognition within 2 years.

* We have four Los Angeles based full-time positions open. All are offering
competitive cash compensation.

- Entry-level AI Psychologist/ programmer
- Experienced, and highly competent programmer
- Software engineering team-leader / CTO candidate
- Senior AI psychologist to head up our AGI training/ testing/ knowledge
acquisition effort

These are opportunities to become a member of our core team, and to
significantly contribute to the creation of real AGI.

Please see: http://adaptiveai.com/company/opportunities.htm and apply to
[EMAIL PROTECTED]

Peter Voss

Adaptive A.I. Inc









[agi] a2i2 news update: still looking for additional talent

2005-11-11 Thread Peter Voss
a2i2 is still looking for additional team members.

http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter Voss

Adaptive A.I. Inc






[agi] a2i2 opportunities

2005-09-07 Thread Peter Voss
We are now well into the implementation phase of our AGI project, but have
not yet found suitable candidates for two key positions -

1) Senior Programmer/Team Leader (CTO candidate)

2) Chief AI Psychologist/Team Leader

http://adaptiveai.com/company/opportunities.htm

Please pass this along to anyone you think may be suitable.

Peter Voss

Adaptive A.I. Inc.



[agi] a2i2 news update - We're Hiring!

2005-07-08 Thread Peter Voss
All Systems Go for Project Aigo - We're Hiring!

Please spread the word. Help us find additional talent.

http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter Voss

a2i2 - Adaptive A.I. Inc.







[agi] a2i2 - news update (seeking CTO)

2005-04-09 Thread Peter Voss
Funding progress!

News update: http://adaptiveai.com/news/

Please help us find a CTO: http://adaptiveai.com/company/opportunities.htm

(Also seeking LA-based programmer)

Towards Increased Intelligence!

Peter






[agi] a2i2 news update

2005-03-04 Thread Peter Voss
A news update: http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter









[agi] a2i2 news: The Next Phase

2005-01-11 Thread Peter Voss
Our project is ready to make the transition from doing internal,
proof-of-concept research to developing a fully-functioning, high-level
working model.

We are seeking suggestions for one or two additional advisors to our
project.

http://adaptiveai.com/news/index.htm

Towards Increased Intelligence!

Peter



[agi] Accelerating Change Conference (Early Bird Registration Deadline: Sept 30th)

2004-09-29 Thread Peter Voss
Accelerating Change Conference - Physical Space, Virtual Space, and
Interface
Stanford University, Palo Alto CA
November 5 - 7, 2004


In case you guys didn't know about this cool futurist conference:
http://www.accelerating.org/ac2004/

Five of us from a2i2 will be there!

Hope to see some of you there.

Best,

Peter


PS. Note: Early bird registration deadline is Sept 30th



RE: [agi] Psychometric AI

2004-09-17 Thread Peter Voss
I can't find it in the archives. Can you give me a link?

Thanks,

Peter


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of J. W. Johnston

...As AGI testing and validation goes, some might recall in my IVI
Architecture posted here about a year ago, I specified testing to proceed
from Mental Status Tests (basic orientation, attention, memory, etc. tests
like a human neurologist would administer) ...



RE: [agi] SOTA 50 GB memory for Windows

2004-08-21 Thread Peter Voss
Visual Studio (beta) with 64-bit (c#) compilation is available now:

http://lab.msdn.microsoft.com/vs2005/productinfo/productline/

as is Windows XP 64-bit for testing.

That's all one needs for development.

Peter




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Shane
Sent: Friday, August 20, 2004 9:57 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] SOTA 50 GB memory for Windows



Well I guess I have become skeptical about when they will
release such a thing as they have been saying that they will
put out a 64 bit version for Intel for years now but then
always pushing back the release date.  No doubt it's related
to the difficulties Intel has been having with Itanium.
If Intel deliver their AMD compatible 64 bit chips reasonably
soon then surely a 64 bit Windows release can't be too far
away.

The other thing is that when something as big as this changes
in the OS it can take a while for various things to straighten
themselves out.  Things like development tools, devices drivers
and so on.  I guess for you the key thing is when they will
deliver a 64 bit version of C# and associated tools.

Curiously, you could get 64 bit Windows for Alpha CPUs about
8 years ago!  A friend of mine used to develop things for it
way back then, but that version of Windows was eventually
killed off by Microsoft.

Shane

Peter Voss wrote:
 Microsoft Updates 64-bit Windows XP Preview Editions

 http://www.eweek.com/article2/0,1759,1637471,00.asp

 Peter



[agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
What are the best options for large amounts of fast memory for Windows-based
systems?

I'm looking for price & performance (access time) for:

1) Cached RAID
2) RAM disks
3) Internal RAM (using 64 bit architecture?)
4) other

Thanks for any info.

Peter



RE: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
Thanks Andrew.

I didn't realize that RAID cache doesn't help on reads (like RAM disks do).
Just how expensive is a high-performance 50GB RAM disk system?

Off hand, anyone know the progress/ETA on Intel EM64T for .net languages (C#)?

Also, what Windows compatible machines offer the most RAM? (Dell seems to
max out at 8 GB)

Peter



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of J. Andrew Rogers
Sent: Friday, August 20, 2004 2:12 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] SOTA 50 GB memory for Windows


 I'm looking for price  performance (access time) for:

 1) Cached RAID


This will be useless for runtime VM or pseudo-VM purposes.  RAID cache
isolates the application from write burst bottlenecks when syncing disks
(e.g. checkpointing transaction logs), but that's about it.  For flatter
I/O patterns, you'll lose 3-4 orders of magnitude access time over
non-cached main memory and it won't be appreciably faster than raw
spindle.  Wrong tool for the application.


 2) RAM disks


Functionally workable, but very expensive.  It is much cheaper per GB to
buy the biggest RAM chips you can find and put them on the motherboard.
 The primary advantage is that you can scale it to very large sizes
while only losing somewhere around an order of magnitude versus main
core if done well.


 3) Internal RAM (using 64 bit architecture?)


The best performing, and relatively cheap too.  You can slap 32 GB of
RAM in an off-the-shelf Opteron system for not much money.  The biggest
problem is finding motherboards with loads of memory slots and the fact
that there is a hard upper bound on how much memory a given system will
support.


 4) other


Nothing I can think of that will work with Windows.  There are other
performant and cost-effective options for Linux/Unix systems.


A compromise might be to max out system RAM within reason (e.g. using
2GB DIMMs), and then using RAM disks on a fast HBA to get the rest of
your capacity.  All of this will require a 64-bit OS to be efficient.


j. andrew rogers
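
As a rough way to see what this means in practice (illustrative only; the
64 MB block size and 8 GB cap below are arbitrary), a C# process can simply
keep allocating until it hits either the cap or an OutOfMemoryException -- a
32-bit process typically tops out well under 2 GB, while a 64-bit process is
limited mainly by physical RAM and swap:

// Rough probe, not a benchmark: allocate managed memory in 64 MB blocks
// until we hit an arbitrary 8 GB cap or the runtime refuses.
using System;
using System.Collections.Generic;

class MemoryProbe
{
    static void Main()
    {
        Console.WriteLine("{0}-bit process", IntPtr.Size * 8);

        const int BlockBytes = 64 * 1024 * 1024;
        const long CapBytes = 8L * 1024 * 1024 * 1024;  // stop before thrashing swap

        var blocks = new List<byte[]>();
        long allocated = 0;
        try
        {
            while (allocated < CapBytes)
            {
                blocks.Add(new byte[BlockBytes]);
                allocated += BlockBytes;
            }
            Console.WriteLine("Reached the cap at {0} MB without failing.",
                              allocated / (1024 * 1024));
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Allocation failed near {0} MB.",
                              allocated / (1024 * 1024));
        }
    }
}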




RE: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
Microsoft Updates 64-bit Windows XP Preview Editions

http://www.eweek.com/article2/0,1759,1637471,00.asp

Peter



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
Shane

. However there isn't a 64 bit version of Windows on the market nor will
there be for some time.  Thus your only option is to run something like
Linux if you want to have all this data being accessed by code directly in
RAM





[agi] a2i2 news update ( seeking AI Psychologists)

2004-07-08 Thread Peter Voss
A news update: http://adaptiveai.com/news/index.htm

Please also see our ad for AI Psychologists:
http://adaptiveai.com/company/opportunities.htm

Towards Increased Intelligence!






RE: [agi] AGI research consortium

2004-06-17 Thread Peter Voss
YKY: we can form a research consortium to better capture market value and
so everyone will get a slice of the pie. This will also facilitate
communication and external knowledge sharing between AGI groups (such as
sharing a virtual sensory environment testbed)...

It could make sense to share virtual test environments and test setups --
however, they need to be (made) compatible. That may not be all that easy;
e.g. we use .net/C#.



YKY: ...Maybe we can start a discussion on how to divide future AGI revenues
among different groups.

That's potentially 'easy' for any for-profits: just pay for services & tools
in shares. For example we would pay you in shares for anything of yours we
used -- and vice versa. If we can't agree on value, then we don't deal. The
main problem I see with this is that people tend to overvalue their own
work/ shares.

Peter




RE: [agi] Dogs can learn language...

2004-06-10 Thread Peter Voss
We are not going for 'dog-level intelligence' per se -- rather, roughly the
best cognitive abilities of various animals & human infants.

Actually our current 'phase3' specs already include Alex's (The Parrot)
abilities. Also, I don't see any particular difficulty with our current
design/ system learning hundreds of different concept or goal references.

Peter


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Ben Goertzel
Sent: Thursday, June 10, 2004 10:49 AM
To: [EMAIL PROTECTED]
Subject: [agi] Dogs can learn language...


From the New York Times today ... This is perhaps pertinent to Peter
Voss's notion of dog-level intelligence ;-)

ben




RE: [agi] Robot Brain Project

2004-06-07 Thread Peter Voss
Thanks, Ben - a real find.

May be able to use parts of their algorithms.

Peter



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Ben Goertzel
Sent: Sunday, June 06, 2004 7:46 PM
To: [EMAIL PROTECTED]
Subject: [agi] Robot Brain Project


Check it out -- interesting project!

http://ed-02.ams.eng.osaka-u.ac.jp/~kfm/




[agi] a2i2 news update (seeking another 'AI Psychologist' to join team)

2004-03-15 Thread Peter Voss
A brief news update: http://adaptiveai.com/news/index.htm







[agi] Down Under

2004-02-06 Thread Peter Voss
I'll be visiting Sydney, Brisbane, Perth and Auckland later this month.

If anyone Down Under wants to meet me to chat about the Singularity, AGI,
etc. drop me a line  mailto:[EMAIL PROTECTED]

Peter






[agi] Umnick

2004-02-02 Thread Peter Voss
Anyone here have any real information on
http://www.umnick.com/Eng/DigitalBrain.asp?DigitalBrain_001.asp ?



[agi] a2i2 Project Review/Update

2004-01-06 Thread Peter Voss
Here is a review and status report on our project:
http://adaptiveai.com/news/index.htm

We are again actively looking for (at least) two additional LA-based team
members. Contact me for details.  mailto:[EMAIL PROTECTED]

Towards Increased Intelligence!

Peter



[agi] Building a safe AI

2003-02-20 Thread Peter Voss


http://www.optimal.org/peter/siai_guidelines.htm

Peter




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
Behalf Of Ben Goertzel

I would recommend Eliezer's excellent writings on this topic if you don't
know them, chiefly www.singinst.org/CFAI.html .  Also, I have a brief
informal essay on the topic, www.goertzel.org/dynapsyc/2002/AIMorality.htm ,
although my thoughts on the topic have progressed a fair bit since I wrote
that.  Note that I don't fully agree with Eliezer on this stuff, but I do
think he's thought about it more thoroughly than anyone else (including me).

It's a matter of creating an initial condition so that the trajectory of the
evolving AI system (with a potentially evolving goal system) will have a
very high probability of staying in a favorable region of state space ;-)




RE: [agi] Reinforcement learning

2003-02-13 Thread Peter Voss
Thanks, Ben. Looks really interesting. Hadn't seen it before.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ben Goertzel

Hi all,

As a digression from the recent threads on the Friendliness or otherwise of
certain uncomputable, unimplementable AI systems, I thought I'd post
something on some fascinating practical AI algorithms. These are narrow-AI
at present, but they definitely have some AGI relevance.

Moshe Looks recently pointed out some very exciting work to me, by a guy
named Eric Baum.

...Put simply, this guy seems to have actually made reinforcement learning
work.
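
For readers who haven't met the term, here is a minimal sketch of what
tabular reinforcement learning looks like: a toy 5-state corridor with the
standard Q-learning update. This is generic textbook RL, not Baum's Hayek
system or anyone's AGI design; the corridor, rewards and parameters are
invented for the example.

// Toy Q-learning: states 0..4 in a corridor, actions left/right, reward 1
// for reaching the rightmost state. The agent learns to prefer "right".
using System;

class QLearningToy
{
    static void Main()
    {
        const int States = 5, Actions = 2;        // actions: 0 = left, 1 = right
        const double alpha = 0.1, gamma = 0.9, epsilon = 0.1;
        double[,] q = new double[States, Actions];
        var rng = new Random(0);

        for (int episode = 0; episode < 500; episode++)
        {
            int s = 0;
            while (s != States - 1)
            {
                // epsilon-greedy action selection
                int a = rng.NextDouble() < epsilon
                    ? rng.Next(Actions)
                    : (q[s, 1] >= q[s, 0] ? 1 : 0);

                int next = a == 1 ? Math.Min(s + 1, States - 1) : Math.Max(s - 1, 0);
                double reward = next == States - 1 ? 1.0 : 0.0;

                // Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
                double best = Math.Max(q[next, 0], q[next, 1]);
                q[s, a] += alpha * (reward + gamma * best - q[s, a]);
                s = next;
            }
        }

        for (int s = 0; s < States - 1; s++)
            Console.WriteLine("State {0}: prefer {1}", s,
                              q[s, 1] >= q[s, 0] ? "right" : "left");
    }
}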