Hi,
It seems that what you are saying, though, is that a KR must involve
probabilities in some shape or form and the ability of a
representation to jump up a level and represent/manipulate other
representations, not just represent the world.
Yes, and these two aspects must work together so
I believe that to be adequate, the code language must incorporate
something loosely analogous to probabilistic logic (however
implemented) and something analogous to higher-order functions
(however implemented). I.e. it must be sensibly viewable as a
probabilistic logic based functional
I'm not sure what you mean by "higher order functions"
Functions that take functions as arguments -- I mean the term in the
sense of functional programming languages like Haskell ...
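Since the term comes up again below, here is a minimal sketch of a higher-order function in the functional-programming sense; the function names are invented for illustration, not taken from Haskell or any AGI codebase:

```python
# A higher-order function takes another function as an argument
# (or returns one). Here `twice` takes `f` and applies it two times.

def twice(f, x):
    """Apply f to x twice: twice(f, x) == f(f(x))."""
    return f(f(x))

def add_three(n):
    return n + 3

print(twice(add_three, 10))  # 10 -> 13 -> 16, prints 16
```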
and "probabilistic programming language", can you spell out please?
I mean a language (or code library
My guess at a good basis for KR is simply the cleanest, most powerful, and
most general programming language I can come up with. That's because to
learn
new concepts and really understand them, the AI will have to do the
equivalent of writing recognizers, simulators, experiment generators,
Hi,
Just out of curiosity - would you mind sharing your hardware estimates
with the list? I would personally find that fascinating.
Many thanks,
Stefan
Well, here is one way to slice it... there are many, of course...
Currently the bottleneck for Novamente's cognitive processing is the
However, in the current day, I would say that we can list some principles
that any successful project must comply with. Anyone want to start the list?
Sergio Navega.
Sergio,
While this is an interesting pursuit, I find it much more difficult
than the already-hard problem of articulating some
Hi all,
I have spent some time recently mulling over the details of a
partially-new language for communicating between humans and AI's. The
language is (tentatively) called Lojban++ and is described here:
http://www.goertzel.org/papers/lojbanplusplus.pdf
Of course, I don't think that a
I continue to maintain that:
* syntactic ambiguity is unnecessary in a language of thought or communication
* some level of semantic ambiguity is unavoidable and in fact essential...
ben
On 8/20/06, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 8/19/06, Ben Goertzel [EMAIL PROTECTED
The NL word "blackboard" maps to either a board that is black in color
or a board for writing that is usually black/green/white. The KR of those
concepts is unambiguous; it's just that there are 2 alternatives.
This is very naive... a concept such as "a board that is black in
color" is not
previous files except in its executable since it explicitly says
without input from other sources -- and the size of the executable counts
as part of the compressed size.
Mark
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 15
Hi,
Phil wrote:
There isn't a problem in doing it, but there's serious doubts whether
an approach in which symbols have constant meanings (the same symbol
has the same semantics in different propositions) can lead to AI.
Sure, but neither Novamente nor NARS (for example) has the problematic
Interesting...
Note also this:
http://research.cyc.com/
which apparently makes the full contents of the Cyc knowledge base
available to researchers in academia or industry, so long as they use
it only for research purposes...
Personally, my attitude on Cyc is:
* from all I have read and
Matt,
You've stated that any knowledge that can be demonstrated verbally CAN
in principle be taught verbally. I don't agree that this is
necessarily true for ANY learning system, but that's not the point I
want to argue.
My larger point is that this doesn't imply that this is how humans do
Hi,
About the Hutter Prize (see the end of this email for a quote of the
post I'm responding to, which was posted a week or two ago)...
While I have the utmost respect for Marcus Hutter's theoretical work
on AGI, and I do think this prize is an interesting one, I also want
to state that I don't
Howdy Shane,
I'll try to put my views in your format
I think that
Extremely powerful, vastly superhuman AGI == outstanding Hutter test result
whereas
Human-level AGI =/= Good Hutter test result
just as
Human =/= Good Hutter test result
and for this reason I consider the Hutter test a
I don't think it's anywhere near that much. I read at about 2 KB
per minute, and I listen to speech (if written down as plain text)
at a roughly similar speed. If you then work it out, by the time
I was 20 I'd read/heard not more than 2 or 3 GB of raw text.
If you could compress/predict
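The arithmetic behind the estimate above can be checked roughly; the reading rate and hours per day below are my assumptions for the sketch, not figures from the post:

```python
# Back-of-envelope check of the "2-3 GB by age 20" estimate.
# Assumed: 2 KB of text per minute, 2 hours/day of reading or
# listening, over 20 years.

kb_per_minute = 2
minutes_per_day = 2 * 60
days = 365 * 20

total_kb = kb_per_minute * minutes_per_day * days
total_gb = total_kb / (1024 * 1024)
print(round(total_gb, 1))  # about 1.7 GB, in line with the 2-3 GB figure
```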
Hi,
It's easy enough to write out algebraic rules for manipulating fuzzy
qualifiers like "very likely", "may", and so forth. It may well be
that the human mind uses abstract, intuitive, algebraic-like rules for
manipulating these, instead of or in parallel to more quantitative
methods...
However,
Ben: I think the problem of contextuality may be solved like this:
Examples:
John and Mary have many kids. (like, 10)
This Chinese restaurant has many customers. (like 100s)
Many people in Africa have AIDS. (like 10s of millions)
so I propose a rule like this:
IF
n is significantly the
No. IMO, a simple rule like this does not correctly capture human
usage of qualifiers across contexts, and is not adequate for AI
purposes
Perhaps this rule is a decent high-level approximation, but AGI
requires better...
-- Ben
On 8/4/06, Yan King Yin [EMAIL PROTECTED] wrote:
Ben:
Yeah, and I'd think modifiers like "many" are easily handled by a
probability distribution determined by the context over integers. Easily at
least in theory, that is, since the details of choosing an appropriate
distribution in any given context might be a bit tricky.
Right, but the question is,
YKY
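The "probability distribution determined by the context over integers" idea can be sketched as follows; the context labels and numeric ranges are invented for illustration (loosely matching the earlier examples), and a uniform range stands in for whatever distribution a real system would learn:

```python
# "Many" maps to very different integer scales depending on context.
# A crude sketch: a context-indexed range, sampled uniformly.

import random

MANY_SCALE = {
    "children_per_family": (5, 15),           # "many kids" ~ 10
    "restaurant_customers": (100, 999),       # "many customers" ~ 100s
    "africa_aids_cases": (10**7, 5 * 10**7),  # "many people" ~ 10s of millions
}

def sample_many(context, rng=random):
    """Sample a plausible count for 'many' in the given context."""
    lo, hi = MANY_SCALE[context]
    return rng.randint(lo, hi)
```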
1) I agree that the brain's probabilistic reasoning does not involve
high-precision calculations, but rather rough heuristic estimations
2) Of course, the brain has a LOT of stuff going on internally that is
not accessible to consciousness. In very many ways our unconscious
brains are
Hi,
On 8/2/06, Pei Wang [EMAIL PROTECTED] wrote:
Short answer: (1) AGI needs to allow fuzzy concepts, and to handle
fuzziness properly,
Agreed: e.g. fuzzy modifiers like "more", "very", "many", "some", etc. must be
handled by an AGI system, along with fuzzy membership statements like
Fido is a member
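A fuzzy membership statement and a fuzzy modifier can be sketched together; the entities, degrees, and the squaring treatment of "very" (Zadeh's classic concentration operator) are illustrative assumptions, not how any particular AGI system does it:

```python
# Fuzzy membership: "Fido is a member of 'dog'" holds to a degree in
# [0, 1] rather than being simply true or false.

membership = {
    ("Fido", "dog"): 0.9,
    ("Fido", "puppy"): 0.4,
}

def very(degree):
    """One classic fuzzy treatment of 'very': concentrate by squaring."""
    return degree ** 2

# "Fido is very much a dog" gets a lower degree than plain membership.
fido_very_dog = very(membership[("Fido", "dog")])  # ~0.81
```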
Google's data will be accessible to any AI anywhere, right? And
computer power can be built up pretty quickly by anyone with a lot of
money...
Just as Google seemingly arose out of nowhere, so could some other
organization, I reckon...
Google is certainly well-positioned, but it would seem
Hi,
I agree that as this list gets busier, a bit more moderation will be necessary.
I'll mull this during the next week, and perhaps appoint someone as an
official moderator (maybe myself, but maybe someone else).
-- Ben Goertzel (list owner)
On 7/28/06, Eugen Leitl [EMAIL PROTECTED] wrote
an intelligent and civilized
bunch, a fact for which I am grateful.
I have chosen Bruce for this august responsibility because -- as well
as being online a lot, savvy about AGI, and willing to do the job --
he is a very sensible, friendly and polite human being
Yours,
Ben Goertzel
Hmmm...
About the measurement of general intelligence in AGI's ...
I would tend to advocate a vectorial intelligence approach
I tend to think that quantitatively or otherwise precisely defining
and measuring general intelligence -- as a single number -- is a bit
of a conceptual and pragmatic
Hi,
On a related subject, I argued in What is Thought? that the hard
problem was not processor speed for running the AI, but coding the
software,
This is definitely true.
However, processor speed for research is often a significant issue.
With faster processors, it would be quicker to run
Eugen> Trust me, the speed is. Your biggest problem is memory
Eugen> bandwidth, actually.
Well, on this we differ. I can appreciate how you might think memory
bandwidth was important for some tasks, although I don't, but
I'm curious why you think it's important for planning problems like
Sokoban or
filling the AI's mind with a bunch of junk. Of course, I
haven't bothered to learn Lojban well yet, though ;-( ...
-- Ben
On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:
Ben Goertzel [EMAIL PROTECTED] wrote:
While AIXI is all a bit pie-in-the-sky mathematical philosophy, if you like
than I alone
could ever do.
James Ratcliff
Ben Goertzel [EMAIL PROTECTED] wrote:
I think that public learning/training of an AGI would be a terrible
disaster...
Look at what happened with OpenMind and MindPixel. These projects
allowed the public to upload knowledge into them, which resulted
are
discussed in a couple chapters in a more general way...
-- Ben Goertzel
---
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
Oops, I have just been informed by BrownWalker that the buy password
does not work on that site yet. It will be activated within a couple
weeks. The book will also be available thru Amazon...
-- Ben
On 7/8/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
Individuals interested
Hi,
Of course the premise is that the DAG HTM will work in a way that, for all the
situations the system encounters in its environment, there can be a
good (90%?) prediction winner at each level. That there is no need for
cooperation between nodes on the same level in order to achieve good
Hi,
So my guess is that focusing on the practical level for building an AGI
system is sufficient, and it's easier than focusing on very abstract
levels. When you have a system that can e.g. play soccer, tie shoe laces,
build fences, throw objects to hit other objects, walk through a terrain
to
Eric Baum wrote:
It is demonstrably untrue that the ability to predict the effects of
any action, suffices to decide what actions one should take to
reach one's goals.
For example, given a specification of a Turing machine, one can
predict its sequence of states if one feeds in any particular
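Baum's distinction can be illustrated with a toy state machine: forward prediction is mechanical, while goal-directed action selection requires search. All details here (states, actions, transitions) are invented for illustration:

```python
# Toy illustration: forward prediction (following a machine's
# transition table) does not by itself tell you which actions to
# take to reach a goal; that requires search over action sequences.

from itertools import product

transitions = {
    ("s0", "a"): "s1", ("s0", "b"): "s0",
    ("s1", "a"): "s1", ("s1", "b"): "s2",
    ("s2", "a"): "s2", ("s2", "b"): "s2",
}

def predict(start, actions):
    """Prediction: mechanically follow the transition table."""
    state = start
    for act in actions:
        state = transitions[(state, act)]
    return state

def plan(start, goal, max_len=4):
    """Action selection: brute-force search over action sequences."""
    for n in range(1, max_len + 1):
        for seq in product("ab", repeat=n):
            if predict(start, seq) == goal:
                return list(seq)
    return None
```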
This is more cog-sci than AGI oriented, but it's interesting...
http://www.physorg.com/news69338070.html
New analysis of the language and gesture of South America's
indigenous Aymara people indicates they have a concept of time
opposite to all the world's studied cultures -- so that the past
In Hawkins' HTM architecture it can be imagined that each node contains an
action proposal system. And that actions (and goals) of a node are formulated
in terms of the concepts that are present at that node, and then that those
actions are pushed down the hierarchy where they cause more concrete
More cool stuff...
-- Forwarded message --
From: Neil H. [EMAIL PROTECTED]
Date: Jun 16, 2006 5:22 PM
Subject: Paper: Inducing savant-like counting abilities with rTMS
To: [EMAIL PROTECTED]
(x-posted to extropy-chat)
Back in 2003 there was a popular-press article on Allan
the dependencies
of any concept you would end up on the perception level.
Arnoud
On Thursday 15 June 2006 14:20,
Ben Goertzel wrote:
Hi,
I have read Hawkins' paper carefully and I enjoyed it.
As for the generality of applicability of HTM, here is my opinion..
The specific manifestation of hierarchical
Hi,
Without common
interfaces, Novamente processes must have a common internal design and I
would contend that this is a large disadvantage.
But, it is not the case that Novamente processes must have a common
internal design
Can I convince you that it is sufficient for a process to be
Richard,
You say the following about your interesting-sounding neural net AGI system:
My big issue is that the system depends on laborious experimentation to
find stable configurations of local parameters that will get all these
processes to happen at once. I believe that this has to be done
Hi,
My own approach is to design a cognitive architecture that has elements
that look somewhat like neurons in some respects, but which have some
properties that make it easy for them to combine in such a way as to
represent abstract knowledge.
This sounds nice ... but what are these
Eric,
I have not received Les Valiant's book yet but I have now read the
papers on his site, and it seems to me that none of them addresses the
questions I asked ;-)
He does a nice and thorough job of explaining how a simple semantic
network [consisting of concept nodes denoting sets or
one meaningful example]
-- Ben
On 6/14/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Eric,
I have not received Les Valiant's book yet but I have now read the
papers on his site, and it seems to me that none of them addresses the
questions I asked ;-)
It was a pretty
Mark,
Hmmm... In this conversation, we seem to be completely talking past
each other and not communicating meaningfully at all...
You say that
In most blackboard systems (i.e. those where all processes share the
same collection of active knowledge) and, more particularly, in 100% of
Hi,
The last
thing that you should be doing is co-varying parameters all over the map.
It's no wonder that you're having stability problems.
You seem to be confusing Novamente with Richard Loosemore's system...
Novamente does NOT have stability problems, in any sense...
The way this
Hi all,
Well, this list seems more active than it has been for a while, but
unfortunately this increased activity does not seem to be correlated
with a more profound intellectual content ;-)
So, I'm going to make a brazen attempt to change the subject, and
start a conversation about an issue
Hi,
If you're using a virtual environment for AGI testing,
are you rolling your own (if yes, open-sourced?), or
using an off-shelf one?
A little of both... we have built our own (open source) 3D simulation
world environment, AGISim
sourceforge.net/projects/agisim/
but it's based on the
More importantly, why hasn't this guy been banned from the list yet?
I'm new here, so if there's a "no bans" policy I don't know about, please
excuse the question.
http://www.nothingisreal.com/mentifex_faq.html
I would assume that you all would have read this page with details
about this spammer?
Hi Sanjay,
On 6/12/06, William Pearson [EMAIL PROTECTED] wrote:
On 10/06/06, sanjay padmane [EMAIL PROTECTED] wrote:
I feel you should discontinue the list. That will force people to post there.
I'm not using the forum only because no one else is using it (or very
few), and everyone is
Phil,
The answer is
* I believe the Forum is a superior mode of communication, IF PEOPLE
WILL USE IT, because of the much nicer threading and archiving
facilities
* People in this community seem to prefer to use a list to a forum
So, the Forum exists in the hopes that eventually discussion
Hi Eli,
First, as discussed in the chapter, there's a major context change
between the AI's prehuman stage and the AI's posthuman stage. I can
think of *many* failure modes such that the AI appears to behave well as
a prehuman, then wipes out humanity as a posthuman. What I fear from
this
Hi,
When reading this nice survey paper of Eliezer's
_Cognitive biases potentially affecting judgment of global risks_
http://singinst.org/Biases.pdf
I was reminded of some of the heuristics and biases that exist in the
Novamente system right now.
For instance, consider the case of
Hi,
The chapters are:
_Cognitive biases potentially affecting judgment of global risks_
http://singinst.org/Biases.pdf
...
_Artificial Intelligence and Global Risk_
http://singinst.org/AIRisk.pdf
The new standard introductory material on Friendly AI. Any links to
_Creating Friendly
I suppose the subtext is that your attempts to take the intuitions
underlying CFAI and turn them into a more rigorous and defensible
theory did not succeed.
That's a very interesting jump. Perhaps he's merely not finished
yet?
-Robin
Ok... I should have said "did not succeed YET", which is
Check out this paper...
http://www.numenta.com/Numenta_HTM_Concepts.pdf
I think it's a good article.
It seems to fairly fully reveal the scope and nature of their current
scientific activities, though it says nothing about their plans for
commercialization or other practical application.
Well, the main disadvantage of not representing knowledge is that
doing so makes you completely unintelligent ;-) [Of course, whether
or not this is really a disadvantage is a philosophical question, I
suppose. It has been said that ignorance is bliss ... ]
Seriously: Do you mean to suggest
My question was more about the different methodologies of Knowledge
Representation (KR) and Knowledge Base (KB) types of designs, and their
performance at retrieving facts with respect to the computer time/computer
instructions required to retrieve facts and the storage requirements.
Well, viewing the
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, May 31, 2006 10:28 AM
Subject: Re: [agi] Best methods of Knowledge Representaion and Advantages
Disadvantages?
My question was more to the different methodology of knowledge
Representations
?
Dan Goe
From : Ben Goertzel [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Data there vs data not there, Limits to storage?
Date : Wed, 31 May 2006 11:51:45 -0400
Novamente can run on a distributed network of machines, using both
gets to first grade?
12th grade? College?
PhD status?
Dan Goe
From : Ben Goertzel [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Largest test to date?.. Data there vs data not
there..
Date : Wed, 31 May 2006 12:14:13 -0400
From : Ben Goertzel [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Estimate of NM grade time table? Largest test to
date?
Date : Wed, 31 May 2006 12:30:38 -0400
On Wed, 31 May 2006 11:24:00 -500, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote
YKY,
First, can you define procedural knowledge?
I don't want to give a formal definition in the context of this email
discussion...
The informal notion is: a piece of procedural knowledge is something
that can directly be used to generate a series of actions.
Here "directly" should be
Shane,
I'm not a neuroscientist either, but I do know there is definitely
plenty of evidence about localization of specific types of memory in
the brain:
For instance,
* Episodic memory tends to be stored in the neocortex, particularly the
right frontal and temporal lobes
* Semantic memory tends to
In Novamente nodes may contain procedures. IMO this makes the knowledge
representation very complex. In my model I use a flat representation akin
to predicate logic / semantic network. This is one of the key assumptions I
make, ie that a flat representation is sufficient for AGI.
The
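The "flat representation akin to predicate logic / semantic network" mentioned above can be sketched as a plain set of predicate-argument tuples, with no procedures nested inside nodes; the specific facts and the query helper are invented for illustration:

```python
# A flat KR: every piece of knowledge is a top-level (predicate,
# subject, object) tuple; nothing is hidden inside node-local code.

facts = {
    ("isa", "Fido", "dog"),
    ("isa", "dog", "animal"),
    ("color", "Fido", "brown"),
}

def query(predicate, subject):
    """Return every object o with (predicate, subject, o) in the KB."""
    return {o for (p, s, o) in facts if p == predicate and s == subject}

# query("isa", "Fido") -> {"dog"}
```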
Not the baby-halving threat, actually.
http://www.geocities.com/eganamit/NoCDT.pdf
Here Solomon's Problem is referred to as "The Smoking Lesion", but the
formulation is equivalent.
Thanks for the reference. The paper is entertaining, in that both the
theories presented (evidential decision
is *necessary* for understanding the various logical puzzles
and paradoxes we've been discussing in this thread, though perhaps it
may provide a useful perspective.
More later,
Ben
On 5/26/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Thanks for the reference. The paper
Hi Eliezer,
I worked out an analysis based on correlated computational processes -
you treat your own decision system as a special case of computation and
decide as if your decision determines the output of all computations
that are similar to the decision. Or to put it another way, you don't
Indeed. Also, bear in mind that SIAI is only one organizer of the
Summit; and that the goal was to fit in all the viewpoints, rather than
all the people. Ben Goertzel and Eliezer Yudkowsky may seem different
if your accustomed environment is the SL4 mailing list, but from the
Summit's
Hi,
If any of you have 14 minutes to spare for some silliness, my son Zeb
(age 12) has made a brief animated movie about how two of my
colleagues and I create a superhuman AI called Novamente that destroys
the universe (yeah, I gave him some plot suggestions ;-).
See
On 5/10/06, Bill Hibbard [EMAIL PROTECTED] wrote:
I am concerned that the Singularity Summit will not include
any speaker advocating government regulation of intelligent
machines. The purpose of this message is not to convince you
of the need for such regulation, but just to say that the
Summit
Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Sunday, May 07, 2006 9:41 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Logic and Knowledge Representation
Hi,
My opinion on the most probable route to a true AI Entity is:
1. Build a better fuzzy pattern representation language
Hmmm
The inimitable Mentifex wrote:
http://www.blogcharm.com/Singularity/25603/Timetable.html
2006 -- True AI
2007 -- AI Landrush
2009 -- Human-Level AI
2011 -- Cybernetic Economy
2012 -- Superintelligent AI
2012 -- Joint Stewardship of Earth
2012 -- Technological Singularity
Regarding
Sorry all, I will remove this spammer from the list...
ben
On 3/16/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hi all,
This conference looks potentially interesting.
-- Ben
Call for Extended Abstracts
Toward Social Mechanisms of Android Science
An ICCS Symposium co-located at
CogSci 2006, Vancouver, Canada, 26 July 2006
androidscience.com
Authors are invited to submit two-page extended abstracts to
Hi,
I don't know if Novamente currently has such a PWM (perhaps by another
name). Anyway, my vision module has to interact with the PWM. The main
function of the vision module is to map the geon-based model to appearances.
I've had this part roughly figured out.
Novamente's part is to
Interesting...
Apparently they realized that without cognition a humanoid robot is
useless... and then instead of working on cognition just gave up.
Pfeehhh!!
http://news.com.com/Sony+puts+Aibo+to+sleep/2100-1041-6031649.html?part=dhttag=nl.e703
OK, YKY ... thanks!
ben
On 1/21/06, Yan King Yin [EMAIL PROTECTED] wrote:
Thanks, Ben for holding the conference, and for persistently pushing the
status of AGI forward.
I will try to submit a presentation for my group's vision-for-AGI project,
but I may not be able to participate
Peter Norvig (one of Google's AI leaders) shed some light onto this at
his talk at the ACC05 conference.
What he alluded to there was a goal, in 5+ years from now, of having a
system that can answer any natural language query whose answer exists
somewhere on the Internet.
E.g. if asked Who was
It's a rare occurrence, but I have just read an AI research paper which
is of nontrivial interest...
A model of syntactic parsing based almost entirely on the
mechanisms in the physical reasoning model, making the case for the
cognitive substrate principle.
N. L. Cassimatis (2004).
Hi Pei,
The topics where I agree with Cassimatis:
* humans use the same or similar mechanisms for linguistic and
nonlinguistic cognition
* there are dualities between elements of physical and grammatical
structure
* infant physical reasoning mechanisms are sufficient to infer
There is linguistic-specific knowledge (which is learned), but no
linguistic-specific inference rule (which is innate). The rules alone
are not enough to produce human-level NLP performance, though they should
be sufficient to learn the needed knowledge (given proper experience,
of course).
A
Hi all,
Sorry about that SPAM, I am a bit perplexed as the list is already
configured to allow posts by subscribers only. And
[EMAIL PROTECTED]
is not and never was subscribed to the list.
I have contacted customer support at listbox and I presume they will
be able to tell me how to solve
Based on my somewhat but not completely thorough understanding of the
US military/intel community (I live near DC, have done some consulting
for the community, and know a lot of folks involved with it), I find
it very unlikely that they are seriously pursuing AGI R&D. However,
*watching* people
is meant
to not compete with the civilian people; in fact in many cases it even helps
(Internet). I don't know how much it would affect the civil area if the
army had AGI earlier.
Márk
On 12/19/05, Ben Goertzel [EMAIL PROTECTED] wrote:
Mark,
But a few years shift can make a huge
I guess they have just decided that my research is sufficiently
interesting to keep up to date on. Though getting hits from these
people on a daily basis seems a bit over the top. I only publish
something once every few months or so!
Shane
I suppose this means they are using a very
hi,
My strategy is to first discuss the most typical models of the neural
network family (or the "standard NN architectures", as Ben put it),
as what it usually means to most people at the current time. After
that, we can study the special cases one-by-one, to see what makes
them different and
Forwarded for Pei Wang:
-
Hi,
Recently I tried to organize my ideas about neural networks, that is,
what I like and dislike, and why. What I've got so far is a short
memo, which is put at www.cis.temple.edu/~pwang/drafts/NN-AGI.pdf for
your comments.
Title: Neural Networks
I'll investigate how to stop the problem, thanks...
ben
On 12/16/05, Brian Atkins [EMAIL PROTECTED] wrote:
If you don't have the mailing list configured to only allow subscribers to
post,
please do so. Otherwise, please figure out which subscriber is sending this
and
remove them. Looks
Hello Lucas,
Welcome to the AGI list!
Where in Brazil are you located? I ask because there happen to be a
couple folks working on the Novamente AGI project in Belo Horizonte at
www.vettalabs.com ...
-- Ben
On 12/15/05, Lucas Silva [EMAIL PROTECTED] wrote:
Hi,
I would like to introduce
This conference looks like an interesting one (especially since I'm
presenting at it ;-)
http://www.wcci2006.org/
Search the site for the Panel Session called
A Roadmap to Human-Level Intelligence
-- Ben G
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: 12 December 2005 16:00
To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Com; Bruce J. Klein;
agi@v2.listbox.com
Subject: [agi] AGISIM simulation world
Hi all,
In case you're curious, Sanjay Padmane is creating us some
Intelligence has many aspects, and in doing a practical AGI project
one has to prioritize.
In the Novamente project, we have decided that
* grounding of abstract concepts via perception and action in a body
embedded in a world
is a sufficiently useful thing that we should prioritize it, whereas
I can't speak for others, but my goal is to create AGI as a tool, not as
something sentient. I believe it is possible to do that, but that
possibility does not appeal to me. Building AGI as a passive tool is much
more important IMO.
My main interest is just the opposite of yours...
The
Well, it's a big debate really. To be on the safer side, I feel it's good to
hold the source until you are very much sure that it's safe to release it.
Sanjay
Precisely...
ben
Hi,
The purpose of this email is to announce a new
online-magazine/group-blog called Post-Interesting
http://www.post-interesting.com
which I and some friends have created.
(I don't think I announced it on this list before, but if I did and am
going senile I accept your forgiveness ;-)
-- Ben
Hi,
Obviously this has little to do with AGI and was posted to this list
by mistake, yet it is indirectly relevant to AGI. Because the
reaction that many mainstream biological scientists have to Aubrey's
work is typical of the reaction that many mainstream AI scientists
have to any work that
process.
Of course, the basic idea is that if this worked it would be much
cheaper to buy PS3's than 8-processor PC's, so a much larger
evolutionary learning farm could be constructed at a relatively modest
budget.
Thoughts?
--- Ben Goertzel
On 11/27/05, Eugen Leitl [EMAIL PROTECTED] wrote:
Link
Matt,
Hmmm ... I guess I need to be clearer about my conjectured potential
use for the Cell within Novamente or other AI systems.
I agree with most of your general sentiments about the obstacles to
using specialized architectures within AGI systems, but I don't feel
your comments answer my
Hi all,
I'm writing this message just to see if there's anyone out there who's
interested in taking up a somewhat difficult but very important
math/software project, which would be very helpful for AGI in general (and
for my Novamente project, as it happens ;) ...
I am not alone in believing