> I have briefly surveyed the research on uncertain reasoning, and found
> out that no one has a solution to the entire problem. Ben and Pei
> Wang may be working towards their solutions but a satisfactory one may
> be difficult to find.
I think the PLN / indefinite probabilities approach is a co
> Do OpenCog atoms roughly correspond to logical atoms?
Not really
> And what is the counterpart of (logic) propositions in OpenCog?
ExtensionalImplication relations I guess...
> I suggest don't use non-standard terminology 'cause it's very confusing...
So long as it's well-defined, I guess it
I'll respond to other points tomorrow or the day after (am currently
on a biz trip through Asia), but just one thing now... You say
> With NO money, none of either of our efforts stands a chance. With some
> realistic investment money, scanning would at minimum be cheap insurance
> that you will b
on a possible 'grand
> theory' of the brain that suggests that virtually all brain functions can be
> modelled with Bayesian statistics.
>
> The link (above) is a blog copy of the article in New Scientist.
>
> -dave
>
> Here are some examples in FOL:
>
> "Mary is female"
> female(mary)
Could be
Inheritance Mary female
or
Evaluation female mary
(the latter being equivalent to female(mary) )
but none of these has an uncertain truth value attached...
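To make the point concrete, here is a minimal sketch in plain Python (not the actual OpenCog API; the `Atom` and `TruthValue` classes and the (strength, confidence) pair are assumptions in the spirit of PLN) of how an Inheritance or Evaluation relation might carry an uncertain truth value, unlike the bare FOL form female(mary):

```python
from dataclasses import dataclass

# A PLN-style truth value: strength (how true) plus confidence
# (how much evidence backs that estimate). Both lie in [0, 1].
@dataclass(frozen=True)
class TruthValue:
    strength: float = 1.0
    confidence: float = 0.0

# A generic atom: a typed link over its arguments, with a truth value.
@dataclass(frozen=True)
class Atom:
    kind: str        # e.g. "Inheritance" or "Evaluation"
    args: tuple      # the terms it links
    tv: TruthValue = TruthValue()

# "Mary is female", two ways, each with an uncertain truth value attached:
inh = Atom("Inheritance", ("Mary", "female"), TruthValue(0.99, 0.9))
ev  = Atom("Evaluation", ("female", "Mary"), TruthValue(0.99, 0.9))
```

The crisp FOL statement female(mary) corresponds to the degenerate case TruthValue(1.0, 1.0); everything in between is what the FOL notation has no slot for.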
> This is a [production] "rule": (not to be confused
On Sun, Jun 1, 2008 at 2:15 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sat, 5/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> But in future, there could be impostor agents that act like
>> they have humanlike subjective experience but don't
e conscious. I believe you are a zombie. Prove me
> wrong.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>
> ---
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
logists that this device is practical to make because
>> they can't see their way past the computer problems - that many of the
>> people here on this forum could handle, even with a hangover.
>>
>> Steve Richfield
>>
>>
>>
>> -----
mark,
> What I'd rather do instead is see if we can get a .NET parallel track
> started over the next few months, see if we can get everything ported, and
> see the relative productivity between the two paths. That would provide a
> provably true answer to the debate.
Well, it's an open-source p
ief" is that seeing
> what happens will cause a migration -- but I'm not invested in that belief
> and would be happy and see huge benefits either way.
>
> Mark
>
> P.S. Thank you for the forward Ben.
>
> - Original Message -
> From: Ben Goertzel
>
On Mon, May 26, 2008 at 8:33 PM, J. Andrew Rogers
<[EMAIL PROTECTED]> wrote:
> Replying to myself,
>
> I'll let Mark have the last word since, after all, it is *his* project and
> not mine. :-)
I assume that last sentence was sarcastic ;-)
Of course, while Mark is a valued participant in OpenCog,
Mark,
>>> For OpenCog we had to make a definite choice and we made one. Sorry
>>> you don't agree w/ it.
>
> I agree that you had to make a choice and made the one that seemed right for
> various reasons. The above comment is rude and snarky, however --
> particularly since it seems to come *becau
> Ben and Peter. Do you plan to sell your systems to weapons firms if they
> show an interest?
It is unlikely that I would ever sell an AI system to be used to
control a weapon
However, it's easy enough to imagine cases where this would be the best thing
to do, e.g. when an obviously evil power
ing tools and
> infrastructure.
>
> Can you find anyone who is familiar with both .NET 3.5 and Linux/C++ who is
> willing to claim otherwise?
>
> What is your reason for using C++? Other than the fact that porting your
> application is going to be expensive, I'm not sure
> One of the things that I've been tempted to argue for a while is an entirely
> alternate underlying software architecture for OpenCog -- people can then
> develop in the architecture that is most convenient and then we could have
> people cross-port between the two. I strongly contend that the c
> Please, if you're going to argue something --
> please take the time to argue it and don't pretend that you can magically
> solve it all with your "guesses" (I mean, intuition).
Time for mailing list posts is scarce for me these days, so sometimes I post
a conclusion w/out the supporting arguments.
On Sun, May 25, 2008 at 10:42 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
>> My own view is that our state of knowledge about AGI is far too weak
>> for us to make detailed
>> plans about how to **ensure** AGI safety, at this point
>
> I disagree strenuously. If our arguments will apply to *all* int
25, 2008 at 6:26 AM, Panu Horsmalahti <[EMAIL PROTECTED]> wrote:
> What is your approach on ensuring AGI safety/Friendliness on this project?
>
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller
>
Richard wrote:
> Then, when we came back from the break, Ben Goertzel announced that the
> roundtable on symbol grounding was cancelled, to make room for some other
> discussion on a topic like "the future of AGI", or some such. I was
> outraged by this. The subse
Richard wrote:
> My god, Mark: I had to listen to people having a general discussion of
> "grounding" (the supposed theme of that workshop) without a single person
> showing the slightest sign that they had more than an amateur's perspective
> on what that concept actually means.
I guess you are
Loosemore wrote:
> I hear people enthusing about systems that are filled with holes that were
> discovered decades ago, but still no fix. I read vague speculations and the
> use of buzzwords ('Theory of Mind'!?). I see papers discussing narrow AI
> projects.
I suppose there was all that at AGI-
Hi,
>Somebody could write an excellent paper about the
> potential pitfalls of such an approach (detail, fidelity, deep causality
> issues behind appearance, function, and inter-object + inter-feature
> relationships, and so on). If nobody else is working in detail on
> publishing such an analysi
> requirements and express them as a fitness function that produces the
> desired results in a way that looks realistic."
>
> Bob/ Ben Goertzel <[EMAIL PROTECTED]>:
>
>
>
> >
> > > If you gathered data about how people move in a certain context, using
&g
>
>
>
> On Thu, May 1, 2008 at 5:39 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > Now this looks like a fairly AGI-friendly approach to controlling
> > animated characters ... unfortunately it's closed-source and
> > proprietary though...
> &
Now this looks like a fairly AGI-friendly approach to controlling
animated characters ... unfortunately it's closed-source and
proprietary though...
http://en.wikipedia.org/wiki/Euphoria_%28software%29
ben
And so, maybe some houses fall down ;-)
>
> But not many do. The combination of rigorous formulas applying to
> restrictive
> cases, together with intuition telling you where to apply what formulas,
> works
> OK.
>
> Anyway this is a total digression, and I'm do
> I said and repeat that we can "engineer the complexity out of intelligence"
> in the Richard Loosemore sense.
> I did not say and do not believe that we can "engineer the complexity out
> of intelligence" in the Santa Fe Institute sense.
OK, gotcha...
Yeah... IMO, complexity in the sense you
ah, you seem to be using the word intuition where I use the words "rules
> of thumb". An interesting distinction and one that we probably should both
> remember . . . .
On Sun, Apr 27, 2008 at 5:51 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
> > Engineering in the real world is nearly always a mixture of rigor and
> > intuition. Just like analysis of complex biological systems is.
> >
>
> AIEe! NO! You are clearly not an engineer because a true engineer
> No: I am specifically asking for some system other than an AGI system,
> because I am looking for an external example of someone overcoming the
> complex systems problem.
The specific criteria you've described would seem to apply mainly to living
systems ... and we just don't have that much kn
Richard,
> Question: "How many systems do you know of in which the system elements
> are governed by a mechanism that has all four of these, AND where the system
> as a whole has a large-scale behavior that has been shown (by any method of
> "showing" except detailed simulation of the system) to
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
>
> Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26. April 2008 19:54
>
>
> > Yes, truly general AI is only possible in the case of infinite
> > processing power, which is
> &
Richard,
> How does this relate to the original context in which I cited this list
> of four characteristics? It looks like your comments are completely outside
> the original context, so they don't add anything of relevance.
I read the thread and I think my comments are relevant
> Let me bri
Ummm... just a little note of warning from the list owner.
Tintner wrote:
> > So I await your geometric solution to this problem - (a mere statement of
> principle will do) - with great interest. Well, actually no. Your answer is
> broadly predictable - you 1) won't have any idea here 2) will hav
& White if you're not a big gaming person?
>
> - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]>
>
> To:
> Sent: Saturday, April 26, 2008 2:14 PM
> Subject: **SPAM** Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S
> COMPLEX
Richard,
I've been too busy to participate in this thread, but, now I'll chip
in a single comment,
anyways... regarding the intersection btw your thoughts and Novamente's
current work...
You cited the following 4 criteria,
> > "- Memory. Does the mechanism use stored information about what it w
On Sat, Apr 26, 2008 at 10:03 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> In my opinion you can apply Gödel's theorem to prove that 100% AGI is not
> possible in this world
> if you apply it not to a hypothetical machine or human being but to the
> whole universe which can be assumed to
On Wed, Apr 23, 2008 at 11:29 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Ben/Joshua:
>
> How do you think the AI and AGI fields relate to the embodied & grounded
> cognition movements in cog. sci? My impression is that the majority of
> people here (excluding you) still have only limited awaren
On Wed, Apr 23, 2008 at 5:21 AM, Joshua Fox <[EMAIL PROTECTED]> wrote:
>
> To return to the old question of why AGI research seems so rare, Samsonovich
> et al. say
> (http://members.cox.net/alexei.v.samsonovich/samsonovich_workshop.pdf)
>
> 'In fact, there are several scientific communities pursui
main necessary to pay
> programmers to write programs, at least some of the time. You can't
> always rely upon voluntary effort, especially when the problem you
> want to solve is fairly obscure.
>
>
>
>
>
>
> On 19/04/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
There seems no
logical reason why one can't have precise, robot-simulator type
control of agents in virtual worlds... though I understand that
realizing the software integration involved in integrating OpenSim and
Player might involve numerous technical difficulties...
Thx
Ben G
--
Ben Goertzel
On Sat, Apr 19, 2008 at 12:51 PM, Charles D Hixson
<[EMAIL PROTECTED]> wrote:
> Ed Porter wrote:
>
> > WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
> >
> >
There are no apparent missing conceptual pieces in the Novamente approach...
Hopefully this will become clear even from the OpenCog documen
> Translation: We all (me included) now accept as reasonable that, in order to
> briefly earn a living wage, we must develop radically new and useful
> technology and then just give it away.
...
> Steve Richfield
The above is obviously a "straw man" statement ... but I think it
**is** true the
YKY,
> > I believe I've solved the fundamental issues behind the Novamente/OpenCog
> > design...
>
> It's hard to tell whether you have really solved the AGI problem, at
> this stage. ;)
Understood...
> Also, your AGI framework has a lot of non-standard, home-brew stuff
> (especially the k
On Fri, Apr 18, 2008 at 5:35 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Pei: I don't really want
>
> a big gang at now (that will only waste the time of mine and the
> others), but a small-but-good gang, plus more time for myself ---
> which means less group debates, I guess. ;-)
>
> Altern
> > Potentially, though, massively distributed, collaborative open-source
> > software development could render your first premise false ...
> >
>
> Though it is unlikely to do so, because collaborative open-source
> projects are best suited to situations in which the fundamental ideas behind
ly when the
> funding is no longer needed anymore.
>
> Q.E.D. :-(
>
> Pei
>
On Thu, Apr 17, 2008 at 2:42 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
> > Really, work on the AtomTable has been a small percentage of work on
> > the Novamente Cognition Engine ... and, the code running the AtomTable is
> > now pretty much the same as it was in 2001 (though it was tweaked to ma
Hi Mark,
> This is, by the way, my primary complaint about Novamente -- far too much
> energy, mind-space, time, and effort has gone into optimizing and repeatedly
> upgrading the custom atom table that should have been built on top of
> existing tools instead of being built totally from scratch.
0.1% the
> cpu. But being a researcher is all learning -- so each one would need the
> whole shebang for each copy. A decade of Moore's Law ... and at least that of
> AGI research.
>
> Josh
>
>
>
> We may well see a variety of proto-AGI applications in different
> domains, sorta midway between narrow-AI and human-level AGI, including
> stuff like
>
> -- maidbots
>
> -- AI financial traders that don't just execute machine learning
> algorithms, but grok context, adapt to regime changes
be able to understand ("appreciate") his/her
> genius anyhow.
> >
> > The only way to deal with postings like this is to IGNORE THEM. Don't
> rise to the bait. Like a bad cold, they will be irritating for a while, but
> they will, eventually, go away.
> >
Peruse the video:
http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related
Of course, they are only showing the best stuff. And I am sure there
is plenty of work left to do. But from the variety of behaviors that
are displayed, I would say that the problem of quadruped walking is
surprisin
Ben G
On Sat, Apr 5, 2008 at 8:46 AM, Evgenii Philippov <[EMAIL PROTECTED]> wrote:
>
> On Sat, Apr 5, 2008 at 7:37 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > For instance, I'll be curious whether ADIOS's automatically inferred
> > grammars can deal wit
ation of who we are and what
makes us distinctive.
> Thank you for your politeness and your insightful comments. I am
> going to quit this group because I have found that it is a pretty bad
> sign when the moderator mocks an individual for his religious beliefs.
FWIW, I wasn't joking about your algorithm's putative
divine inspiration in my rol
P. <[EMAIL PROTECTED]> wrote:
> Is it running inside Second Life already, or is it another environment? (sorry
> I don't know SL very well)
>
>
>
> On Sat, Mar 29, 2008 at 11:40 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> > Nothing has been publicly r
contain inconsistencies,
> but you are going to have that problem with any inductive system.) If you
> are going to be using a rational-based AGI method, then you are going to
> want some theories that exhibit critical reasoning. These kinds of theories
> might turn out to be the keystone i
his kind of discussion then
> do just that: Stay out of it.
> Jim Bromer
>
does it cost money or
> something. Is it set up already?
> Jim Bromer
>
>
>
> On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >
> >
> >
> >
> http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
> 4. In fact. I would suggest that AGI researchers start to distinguish
> themselves from narrow AGI by replacing the over ambiguous concepts from AI,
> one by one. For example:
>
> knowledge representation = world model.
> learning = world model creation
> reasoning = world model simulation
> goal
http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
and the nature of general intelligence.
> >>
> >
> > Mike you are 100% potentially right with a margin of error of 110%. LOL!
> >
> > Seriously Mike how do YOU indicate approximations? And how are you
> > differentiating general and specific? And declaring r
> So if I tell you to "handle" an object, or a piece of business, like say
> "removing a chair from the house" - that word "handle" is open-ended and
> gives you vast freedom within certain parameters as to how to apply your
> hand(s) to that object. Your hands can be applied to move a given bo
rt
of fun weekend ;-)
-- Ben
On Wed, Mar 26, 2008 at 10:43 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> OK... I just burned an hour inserting more links and content into
>
> http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook
>
> I'm burnt out on it for a while, the
ers' references ...
And then I'll save a lot of time during the next year, because when
someone emails me and asks me what they should read to get
up to speed on the general thinking in the AGI field, I'll just point
them to the non-textbook ;-)
-- Ben
BTW I improved the hierarchical organization of the TOC a bit, to
remove the impression that it's just a random grab-bag of topics...
http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook
ben
Hi Stephen,
> Ben,
> Wikipedia has significant overlap with the topic list on the AGIRI Wiki. I
> propose for discussion the notion that the AGIRI Wiki be content-compatible
> with Wikipedia along two dimensions:
>
> license - authors agree to the GNU Free Documentation License
I have no problem
pe it
into a textbook
-- Ben
On Wed, Mar 26, 2008 at 9:49 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
> Hi Ben,
>
> I have a publisher who would love to publish the result of the wiki as a
> textbook if you are willing.
>
> Mark
>
>
>
> ----- Orig
ss work to correct it than to make one, right?
>
> Hey - whatever helps. For me, it's a win-win. It would help me, and it
> would help accomplish what you guys are trying to do.
>
> Let me know,
> ~Aki
>
>
>
> On Tue, Mar 25, 2008 at 10:40 PM, Ben Goertzel <
I'm working on
> > (http://nars.wang.googlepages.com/gti-summary).
> >
> > Compared to yours, mine will contain less math and algorithms, but
> > more psychology and philosophy.
> >
> > I'd like to see what Richard and others want to propose. We shouldn
On Tue, Mar 25, 2008 at 11:07 PM, Aki Iskandar <[EMAIL PROTECTED]> wrote:
> Thanks Ben. That is really exciting stuff / news. I'm looking forward to
> OpenCog.
>
> BTW - is OpenCog mainly in C++ (like Novamente) ? Or is it translations (to
> Java, or other languages) of concepts so that others ca
> I actually recently purchased Artificial Intelligence: A Modern
> Approach - but only because I did not know where else to start.
It's a very good book ... if you view it as providing insight into various
component technologies of potential use for AGI ... rather than as saying
very much direct
> willing to put up and host an "AGI Wiki" if this community would find it of
> use. I'd need a few weeks - because I don't have the time right now - but
> it is a worthwhile endeavor, and I'm happy to do it.
> >
> > ~Aki
> >
> >
> >
>
> I'll try to find the time to provide my list --- at this moment, it
> will be more like a reading list than a textbook TOC.
That would be great -- however I may integrate your reading
list into my TOC ... as I really think there is value in a structured
and categorized reading list rather than
On Tue, Mar 25, 2008 at 9:39 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Richard,
>
>
> > Unfortunately I cannot bring myself to believe this will help anyone new
> > to the area.
> >
> > The main reason is that this is only a miscellaneous list of t
Richard,
> Unfortunately I cannot bring myself to believe this will help anyone new
> to the area.
>
> The main reason is that this is only a miscellaneous list of topics,
> with nothing to indicate a comprehensive theory or a unifying structure.
> I do not ask for a complete unified theory,
xai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
>
I,
some narrow AI, some psychology, some neuroscience, some mathematics,
etc.
***
-- Ben
as little hand-waving, faith, or
> bigotry as possible in my conclusion). To do that properly, I am waiting
> for your book on Probabilistic Logic Networks to be published. Amazon says
> July 2008... is that date correct?
>
> Thanks!
>
> ________
>
Hi Aki,
> Even as a pure scientist, you can
> accomplish more in research by producing wealth, than depending on gov't
> grants. I say gov't grants because private investment is probably years
> away from now. The topic of financing got a lot of attention at AGI 08.
>
Well, if you're an AGI res
> Now, let me ask you a question: Do you believe that all AI / AGI
> researchers are toiling over all this for the challenge, or purely out of
> interest? I doubt that as well. Surely there are those elements as drivers
> - BUT SO IS MONEY.
Aki, you don't seem to understand the psychology of th
http://www.codeplex.com/singularity
(opencog.org).
Some proposal ideas are found here
http://opencog.org/wiki/Ideas
but we're quite open to other suggestions as well, in the freewheeling spirit
of GSOC...
Thanks
Ben
... or whatever the set of objects in the toy
world may be...
This is the danger of toy test environments, be they in virtual worlds or
physical robotics...
ben g
On Thu, Mar 13, 2008 at 12:35 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Unless the details of that modified Turing Test ar
any opinions on that.
> >
> > Ed Porter
> >
> http://technology.newscientist.com/channel/tech/dn13446-virtual-child-passes-mental-milestone-.html
>
> Josh
>
> > An attractor is a set of states that are repeated given enough time. If
> > agents are killed and not replaced, you can't return to the current state.
>
> False. There are certainly attractors that disappear, first
> seen by Ruelle, Takens, 1971 its called a "blue sky catastrophe"
>
> ht
> The three most common of these assumptions are:
>
>1) That it will have the same motivations as humans, but with a
> tendency toward the worst that we show.
>
>2) That it will have some kind of "Gotta Optimize My Utility
> Function" motivation.
>
>3) That it will have an intrinsic