About Friendly AI...
>
> Let me put it this way: I would think anyone in a position to offer funding
> for this kind of work would require good answers to the above.
>
> Terren
My view is a little different. I think these answers are going to come out
of a combination of theoretical advances w
***
So it could be a specific set of states? To specify long term growth
as a goal, wouldn't you need to be able to do an abstract evaluation
of how the state *changes* rather than just the current state?
***
yes, and of course a GroundedPredicateNode could do that too ... the system
can recall i
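The idea being described — a predicate that scores how the state *changes* across recalled states, rather than a single snapshot — can be sketched as follows. All names here (`GrowthPredicate`, `complexity`) are hypothetical illustrations, not the actual OpenCog GroundedPredicateNode API:

```python
# Hypothetical sketch (not the real OpenCog API): a grounded predicate
# that scores long-term growth by comparing successive recalled states
# rather than evaluating a single state in isolation.

def complexity(state):
    """Toy proxy for 'growth': the number of distinct concepts known."""
    return len(set(state))

class GrowthPredicate:
    def __init__(self):
        self.history = []                 # recalled past states

    def evaluate(self, state):
        self.history.append(list(state))
        if len(self.history) < 2:
            return 0.0                    # nothing to compare against yet
        prev, curr = self.history[-2], self.history[-1]
        delta = complexity(curr) - complexity(prev)
        return max(0.0, min(1.0, delta / 10.0))   # squash into [0, 1]

g = GrowthPredicate()
g.evaluate(["a", "b"])                    # first snapshot: nothing to compare
score = g.evaluate(["a", "b", "c", "d"])  # complexity rose by 2 -> 0.2
```

The point of the sketch is only that the evaluation is over a state *trajectory*, which is what distinguishes a growth goal from a goal over a fixed set of states.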
>
>
> Have you implemented a long term growth goal atom yet?
Nope, right now we're just playing with virtual puppies, who aren't
really explicitly concerned with long-term growth
(plus of course various narrow-AI-ish applications of OpenCog components...)
> Don't they have
> to specify a speci
>
>
> Isn't it an evolutionary stable strategy for the modification system
> module to change to a state where it does not change itself?
Not if the top-level goals are weighted toward long-term growth
> Let me
> give you a just so story and you can tell me whether you think it
> likely. I'd
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson <[EMAIL PROTECTED]>wrote:
> 2008/8/29 Ben Goertzel <[EMAIL PROTECTED]>:
> >
> > About recursive self-improvement ... yes, I have thought a lot about it,
> but
> > don't have time to write a huge discours
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson
On Fri, Aug 29, 2008 at 6:53 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:
Ben,
It looks like what you've thought about is aspects of the information
processing side of RSI but not the knowledge side. IOW you have thought
about the technical side but not about how you progress from one domain of
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
nd interdependence of all systems.
All are welcome...
-- Forwarded message --
From: Monica <[EMAIL PROTECTED]>
Date: Thu, Aug 28, 2008 at 9:51 PM
Subject: [ai-94] New Extraordinary Meetup: Ben Goertzel, Novamente
To: [EMAIL PROTECTED]
Announcing a new Meetup for Bay Area Artificial Intelligence Meetup
hat value growth
and spontaneity ... including growth of their goal systems in unpredictable,
adaptive ways
"
-- Ben G
On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> About play... I would argue that it emerges in any sufficiently
> generally
individual level.
ben g
On Tue, Aug 26, 2008 at 9:49 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>
> Examples of the kind of similarity I'm thinking of:
>
> -- The analogy btw chess or go and military strategy
>
> -- The analogy btw "roughhousing" and a
>
>
> If I do my job right, my AGI will have no "sense of self."
I have doubts that is possible, though I'm sure you can make an AGI with a
very different "sense of self" than any human has.
My reasoning:
1)
To get to a high level of intelligence likely requires some serious
self-analysis and
be you could give an example of what you mean by similarity
Note that in this view play has nothing to do with having a body. An AGI
concerned solely with mathematical theorem proving would also be able to
play...
On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> About play... I would argue that it emerges in any
l lot to
> discuss here - it hasn't all been covered.
On Thu, Aug 14, 2008 at 6:59 AM, Mike Tintner <[EMAIL PROTECTED]>wrote:
> Jim:I know that
> there are no solid reasons to believe that some kind of embodiment is
> absolutely necessary for the advancement of agi.
>
> I want to concentrate on one dimension of this: precisely the "solid"
> dimension
intension/extension distinction is getting swept behind the scenes here,
into the definition of InheritanceLink...
ben
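For readers unfamiliar with the distinction just mentioned, here is a toy (non-OpenCog) illustration: extensional inheritance compares the *member* sets of two concepts, while intensional inheritance compares their *property* sets. All data and names below are invented for illustration:

```python
# Toy illustration (not OpenCog code) of the intension/extension
# distinction: extensional inheritance compares member sets, while
# intensional inheritance compares property sets. Data is invented.

members = {"bird": {"robin", "penguin"}, "flier": {"robin", "bat"}}
props = {"bird": {"wings", "feathers"}, "flier": {"wings"}}

def overlap(a, b):
    """Fraction of set `a` that also lies in set `b`."""
    return len(a & b) / len(a) if a else 0.0

ext = overlap(members["bird"], members["flier"])  # 0.5: half of birds fly
inten = overlap(props["flier"], props["bird"])    # 1.0: every flier-property
                                                  # is also a bird-property
```

An InheritanceLink that mixes both measures into one number is what "sweeps the distinction behind the scenes."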
YKY asked:
> I'm interested in how the rules are "fetched" from memory, and how the
> variables get instantiated, etc...
>
> How would you represent the given facts:
> "John is male"
> "John is unmarried"
> and then perform the inference to get
> "John is a bachelor"?
>
> Sorry if
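One minimal way to make the requested inference concrete is a toy forward-chaining step over fact tuples; this is a hedged sketch for illustration, not PLN's actual rule-fetching or unification machinery:

```python
# A toy forward-chaining step, purely to make the requested inference
# concrete; this is not PLN's actual rule-fetching or unification code.

facts = {("male", "John"), ("unmarried", "John")}

def apply_bachelor_rule(facts):
    """Rule male(X) & unmarried(X) -> bachelor(X), for every binding of X."""
    derived = set()
    for pred, x in facts:
        if pred == "male" and ("unmarried", x) in facts:
            derived.add(("bachelor", x))
    return facts | derived

facts = apply_bachelor_rule(facts)   # now contains ("bachelor", "John")
```

The interesting part of the question is exactly what this sketch glosses over: how the right rule is *found* in a large memory, rather than hard-wired into the loop.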
brain-based AGI system could
of course be built within the OpenCog Framework, which would be good fun...
ben G
On 8/10/08, John LaMuth <[EMAIL PROTECTED]> wrote:
>
> - Original Message -
>
> *From:* Ben Goertzel <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *
> Or even simpler problems, like : how were you to handle the angry Richard
> recently? Your response, and I quote: "Aaargh!" (as in "how on earth do I
> calculate my probabilities and Bayes?" and "which school of psychological
> thought is relevant here?") Now you're talking AGI. There is no ratio
On Sun, Aug 10, 2008 at 5:52 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:
> Will,
>
> Maybe I should have explained the distinction more fully. A totalitarian
> system is one with an integrated system of decisionmaking, and unified
> goals. A "democratic", "conflict" system is one that takes decision
ised and unhappy when it happens.
> Funding and support questions and all.
> andi
>
>
>
> And I've said it before, but it bears repeating in this context. Real
> intelligence requires that mistakes be made. And that's at odds with
> regular programming, because you are trying to write programs that don't
> make mistakes, so I have to wonder how serious people really would be
> abo
On Sun, Aug 10, 2008 at 9:02 AM, Mike Tintner <[EMAIL PROTECTED]>wrote:
> Ben: but, from a practical perspective, it seems more useful to think
> about minds that are roughly similar to human minds, yet better adapted to
> existing computer hardware, and lacking humans' most severe ethical and
> mo
fact this would be
quite desirable.
-- Ben G
ntelligence
>> indeed.
a. But again, brains are not
just soups of heterogenous processes -- the right high-level cognitive
architecture is required.
-- Ben Goertzel ... novamente.net agiri.org singinst.org goertzel.org
opencog.org
On Sat, Aug 9, 2008 at 1:01 PM, rick the ponderer <[EMAIL PROTECTED]> wrote:
On Sat, Aug 9, 2008 at 9:30 AM, Mike Tintner <[EMAIL PROTECTED]>wrote:
> Ben,
>
> I clearly understood/understand this. My point is: are you guys' notions
> of non-human intelligence anything more than sci-fi fantasy as opposed to
> serious invention? To be the latter, you must have some half-co
On Sat, Aug 9, 2008 at 7:35 AM, Mike Tintner <[EMAIL PROTECTED]>wrote:
> Brad:
> Sigh. Your point of view is heavily biased by the unspoken assumption that
> AGI
> must be Turing-indistinguishable from humans. That it must be AGHI.
>
> Brad,
>
> Literally: "what on earth are you talking about?"
much, but I thought a
bit about how they might fit into the PLN framework ... thoughts are in
attached document
This is technical stuff and the attached doc is written for someone who
knows both PLN and default/epistemic logic, so if you're baffled, no worries
;-)
ben
>>
>> -- I see that simple embodiment is not anywhere near enough
>> to put human social contact into the reach of direct experience.
>> Embodiment will help AGI understand "chair" and "table";
>> it will not help it understand vindictiveness, slander.
>>
>
> True
>
Well, maybe I spoke too
>
>
> -- I don't see what benefit embodiment brings to the creation
> of an agi scientist/engineer, whereas reading is critical.
> Mechanical awareness -- not so much -- AGI could have
> "immediate" mechanical awareness of not just 3D, but also
> 4D, 5D, etc. spaces.
I feel that simple
On Tue, Aug 5, 2008 at 7:45 AM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:
>
> On 8/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > Yes, but in PLN/ OpenCogPrime backward chaining *can* create hypothetical
> logical relationships and then seek to estimate their tr
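The backward-chaining idea being described — posing a goal as a hypothesis and recursing on its premises — can be sketched as below. The rule table and atoms are invented, and real PLN estimates truth values for the hypothesized relationships rather than returning plain booleans:

```python
# Toy backward chainer: pose the goal as a hypothesis, then recurse on
# premises. Rules map a conclusion to alternative premise lists. Real
# PLN estimates truth values rather than returning plain booleans.

rules = {
    "wet(grass)": [["rained"], ["sprinkler_on"]],
    "slippery(path)": [["wet(grass)"]],
}
known = {"sprinkler_on"}

def prove(goal, depth=5):
    if goal in known:
        return True
    if depth == 0:
        return False                      # give up on deep hypotheses
    return any(all(prove(p, depth - 1) for p in premises)
               for premises in rules.get(goal, []))

result = prove("slippery(path)")   # supported via wet(grass) <- sprinkler_on
```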
> When do you think Novamente will be ready to "go out" and effectively
> learn from (/interract with) environments not fully controlled by the
> dev team?
>
>
I wish I could say tomorrow, but realistically it looks like it's gonna be
2009 ... hopefully earlier rather than later in the year but I'
> Brad
ata, and asked it to guess what
> the next item in the series would be, what sort of process would it
> employ?
>
> Thanks,
> --Abram Demski
>
> On 8/4/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > On Mon, Aug 4, 2008 at 6:10 PM, YKY (Yan King Yin) <
> > [EM
On Mon, Aug 4, 2008 at 6:10 PM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:
> On 8/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > As noted there, my impression is that PILP could be implemented within
> OpenCog's PLN backward chainer (currently bei
OpenCog by Joel
Pitt, from the Novamente internal codebase) via writing a special scoring
function ...
-- Ben G
-- Forwarded message ------
From: Ben Goertzel <[EMAIL PROTECTED]>
Date: Fri, Jun 6, 2008 at 9:27 AM
Subject: Re: logical implication (was: modus ponens)
To: "Y
riously analyzed every one of
>>>> them.
>>>>
>>> This is off the cuff but, Richard, if 1/8 of your messages included
>>> the word 'stupid', this could explain why the general consensus is
>>> that you've been insulting people
ething like "That is
> just one example of how he pulls conclusions out of thin air. The first time
> I read this paper I found the whole thing too ridiculous to read after the
> first few times this happened", this behavior of mine is just as disgraceful
> as comments directed str
> So there you go. As you say, the challenge is to do this and then give
> reasons why the rules were picked, and also to do a comparison with choosing
> rules at random.
>
> If people find it really too difficult to get the frequency ratio, I'd be
> happy enough to see j
On Sun, Aug 3, 2008 at 11:05 AM, Derek Zahn <[EMAIL PROTECTED]> wrote:
> I personally think that mailing lists were a decent medium for
> conversations in 1990, but forums are much better -- easily available
> historical context for a conversation, searchable topic history, and so on.
> I think yo
pointers to some of the
> theories that get repeated endlessly, together with encouragement to the
> posters to just post the FAQ's URL rather than repeating the entire theory,
> might reduce the repetition. (Wasn't there a wiki area exactly for that
> started a while ago?)
>
On Sun, Aug 3, 2008 at 7:21 AM, Mike Tintner <[EMAIL PROTECTED]>wrote:
> Ben: I think that an engineering-based approach will succeed first, just as
> we succeeded in building airplanes first, rather than evolving a birdlike
> flying machine out of a prebiotic molecular soup...
> Ben,
>
> You've go
oid "me too" posts -- but for those who felt my last
>> e-mail was too long, this is the essence of my argument (and very well
>> expressed).
>>
>> - Original Message - From: "Vladimir Nesov" <[EMAIL PROTECTED]>
>> To:
>> Sent:
46 AM, Richard Loosemore <[EMAIL PROTECTED]>wrote:
> Ben Goertzel wrote:
>
>>
>> I think Ed's email was a bit harsh, but not as harsh as many of Richard's
>> (which are frequently full of language like "fools", "rubbish" and so forth
he
> started except to shout down Richard's criticisms. Personally, I have given
> up on posting content to this list. Some moderation is strongly suggested.
> If it includes banning me -- so be it.
>
> Mark
>
> - Original Message -
> *From:* Ben Goertzel <[EMA
Hector
you say
In other words, there is nothing to do about AI or AGI but to look at the
> systems we have already around. I do think that any of those simple systems
> such as CA can achieve AGI of the kind we expect without having to do
> anything else! From my point of view it is just a m
>
>
> I would be perfectly happy if you simply finished our discussions with a
> statement that your scientific intuition tells you that the problem is not
> as serious as I think. You have sometimes done this, and I have gracefully
> subsided.
OK. My scientific intuition tells me that the "com
I is a terribly
similar problem to discovering the Game of Life...
Good night...
Ben
On Sat, Aug 2, 2008 at 10:54 PM, Richard Loosemore <[EMAIL PROTECTED]>wrote:
> Ben Goertzel wrote:
>
>>
>> Well, there may have been a lot of trial and error in figuring out which
>>
d pretty much as planned and whose
>>> behavior is reasonably well understood (although not totally understood, as
>>> is nothing that is truly complex in the non-Richard sense), and whose
>>> overall behavior has been as chosen by design (with a little experimentation
ed the evidence.
>
>
>
> Richard Loosemore
>
>
eneral Discussion List" group.
> To post to this group, send email to [EMAIL PROTECTED]
> To unsubscribe from this group, send email to
> [EMAIL PROTECTED]<[EMAIL PROTECTED]>
> For more options, visit this group at
> http://groups.google.com/group/opencog?hl=en
sign up for that
list and let's chat there...
I'm fine to do more general discussions on this list though.
thx
Ben
-- Forwarded message ------
From: Ben Goertzel <[EMAIL PROTECTED]>
Date: Wed, Jul 30, 2008 at 10:11 PM
Subject: OpenCog Prime wikibook and roadmap posted
On Mon, Jul 28, 2008 at 12:14 PM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:
> On 7/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > PLN uses confidence values within its truth values, with a different
> underlying semantics and math than NARS; but t
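As a purely illustrative sketch of the flavor of evidence-based truth values: both systems attach a confidence to a strength, with confidence growing as evidence accumulates. The formula `c = n / (n + k)` below is the simple NARS-style confidence with personality constant `k`; PLN's indefinite probabilities use a different, interval-based semantics, as noted above:

```python
# Illustrative only: a (strength, confidence) truth value where
# confidence c = n / (n + k) grows with evidence count n. This mimics
# the NARS-style confidence; PLN's indefinite probabilities use a
# different (interval-based) semantics and math.

K = 1.0   # evidence-horizon ("personality") constant, assumed value

def truth_value(positive, total):
    """Return (strength, confidence) from observed evidence counts."""
    strength = positive / total if total else 0.5
    confidence = total / (total + K)
    return strength, confidence

s, c = truth_value(9, 10)   # 9 positive out of 10 observations
# s = 0.9; c = 10/11, approaching 1.0 as evidence accumulates
```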
On Mon, Jul 28, 2008 at 11:10 AM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:
> On 7/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > Your inference trajectory assumes that "cybersex" and "STD" are
> probabilistically independent withi
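The numbers below are invented purely to illustrate why an unexamined independence assumption distorts a joint probability:

```python
# Toy numbers, invented purely to show why assuming independence can
# mislead an inference trajectory:
#   P(A and B) = P(A) * P(B | A)  in general,
#   P(A and B) = P(A) * P(B)      only if A and B are independent.

p_a = 0.3            # P(A), the first event       (toy value)
p_b = 0.1            # P(B), the second event      (toy value)
p_b_given_a = 0.25   # P(B | A): here A makes B considerably more likely

joint_if_independent = p_a * p_b        # 0.03  (the shortcut's answer)
joint_actual = p_a * p_b_given_a        # 0.075 (2.5x larger)
```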
monotonicity using
> probabilistic networks?
>
> YKY
iticality is
> explained by the late Per Bak in _How Nature Works_, a short, excellent read
> and a brilliant example of scientific and mathematical progress in the realm
> of complexity.
>
> --- On Mon, 6/30/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> I agree that
m infinite ... the evolutionary process
itself may be endlessly creative, but in that sense so may be the
self-modifying process of an engineered AGI ...
-- Ben G
On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> --- On Mon, 6/30/08, Ben Goertzel <[EMAIL PR
on without doing an awful lot of
> computation.
>
> And what is our mind but the weather in our brains?
>
> Terren
>
> --- On Sun, 6/29/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> From: Ben Goertzel <[EMAIL PROTECTED]>
>> Subject: Re: [agi
Richard,
I think that it would be possible to formalize your "complex systems argument"
mathematically, but I don't have time to do so right now.
> Or, then again ... perhaps I am wrong: maybe you really *cannot*
> understand anything except math?
It's not the case that I can only understand
> The argument itself is extremely rigorous: on all the occasions on which
> someone has disputed the rigorousness of the argument, they have either
> addressed some other issue entirely or they have just waved their hands
> without showing any sign of understanding the argument, and then said "..
Richard,
> So long as the general response to the complex systems problem is not "This
> could be a serious issue, let's put our heads together to investigate it",
> but "My gut feeling is that this is just not going to be a problem", or
> "Quit rocking the boat!", you can bet that nobody really w
On Sat, Jun 28, 2008 at 4:13 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Ed Porter wrote:
>>
>> I do not claim the software architecture for AGI has been totally solved.
>> But I believe that enough good AGI approaches exist (and I think Novamente
>> is one) that when powerful hardware avail
>
> But enough of that, let's get to the meat of it: Are you arguing that the
> function that is a neuron is not an elementary operator for whatever
> computational model describes the brain?
>
We don't know which "function that describes a neuron" we need to use --
are Izhikevich's nonlinear dyn
should be possible to engineer them away on computational substrate
> when we have a high-level model of what they are actually for.
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> Instead of talking about what you would do, do it.
>
> I mean, work out your ideal way to solve the questions of the mind and share
> it with us after you've have found some interesting results.
>
> Jim Bromer
> While the details vary widely, Mike and I were addressing the very concept
> of writing code to perform functions (e.g. "thinking") that apparently
> develop on their own as emergent properties, and in the process foreclosing
> on many opportunities, e.g. developing in variant ways to address pro
> The truth is, one of the big problems in
> the field is that nearly everyone working on a concrete AI system has
> **their own** particular idea of how to do it, and wants to proceed
> independently rather than compromising with others on various design
> points. It's hardly a herd mentality --
e been around as long as I have
> been, and hence they certainly should know better since they have doubtless
> seen many other exuberant rookies fall into similar swamps of programming
> complex systems without adequate analysis.
>
> Hey you guys with some gray hair and/or bald spots
> TL: ?
> FOL: woman(X) -> long_hair(X)
>
> I know your term logic is slightly different from Fred Sommers'. Can
> you fill in the TL parts and also attach indefinite probabilities?
>
> On 6/3/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> First of all, the *tractability* of your algorithm depends on
> heuristics that you design, which are separable from the underlying
> probabilistic logic calculus. In your mind, these 2 things may be
> mixed up.
>
> Indefinite probabilities DO NOT imply faster inference.
> Domain-specific heuris
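The separation being argued for can be shown with a toy deductive closure: the rule set fixes *what* is derivable, while the control heuristic (here, rule ordering) only changes how the same closure is reached. Everything below is an invented illustration, not either system's code:

```python
# Toy deductive closure separating the logic from the control heuristic:
# the rule set fixes *what* is derivable; the ordering heuristic only
# affects how quickly the same closure is reached.

rules = [("A", "B"), ("B", "C"), ("A", "D")]   # premise -> conclusion

def closure(facts, order):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in order(rules):
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

forward = closure({"A"}, lambda rs: rs)          # rules in listed order
backward = closure({"A"}, lambda rs: rs[::-1])   # rules in reverse order
# both heuristics reach the same closure {"A", "B", "C", "D"}
```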
>
> You have done something new, but not so new as to be in a totally
> different dimension.
>
> YKY
I have some ideas more like that too but I've postponed trying to sell them
to others, for the moment ;-) ... it's hard enough to sell fairly basic stuff
like PLN ...
Look for some stuff on the ap
>
> As we have discussed a while back on the OpenCog mail list, I would like to
> see a RDF interface to some level of the OpenCog Atom Table. I think that
> would suit both YKY and myself. Our discussion went so far as to consider
> ways to assign URI's to appropriate atoms.
Yes, I still think
s now).
I wonder why you don't join Stephen Reed on the texai project? Is it
because you don't like the open-source nature of his project?
ben
On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> One thing I don't get, YKY, is why you think you are goi
On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
> Hi Ben,
>
> Note that I did not pick FOL as my starting point because I wanted to
> go against you, or be a troublemaker. I chose it because that's what
> the textbooks I read were using. There is nothing personal her
> More likely though, is that your algorithm is incomplete wrt FOL, ie,
> there may be some things that FOL can infer but PLN can't. Either
> that, or your algorithm may be actually slower than FOL.
FOL is not an algorithm, it's a representational formalism...
As compared to standard logical the
I would imagine so, but I havent thought about the details
I am traveling now but will think about this when I get home and can
refresh my memory by rereading the appropriate sections of
Probabilistic Robotics ...
ben
On 6/2/08, Bob Mottram <[EMAIL PROTECTED]> wrote:
> 2008/6/2 Ben
> I think it's fine that you use the term "atom" in your own way. The
> important thing is, whatever the objects that you attach probabilities
> to, that class of objects should correspond to *propositions* in FOL.
> From there it would be easier for me to understand your ideas.
Well, no, we atta