mathematical axioms for (partially-)logic-based AGI such as
OpenCogPrime.
--Abram
On Mon, Oct 20, 2008 at 9:45 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I do not understand what kind of understanding of noncomputable numbers you
think a human has, that AIXI could not have. Could you give
On Mon, Oct 20, 2008 at 12:07 PM, Ed Porter [EMAIL PROTECTED] wrote:
As I said in my last email, since the Wikipedia article on constant-weight
codes said "apart from some trivial observations, it is generally
impossible to compute these numbers in a straightforward way." And since all
of the
I also don't understand whether A(n,d,w) is the number of sets where the
Hamming distance is exactly d (as it would seem from the text of
http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
number of sets where the Hamming distance is d or less. If the former case
is true
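For what it's worth, in standard coding-theory usage A(n,d,w) is the maximum
number of length-n binary words of weight exactly w whose pairwise Hamming
distance is at least d, so d is a minimum distance, not an exact one. Here is
a brute-force Python sketch of that definition (my illustration, not code
from the thread; feasible only for tiny n, consistent with the Wikipedia
caveat quoted above):

import itertools

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def A(n, d, w):
    # all length-n binary words of weight exactly w
    words = [c for c in itertools.product((0, 1), repeat=n) if sum(c) == w]
    best = 0
    def extend(code, rest):
        nonlocal best
        best = max(best, len(code))
        for i, cand in enumerate(rest):
            if all(hamming(cand, prev) >= d for prev in code):
                extend(code + [cand], rest[i + 1:])
    extend([], words)
    return best

print(A(5, 4, 3))  # 2, e.g. {11100, 10011}; no third word fits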
On Mon, Oct 20, 2008 at 4:04 PM, Eric Burton [EMAIL PROTECTED] wrote:
Ben Goertzel says that there is no true defined method to the scientific
method (and Mark Waser is clueless for thinking that there is).
That is not what I said.
My views on the philosophy of science are given here
corresponds to a bounded-weight code rather than a constant-weight code.
I already forwarded you a link to a paper on bounded-weight codes, which
are also combinatorially intractable and have been studied only via
computational analysis.
-- Ben G
I am not sure about your statements 1 and 2. Generally responding,
I'll point out that uncomputable models may compress the data better
than computable ones. (A practical example would be fractal
compression of images. Decompression is not exactly a computation
because it never halts, we
halting. Then, the universal statement
"The box is always right" couldn't hold in any computable version of U.
--Abram
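A toy illustration of that point (mine, not from the thread): fractal/IFS
decompression iterates a contractive map whose fixed point is the decoded
signal, so the exact output is reached only in the limit, and any halting
decompressor returns an approximation. A Python sketch with a made-up
contraction:

import numpy as np

def decode_step(x):
    # hypothetical contractive "decode" operator on a 1-D signal:
    # downsample by 2, tile back to full length, then scale and offset
    half = x[::2]
    tiled = np.concatenate([half, half])
    return 0.5 * tiled + 0.25  # contraction factor 0.5 guarantees convergence

x = np.zeros(8)          # any starting signal works
for _ in range(50):      # finitely many passes = approximation of the attractor
    x = decode_step(x)
print(x)                 # approaches the fixed point (all 0.5 here)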
On Mon, Oct 20, 2008 at 3:01 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Yes, if we live in a universe that has Turing-uncomputable physics, then
obviously AIXI
On Mon, Oct 20, 2008 at 5:29 PM, Abram Demski [EMAIL PROTECTED] wrote:
Ben,
[my statement] seems to incorporate the assumption of a finite
period of time because a finite set of sentences or observations must
occur during a finite period of time.
A finite set of observations, sure, but a
driving the population of different attractors could have different timing
or timing patterns, and if the auto-associativity were sensitive to such
timing, this problem could be greatly reduced.
Ed Porter
explanation is enough... who was the Guru for Humankind?
Thanks,
--Abram
On Sun, Oct 19, 2008 at 5:39 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Abram,
I find it more useful to think in terms of Chaitin's reformulation of
Gödel's Theorem:
http://www.cs.auckland.ac.nz/~chaitin
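For reference, a paraphrase (mine, not text from the thread) of Chaitin's
version: for any consistent, computably axiomatized formal system F there is
a constant c_F such that F proves no statement of the form K(s) > c_F, even
though all but finitely many strings s actually satisfy it. In symbols:

\exists c_F \;\forall s :\; F \nvdash\; K(s) > c_F

So F can certify only a bounded amount of algorithmic information, which is
the sense in which Chaitin reads Gödel information-theoretically.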
has lectured as opposed to taught.
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 5:26 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI
*Any* human who can understand language beyond a certain point (say
good scientific evaluation if taught the rules and willing to abide by
them? Why or why not?
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 5:52 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI
Mark,
and more data. Using that story as an example shows that you don't
understand how to properly run a scientific evaluative process.
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 6:07 PM
Subject: Re: AW: AW: [agi] Re
And why don't we keep this on the level of scientific debate rather than
arguing insults and vehemence and confidence? That's not particularly good
science either.
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 6:31 PM
And why don't we keep this on the level of scientific debate rather than
arguing insults and vehemence and confidence? That's not particularly good
science either.
Right ... being unnecessarily nasty is neither good nor bad science, it's
just irritating for others to deal with
ben g
of many examples one could find. But the main reason against embodied
linguistic AGI for a first-generation AGI is the amount of work necessary
to build it. I do not think that the utility-to-cost ratio is favorable.
- Matthias
Ben Goertzel wrote:
That is not clear
consulting my assortment of reference dictionaries...
On 10/16/08, Ben Goertzel [EMAIL PROTECTED] wrote:
I completely agree that puzzles can be ever so much more interesting when
you can successfully ignore that they cannot possibly lead to anything
useful. Further, people who point out
Matt wrote:
I think the source of our disagreement is the "I" in RSI. What does it
mean to improve? From Ben's OpenCog roadmap (see
http://www.opencog.org/wiki/OpenCogPrime:Roadmap ) I think it is clear
that Ben's definition of improvement is Turing's definition of AI: more
like a human. In
or infeasible. I do not think he presented any such thing; I think he
presented an opinion in the guise of a proof. It may be a reasonable
opinion, but that's very different from a proof.
-- Ben G
On Fri, Oct 17, 2008 at 6:26 PM, William Pearson [EMAIL PROTECTED] wrote:
2008/10/17 Ben Goertzel
That is good to see!
ben g
On Thu, Oct 16, 2008 at 11:22 AM, Abram Demski [EMAIL PROTECTED] wrote:
I'll vote for the split, but I'm concerned about exactly where the
line is drawn.
--Abram
On Wed, Oct 15, 2008 at 11:01 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi all,
I have been thinking
their time building planes rather than laboriously poking holes in the
intuitively-obviously-wrong supposed-impossibility-proofs of what they
were doing...
ben g
On Thu, Oct 16, 2008 at 11:38 AM, Tim Freeman [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
On the other hand
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 16, 2008 11:32 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?
OK, I see what you're asking now
I think some bounds on the number
combinations), and I was more interested in lower bounds.
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 16, 2008 2:45 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?
I am pretty sure
:40 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
One more addition...
Actually the Hamming-code problem is not exactly the same as your problem
because it does not place an arbitrary limit on the size of the cell
assembly... oops
But I'm not sure why this limit is relevant, since cell
, Ben Goertzel [EMAIL PROTECTED] wrote:
They also note that according to their experiments, bounded-weight codes
don't offer much improvement over constant-weight codes, for which
analytical results *are* available... and for which lower bounds are given
at
http://www.research.att.com/~njas
;-)
-- Ben
On Thu, Oct 16, 2008 at 6:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Ed,
After a little more thought, it occurred to me that this problem was
already solved in coding theory ... just take the bound given here, with
q=2:
http://en.wikipedia.org/wiki/Hamming_bound
The bound
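That bound is easy to evaluate (my sketch, not code from the thread),
following the sphere-packing formula on the linked page with q=2:

from math import comb

def hamming_bound(n, d, q=2):
    # A_q(n,d) <= q^n / sum_{k=0}^{t} C(n,k) * (q-1)^k, t = floor((d-1)/2)
    t = (d - 1) // 2
    volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n // volume

print(hamming_bound(7, 3))  # 16: met with equality by the [7,4] Hamming code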
I completely agree that puzzles can be ever so much more interesting when
you can successfully ignore that they cannot possibly lead to anything
useful. Further, people who point out the reasons that they cannot succeed
are really boors and should be censored. This entire thread should be
, we could dramatically accelerate the
progress of medieval society toward modernity, for sure
-- Ben G
On Thu, Oct 16, 2008 at 9:00 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Thu, 10/16/08, Ben Goertzel [EMAIL PROTECTED] wrote:
If some folks want to believe that self-modifying AGI
Right, but his problem is equivalent to bounded-weight, not constant-weight
codes...
On Thu, Oct 16, 2008 at 10:04 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Fri, Oct 17, 2008 at 5:31 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I still think this combinatorics problem is identical
at 10:23 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Fri, Oct 17, 2008 at 6:05 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Right, but his problem is equivalent to bounded-weight, not
constant-weight
codes...
Why? Bounded-weight codes are upper-bounded by Hamming weight, which
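To make the distinction concrete (my sketch, not from the thread): a
constant-weight code draws its words from those of weight exactly w, while a
bounded-weight code may use any weight up to w. Counting the candidate words
by brute force:

import itertools

def candidate_words(n, w, bounded):
    ok = (lambda s: s <= w) if bounded else (lambda s: s == w)
    return [c for c in itertools.product((0, 1), repeat=n) if ok(sum(c))]

print(len(candidate_words(6, 3, bounded=False)))  # C(6,3) = 20
print(len(candidate_words(6, 3, bounded=True)))   # 1+6+15+20 = 42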
As Ben has pointed out, language understanding is useful for teaching an
AGI. But if we use the domain of mathematics, we can teach an AGI by formal
expressions more easily, and we understand these expressions as well.
- Matthias
That is not clear -- no human has learned math that way.
We learn math
palimpsest learning scheme for Hopfield nets,
specialized for simple experiments with character arrays.
-- Ben G
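For readers who haven't met the term, here is a toy version of the idea (my
sketch, not the specific scheme Ben mentions): a palimpsest Hopfield net
stores patterns Hebbianly but keeps the weights bounded, so new memories
gradually overwrite old ones instead of producing catastrophic blackout. The
parameters below are illustrative only:

import numpy as np

rng = np.random.default_rng(0)
N = 100                      # neurons
CLIP = 0.03                  # weight bound; tighter bound = faster forgetting
W = np.zeros((N, N))

def store(p):
    # Hebbian increment followed by clipping: the "palimpsest" step
    global W
    W = np.clip(W + np.outer(p, p) / N, -CLIP, CLIP)
    np.fill_diagonal(W, 0.0)

def recall(probe, steps=25):
    s = probe.astype(float).copy()
    for _ in range(steps):   # synchronous sign updates, for simplicity
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

def overlap(a, b):
    return float(a @ b) / N  # 1.0 means perfect recall

patterns = [rng.choice([-1.0, 1.0], N) for _ in range(20)]
for p in patterns:
    store(p)
print(overlap(recall(patterns[-1]), patterns[-1]))  # recent: typically high
print(overlap(recall(patterns[0]), patterns[0]))    # oldest: typically faded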
On Thu, Oct 16, 2008 at 10:30 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Fri, Oct 17, 2008 at 6:26 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Oh, you're right...
I
On Thu, Oct 16, 2008 at 11:21 PM, Abram Demski [EMAIL PROTECTED] wrote:
On Thu, Oct 16, 2008 at 10:32 PM, Dr. Matthias Heger [EMAIL PROTECTED]
wrote:
In theorem proving, computers are weak too, compared to the performance of
good mathematicians.
I think Ben asserted this as well (maybe during an
15, 2008 at 10:49 AM, Jim Bromer [EMAIL PROTECTED] wrote:
On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Actually, I think COMP=false is a perfectly valid subject for discussion
on
this list.
However, I don't think discussions of the form I have all the answers
Richard,
One of the mental practices I learned while trying to save my first marriage
(an effort that ultimately failed) was: when criticized, rather than
reacting emotionally, to analytically reflect on whether the criticism is
valid. If it's valid, then I accept it and evaluate it I should
to be the only public forum for AGI discussion out there (are
there others, anyone?), so presumably there's a good chance it would show up
here, and that is good for you and others actively involved in AGI research.
Best,
Terren
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote
By the way, I'm avoiding responding to this thread till a little time has
passed and a larger number of lurkers have had time to pipe up if they wish
to...
ben
On Wed, Oct 15, 2008 at 3:07 PM, Bob Mottram [EMAIL PROTECTED] wrote:
2008/10/15 Ben Goertzel [EMAIL PROTECTED]:
What are your
What I am trying to debunk is the perceived risk of a fast takeoff
singularity launched by the first AI to achieve superhuman intelligence. In
this scenario, a scientist with an IQ of 180 produces an artificial
scientist with an IQ of 200, which produces an artificial scientist with an
IQ
I don't really understand why moving to the forum presents any sort of
technical or logistical issues... just personal ones from some of the
participants here.
It's a psychological issue. I rarely allocate time to participate in
forums, but if I decide to pipe a mailing list to my inbox,
--- On Wed, 10/15/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
To: agi
such criticism far better than me. Obviously the same
goes for anyone else on the list who would look for funding... I'd want to
see you defend your ideas, especially in the absence of peer-reviewed
journals (something the JAGI hopes to remedy obv).
Terren
--- On Wed, 10/15/08, Ben Goertzel [EMAIL
like [agi feasibility]
in their subject lines and allow things to otherwise continue as they are.
Then, when you fail, it won't poison other AGI efforts. Perhaps Matt or
someone would like to separately monitor those postings.
Steve Richfield
===
On 10/15/08, Ben Goertzel [EMAIL
This widget seems to integrate mailing lists and forums
in a desirable way...
http://mail2forum.com/forums/
http://mail2forum.com/v12-stable-release/
I haven't tried it out though, just browsed the docs...
-- Ben
Hi,
Also, you are right that it does not apply to many real-world problems.
Here my objection (as stated in my AGI proposal, but perhaps not clearly) is
that creating an artificial scientist with slightly-above-human intelligence
won't launch a singularity either, but for a different reason.
Matt wrote, in reply to me:
An AI twice as smart as any human could figure
out how to use the resources at his disposal to
help him create an AI 3 times as smart as any
human. These AIs will not be brains in vats.
They will have resources at their disposal.
It depends on what you
Hi,
My main impression of the AGI-08 forum was one of over-dominance by
singularity-obsessed and COMP thinking, which must have freaked me out a
bit.
This again is completely off-base ;-)
COMP, yes ... Singularity, no. The Singularity was not a theme of AGI-08
and the vast majority of
, 2008 at 5:27 PM, Colin Hales
[EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Hi,
My main impression of the AGI-08 forum was one of over-dominance by
singularity-obsessed and COMP thinking, which must have freaked me out a
bit.
This again is completely off-base ;-)
I also found my feeling
Matt,
But no matter. Whichever definition you accept, RSI is not a viable path to
AGI. An AI that is twice as smart as a human can make no more progress than
2 humans. You don't have automatic self-improvement until you have AI that
is billions of times smarter. A team of a few people isn't
learning and
grounding. But I don't think this makes their approaches **more
computational** than a CA model of QED ... it just makes them **bad
computational models of cognition** ...
-- Ben G
On Tue, Oct 14, 2008 at 11:01 PM, Colin Hales
[EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Again
PROTECTED] wrote:
Ben Goertzel wrote:
Sure, I know Pylyshyn's work ... and I know very few contemporary AI
scientists who adopt a strong symbol-manipulation-focused view of cognition
like Fodor, Pylyshyn and so forth. That perspective is rather dated by
now...
But when you say
Where
I think. (I may be mistaken.)
Jim Bromer
On Mon, Oct 13, 2008 at 12:57 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
I agree it is far nicer when advocates of theories are willing to
gracefully
entertain constructive criticisms of their theories.
However, historically, I'm not sure it's
But when you see someone, theorist or critic, who almost never
demonstrates any genuine capacity for reexamining his own theories or
criticisms from any critical vantage point whatsoever, then it's a strong
negative indicator.
Jim Bromer
I would be hesitant to draw strong conclusions
the HTML file just
fine, thanks!
On Mon, Oct 13, 2008 at 1:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
I was eager to debunk your supposed debunking of recursive
self-improvement, but I found that when I tried to open that PDF file
On Mon, Oct 13, 2008 at 11:30 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Ben,
Thanks for the comments on my RSI paper. To address your comments,
You seem to be addressing minor lacunae in my wording, while ignoring my
main conceptual and mathematical point!!!
1. I defined improvement as
Colin wrote:
The only working, known model of general intelligence is the human. If we
base AGI on anything that fails to account scientifically and completely for
*all* aspects of human cognition, including consciousness, then we open
ourselves to critical inferiority... and the rest of
are already doing that. Human culture is improving itself by
accumulating knowledge, by becoming better organized through communication
and specialization, and by adding more babies and computers.
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Mon, 10/13/08, Ben Goertzel [EMAIL PROTECTED] wrote
background of the related discussions.
Pei
On Sat, Oct 11, 2008 at 8:34 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
What this highlights for me is the idea that NARS truth values attempt
to reflect the evidence so far, while probabilities attempt to reflect
the world
mainly with the coordinated activity of a large number of different
processes, are harder to describe in detail in specific instances. One can
describe the underlying processes but this then becomes technical and
lengthy!!
-- Ben
On Sat, Oct 11, 2008 at 8:34 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
What this highlights for me is the idea that NARS truth values attempt
to reflect the evidence so far, while probabilities attempt to reflect
the world
I agree that probabilities attempt to reflect
Pei etc.,
First, a high-level comment here, mostly to the non-Pei audience ... then I'll
respond to some of the details:
This dialogue -- so far -- feels odd to me because I have not been
defending anything special, peculiar or inventive about PLN here.
There are some things about PLN that would
Brad,
But, human intelligence is not the only general intelligence we can imagine
or create. IMHO, we can get to human-beneficial, non-human-like (but,
still, human-inspired) general intelligence much quicker if, at least for
AGI 1.0, we avoid the twin productivity sinks of NLU and
oops, I meant 1895 ... damn that dyslexia ;-) ... though the other way was
funnier, it was less accurate!!
On Sat, Oct 11, 2008 at 8:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I'm only pointing out something everybody here knows full well:
embodiment in various forms has, so far, failed
I'm only pointing out something everybody here knows full well:
embodiment in various forms has, so far, failed to provide any real help in
cracking the NLU problem. Might it in the future? Sure. But the key word
there is "might".
To me, you sound like a guy in 1985 saying: So far, wings
On Sat, Oct 11, 2008 at 7:38 AM, Mike Tintner [EMAIL PROTECTED] wrote:
As I understand the way you guys and AI generally work, you create
well-organized spaces which your programs can systematically search for
options. Let's call them nets - which have systematic, well-defined and
On Sat, Oct 11, 2008 at 9:46 AM, Mike Tintner [EMAIL PROTECTED] wrote:
I guess the obvious follow-up question is when your systems search among
options for a response to a situation, they don't search in a systematic way
through spaces of options? They can just start anywhere and end up
(and goodbye),
Brad
Ben Goertzel wrote:
A few points...
1) Closely associating embodiment with GOFAI is just flat-out historically
wrong. GOFAI refers to a specific class of approaches to AI that were
pursued a few decades ago, which were not centered on embodiment as a key
concept or aspect.
2
Thanks Pei!
This is an interesting dialogue, but indeed, I have some reservations about
putting so much energy into email dialogues -- for a couple reasons:
1) because, once they're done, the text generated basically just vanishes
into messy, barely-searchable archives.
2) because I tend to
On Sat, Oct 11, 2008 at 12:27 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
Thanks. But you didn't reply to the surely central-to-AGI question of
whether this free-form knowledge base is or can be multi-domain - and
particularly involve radically conflicting sets of rules about how given
, Ben Goertzel [EMAIL PROTECTED] wrote:
Brad,
Sorry if my response was somehow harsh or inappropriate, it really wasn't
intended as such. Your contributions to the list are valued. These last
few weeks have been rather tough for me in my entrepreneurial role (it's not
the best time
Can you provide me with a link to how you deal with explanations and
reasons in OCP?
Jim Bromer
That topic is so broad I wouldn't know what to do except to point you to PLN
generally...
http://www.amazon.com/Probabilistic-Logic-Networks-Comprehensive-Framework/dp/0387768718
(alas the book
I guess I'll try #3 and see what happens. Recently, I've decided to
use Lisp as the procedural language, so that makes my approach even
more similar to OCP's. One remaining big difference is that my KB is
sentential but OCP's is graphical. Maybe we should spend some time
discussing the
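To make that contrast concrete (my sketch; the graph side is only a rough
stand-in for OCP's AtomSpace-style representation, not its actual API): the
same fact held sententially versus as a typed graph:

sentential_kb = [
    # one stored sentence (a whole logical formula)
    ("implies", ("isa", "X", "cat"), ("isa", "X", "animal")),
]

graph_kb = {
    # the same content as typed nodes plus one typed link
    "nodes": {"cat", "animal"},
    "links": [("InheritanceLink", "cat", "animal")],
}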
Hi,
What this highlights for me is the idea that NARS truth values attempt
to reflect the evidence so far, while probabilities attempt to reflect
the world
I agree that probabilities attempt to reflect the world.
Well said. This is exactly the difference between an
getting through? (Going out
to the list) What do you call that?
ATM/Mentifex
--
http://code.google.com/p/mindforth/
...?
YKY
as a System, or if you've been
influenced by Niklas Luhmann on any level.
Terren
--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:
There is a sense in which social groups are mindplexes: they have
mind-ness on the collective level, as well as on the individual level.
https
I wonder what your thoughts are about it? To what extent has that
influenced your philosophy? Not looking for an essay here, but I'd be
interested in your brief reflections on it.
Terren
--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED
If my impression of these discussions
is accurate, if the partisan arguments for logic, probability or
neural networks and the like are really arguments for choosing one or
the other as a preponderant decision process, then it is my opinion
that the discussants are missing the major problem.
Abram,
I finally read your long post...
The basic idea is to treat NARS truth values as representations of a
statement's likelihood rather than its probability. The likelihood of
a statement given evidence is the probability of the evidence given
the statement. Unlike probabilities,
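A minimal numeric illustration of that distinction (mine, not Abram's
formalism): likelihoods of competing hypotheses need not sum to 1, so they
behave like summaries of how well each hypothesis fits the evidence so far,
rather than like a probability distribution over hypotheses:

from math import comb

def likelihood(h, k, n):
    # P(evidence | hypothesis): chance of k positive cases in n trials
    # if the true frequency were h
    return comb(n, k) * h ** k * (1 - h) ** (n - k)

k, n = 7, 10  # observed: 7 positive cases out of 10
for h in (0.5, 0.7, 0.9):
    print(h, round(likelihood(h, k, n), 4))
# 0.5 0.1172 / 0.7 0.2668 / 0.9 0.0574 -- they do not sum to 1;
# they rank hypotheses by fit to the evidence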