On Mon, Sep 22, 2008 at 1:34 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
On the other hand, if intelligence is in large part a systems phenomenon,
that has to do with the interconnection of reasonably-intelligent components
in a reasonably-intelligent way (as I have argued in many prior
--- On Sun, 9/21/08, David Hart [EMAIL PROTECTED] wrote:
On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Training will be the overwhelming cost of AGI. Any language model
improvement will help reduce this cost.
How do you figure that training will cost more than
Hmm. My bot mostly repeats what it hears.
bot Monie: haha. r u a bot ?
bot cyberbrain: not to mention that in a theory complex enough with
a large enough number of parameters. one can interpret anything.
even things that are completely physically inconsistent with each
other. i suggest actually
Ok, most of its replies here seem to be based on the first word of
what it's replying to. But it's really capable of more lateral
connections.
wijnand yeah i use it to add shortcuts for some menu functions i use a lot
bot wijnand: TOMACCO!!!
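As a purely illustrative guess at the mechanism described above (the bot's real code is not in this thread; the class and behaviour below are my own Python sketch): remember every line heard, index it by its first word, and reply with a remembered line that shares the first word of the incoming message.

import random
from collections import defaultdict

class EchoBot:
    # Toy sketch of a bot that "mostly repeats what it hears",
    # keying replies on the first word of the incoming line.
    # This is a guess at the described behaviour, not the actual bot.
    def __init__(self):
        self.memory = defaultdict(list)   # first word -> lines heard so far

    def hear(self, line):
        words = line.lower().split()
        if words:
            self.memory[words[0]].append(line)

    def reply(self, line):
        words = line.lower().split()
        candidates = self.memory.get(words[0], []) if words else []
        return random.choice(candidates) if candidates else "TOMACCO!!!"

bot = EchoBot()
bot.hear("r u a bot ?")
print(bot.reply("r they serious?"))   # echoes a remembered line starting with "r"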
On 9/21/08, Eric Burton [EMAIL PROTECTED] wrote:
--- On Sat, 9/20/08, Mike Tintner [EMAIL PROTECTED] wrote:
Matt: A more appropriate metaphor is that text compression
is the altimeter
by which we measure progress. (1)
Matt,
Now that sentence is a good example of general intelligence
- forming a new
connection between domains -
--- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:
A more appropriate metaphor is that text compression is the altimeter by
which we measure progress.
A major problem with this idea is that, according to this
altimeter, gzip is vastly more intelligent than a chimpanzee or a
Now if you want to compare gzip, a chimpanzee, and a 2-year-old child using
language prediction as your IQ test, then I would say that gzip falls in the
middle. A chimpanzee has no language model, so it is lowest. A 2-year-old
child can identify word boundaries in continuous speech, can
--- On Sat, 9/20/08, Pei Wang [EMAIL PROTECTED] wrote:
Matt,
I really hope NARS can be simplified, but until you give me the
details, such as how to calculate the truth value in your converse
rule, I cannot see how you can do the same things with a simpler
design.
You're right. Given
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Hmmm I am pretty strongly skeptical of intelligence tests that do not
measure the actual functionality of an AI system, but rather measure the
theoretical capability of the structures or processes or data inside the
system...
The
I'm not building AGI. (That is a $1 quadrillion problem). I'm studying
algorithms for learning language. Text compression is a useful tool for
measuring progress (although not for vision).
OK, but the focus of this list is supposed to be AGI, right ... so I suppose
I should be forgiven for
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Text compression is IMHO a terrible way of measuring incremental progress
toward AGI. Of course it may be very valuable for other purposes...
It is a way to measure progress in language modeling, which is an important
component of AGI
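To make the "altimeter" reading concrete, here is a minimal sketch of the idea (my own illustration, not Matt's benchmark code): a better model of the text's regularities yields a smaller encoding, so compressed bits per character is a crude, easily measured proxy for language-modeling progress. zlib and the sample string are illustrative stand-ins for a real compressor and a real corpus.

import zlib

def bits_per_character(text, level=9):
    # Compress the text and report compressed size in bits per character.
    # Lower is better: a stronger model of the text needs fewer bits,
    # which is the sense in which compression acts as an altimeter.
    data = text.encode("utf-8")
    compressed = zlib.compress(data, level)
    return 8.0 * len(compressed) / max(len(text), 1)

sample = "the cat sat on the mat " * 100   # illustrative, highly repetitive text
print("%.3f bits/char" % bits_per_character(sample))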
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Text compression is IMHO a terrible way of measuring incremental progress
toward AGI. Of course it may be very valuable for other purposes...
It is a way to
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Text compression is IMHO a terrible way of measuring incremental progress
toward AGI. Of course it may
On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Training will be the overwhelming cost of AGI. Any language model
improvement will help reduce this cost.
How do you figure that training will cost more than designing, building and
operating AGIs? Unlike training a
Matt,
So, what formal language model can solve this problem?
An FL that clearly separates basic semantic concepts like objects,
attributes, time, space, actions, roles, relationships, etc., plus core
subjective concepts, e.g. want, need, feel, aware, believe, expect,
unreal/fantasy. Humans have senses
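Purely as an illustration of what "clearly separates" might look like (nothing below is proposed in the thread; the class names and the example encoding are hypothetical): tag every term with one of the listed categories and build statements out of tagged terms.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Tuple, Union

class Category(Enum):
    # basic semantic concepts listed above
    OBJECT = auto()
    ATTRIBUTE = auto()
    TIME = auto()
    SPACE = auto()
    ACTION = auto()
    ROLE = auto()
    RELATIONSHIP = auto()
    # core subjective concepts listed above
    WANT = auto()
    NEED = auto()
    FEEL = auto()
    BELIEVE = auto()
    EXPECT = auto()

@dataclass(frozen=True)
class Term:
    label: str
    category: Category

@dataclass(frozen=True)
class Statement:
    predicate: Term                                    # an ACTION, RELATIONSHIP, or subjective operator
    arguments: Tuple[Union["Term", "Statement"], ...]  # terms or nested statements it relates

# Hypothetical encoding of "the agent believes the coat is black":
coat_is_black = Statement(Term("has-attribute", Category.RELATIONSHIP),
                          (Term("coat", Category.OBJECT), Term("black", Category.ATTRIBUTE)))
belief = Statement(Term("believe", Category.BELIEVE),
                   (Term("agent", Category.ROLE), coat_is_black))
print(belief)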
On Fri, Sep 19, 2008 at 11:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
So perhaps someone can explain why we need formal knowledge representations
to reason in AI.
Because the biggest open subproblem right now is dealing with
procedural, as opposed to merely declarative or reflexive,
--- On Fri, 9/19/08, Jan Klauck [EMAIL PROTECTED] wrote:
Formal logic doesn't scale up very well in humans. That's why this
kind of reasoning is so unpopular. Our capacities are that
small and we connect to other human entities for a kind of
distributed problem solving. Logic is just a tool
On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Fri, 9/19/08, Jan Klauck [EMAIL PROTECTED] wrote:
Formal logic doesn't scale up very well in humans. That's why this
kind of reasoning is so unpopular. Our capacities are that
small and we connect to other human
On Sat, Sep 20, 2008 at 6:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:
If formal reasoning were a solved problem in AI, then we would have
theorem-provers that could prove deep, complex theorems unassisted. We
don't. This indicates
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:
It seems a big stretch to me to call theorem-proving guidance a language
modeling problem ... one may be able to make sense of this statement, but
only by treating the concept of language VERY
Matt,
I really hope NARS can be simplified, but until you give me the
details, such as how to calculate the truth value in your converse
rule, I cannot see how you can do the same things with a simpler
design.
NARS has this conversion rule, which, with the deduction rule, can
replace
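For readers following the exchange, the sketch below is my paraphrase of the conversion (converse) truth-value function as published in the NARS literature, not something quoted from this thread; the formula and the evidential-horizon constant k are assumptions to check against Pei Wang's publications. The idea is that a premise <P --> S> with truth (f, c) supplies only positive evidence, of amount f*c, for the converse <S --> P>.

def nars_conversion(f, c, k=1.0):
    # Truth value of <S --> P> derived by conversion from <P --> S> with truth (f, c).
    # Assumption: per the NARS literature, the premise contributes w+ = f*c units of
    # positive evidence and no negative evidence; k is the evidential horizon.
    w_plus = f * c
    frequency = 1.0                      # no negative evidence, so frequency is 1
    confidence = w_plus / (w_plus + k)   # confidence stays well below the premise's
    return frequency, confidence

print(nars_conversion(0.9, 0.9))   # (1.0, ~0.45): a strong premise yields a weak converse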
To pursue an overused metaphor, to me that's sort of like trying to
understand flight by carefully studying the most effective high-jumpers.
OK, you might learn something, but you're not getting at the crux of the
problem...
A more appropriate metaphor is that text compression is the
Pei: In a broad sense, formal logic is nothing but
domain-independent and justifiable data manipulation schemes. I
haven't seen any argument for why AI cannot be achieved by
implementing that
Have you provided a single argument as to how logic *can* achieve AI - or
to be more precise,
Ben:
Mike:
(And can you provide an example of a single surprising metaphor or analogy
that has ever been derived logically? Jiri said he could - but didn't.)
It's a bad question -- one could derive surprising metaphors or analogies by
random search, and that wouldn't prove anything
Mike,
I understand that my task is to create an AGI system, and I'm working on
it ...
The fact that my in-development, partial AGI system has not yet demonstrated
advanced intelligence does not imply that it will not do so once completed.
No, my AGI system has not yet discovered surprising
and not to forget...
SATAN GUIDES US TELEPATHICLY THROUGH RECTAL THERMOMETERS. WHY DO YOU THINK
ABOUT META-REASONING?
On Sat, Sep 20, 2008 at 11:38 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Mike,
I understand that my task is to create an AGI system, and I'm working on
it ...
The fact
Ben,
Not one metaphor below works.
You have in effect accepted the task of providing a philosophy and explanation
of your AGI and your logic - you have produced a great deal of such stuff
(quite correctly). But none of it includes the slightest explanation of how
logic can produce AGI - or,
Mike
If you want an explanation of why I think my AGI system will work, please
see
http://opencog.org/wiki/OpenCogPrime:WikiBook
The argument is complex and technical and it would not be a good use of my
time to recapitulate it via email!!
Personally I do think the metaphor
COWS FLY LIKE
Ben, Just to be clear, when I said "no argument re how logic will produce
AGI," I meant, of course, as per the previous posts, how logic will
[surprisingly] cross domains etc. That, for me, is the defining characteristic
of AGI. All the rest is narrow AI.
--- On Fri, 9/19/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
Try "What's the color of Dan Brown's black coat?" What's the excuse
for a general problem solver to fail in this case? NLP? Then it
should use a formal language or the like. Google uses relatively good
search algorithms but decent general
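To spell out why this example is embarrassing for a general problem solver (the pattern below is my own hypothetical illustration, not anything proposed in the thread): the answer is stated inside the question, so even a trivial extraction rule can answer it without any outside knowledge.

import re

def color_from_question(question):
    # Answer questions of the form "What's the color of X's <color> coat?"
    # The color is given in the question itself, so no world knowledge is needed.
    # The regex is a hypothetical illustration, not a general NLP solution.
    m = re.search(r"color of .+?'s (\w+) coat", question, re.IGNORECASE)
    return m.group(1) if m else None

print(color_from_question("What's the color of Dan Brown's black coat?"))   # black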
Matt wrote:
There seems to be a lot of effort to implement reasoning in knowledge
representation systems, even though it has little to do with how we actually
think.
Please note that not all of us in the AGI field are trying to closely
emulate human thought. Human-level thought does not
I think the whole idea of a semantic layer is to provide the kind of
mechanism for abstract reasoning that evolution seems to have built
into the human brain. You could argue that those faculties are
acquired during one's life, using only a weighted neural net (brain),
but it seems reasonable to
On Sat, Sep 20, 2008 at 8:46 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
But if you can learn these types of patterns then with no additional effort
you can learn patterns that directly solve the problem...
This kind of reminds me of the "people think in their natural
language" theory that Steven
Matt,
People who haven't studied
logic or its notation can certainly learn to do this type of reasoning.
Formal logic doesn't scale up very well in humans. That's why this
kind of reasoning is so unpopular. Our capacities are that small and
we connect to other human entities for a kind of