On Fri, Oct 17, 2008 at 8:17 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Likewise, writing software has to be understood in terms of natural language
> learning and modeling. A programming language is a compromise between what
> humans can understand and what machines can understand. Humans lea
--- On Thu, 10/16/08, Trent Waddington <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 17, 2008 at 8:17 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
>
> > Natural language is poorly understood, which is
> > exactly why we need to study it.
>
> I don't disagree, but I think a lot of NLP is done in
> iso
> As Ben has pointed out, language understanding is useful for teaching an AGI.
> But if we use the domain of mathematics, we can teach an AGI through formal
> expressions more easily, and we understand those expressions well too.
>
> - Matthias
That is not clear -- no human has learned math that way.
We lear
On Thu, Oct 16, 2008 at 10:32 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> In theorem proving, computers are still weak compared to the performance of
> good mathematicians.
I think Ben asserted this as well (maybe during an OpenCog tutorial?).
Is this proven? My intuition is that a computer could
"Your intuition is dead wrong based on the last decades of work in
automated theorem proving!!"
Ben,
Thanks... though I do find that rather strange... can you cite any
particular references?
The reason I find it surprising is that it suggests the existence of
actual shortcuts, rather than only
On Fri, Oct 17, 2008 at 12:32 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> In my opinion, language itself is not really a domain for intelligence at all.
> Language is just a communication protocol. You have patterns of a certain
> domain in your brain, and you have to translate your internal pattern
The best way to get a sense of the state of the art in automated theorem
proving is to download and play with the leading systems:
http://en.wikipedia.org/wiki/Isabelle_(theorem_prover)
and
http://en.wikipedia.org/wiki/Prover9
Isabelle is fa
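For readers who haven't seen one, a machine-checkable proof of a trivial statement looks something like the following sketch, written in Lean (a comparable proof assistant) rather than Isabelle or Prover9 syntax; this toy example is an illustration of mine, not drawn from either system:

```lean
-- A trivial machine-checkable theorem: addition on naturals commutes.
-- An automated prover's job is to discover a proof term or tactic
-- script like this one without human guidance.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```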
On Lakoff and Nunez, "Where Mathematics Comes From". Dittos. Great book.
I have had to buy multiple copies because I keep loaning it and not
getting it back. Lakoff's embodiment theme is a primary concept for me.
andi
---
agi
Archives: https://www.lis
On Sat, Oct 18, 2008 at 2:39 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> a certain degree (mirror neurons).
Oh you just hit my other annoyance.
"How does that work?"
"Mirror neurons"
IT TELLS US NOTHING.
Trent
Trent: Oh you just hit my other annoyance.
"How does that work?"
"Mirror neurons"
IT TELLS US NOTHING.
Trent,
How do they work? By observing the shape of humans and animals ("what
shape they're in"), our brain and body automatically *shape our bodies to
mirror their shape* (put the
On Sat, Oct 18, 2008 at 9:48 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:
>
> [snip] We understand and think with our whole bodies.
>
Mike, these statements are an *enormous* leap from the actual study of
mirror neurons. It's my hunch that the hypothesis paraphrased above is
generally true, but it
I am well aware that building even *virtual* embodiment (in simulated
worlds) is hard
However, creating human-level AGI is **so** hard that doing other hard
things in order to make the AGI task a bit easier seems to make sense!!
One of the things the OpenCog framework hopes to offer AGI deve
I do appreciate the support of embodiment frameworks. And I really get
the feeling that Matthias is wrong about embodiment because when it comes
down to it, embodiment is an assumption made by people when judging if
something is intelligent. But that's just me.
And what's up with language as jus
David:Mike, these statements are an *enormous* leap from the actual study of
mirror neurons. It's my hunch that the hypothesis paraphrased above is
generally true, but it is *far* from being fully supported by, or understood
via, the empirical evidence.
[snip] these are all original
Mike, I think you won't get a disagreement in principle about the benefits
of melding creativity and rationality, and of grokking/manipulating concepts
in metaphorical wholes. But really, a thoughtful conversation about *how*
the OCP design addresses these issues can't proceed until you've RTFBs.
Trent,
I should have added that our brain and body, by observing the mere
shape/outline of others' bodies, as in Matisse's Dancers, can tell not only
how to *shape* our own outline, but how to "dispose" of our *whole body* -
we transpose/translate (or "flesh out") a static two-dimensional body
Matthias:
I do not agree that body mapping is necessary for general intelligence, but
it would be one of the easier problems to solve today.
In the area of mapping one body onto another (artificial) body, computers
are already very capable:
See the video on this page:
http://www.image-metrics.com/
Matthias: I think you can see here that automated mapping between different
faces is possible, and that the computer can morph smoothly between them. I
think the performance is far better than human imagination could achieve.
http://de.youtube.com/watch?v=nice6NYb_WA
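The point-to-point matching Matthias describes can be sketched, in a deliberately toy form, as linear interpolation between corresponding landmark coordinates. The `morph` helper and the landmark lists below are illustrative assumptions of this sketch, not Image Metrics' actual method:

```python
# Toy sketch of point-to-point face morphing: given corresponding
# landmark coordinates on two faces, blend them linearly.
# (Illustrative only -- real systems also warp and cross-fade pixels.)

def morph(landmarks_a, landmarks_b, t):
    """Interpolate two lists of (x, y) landmarks; t=0 gives face A, t=1 face B."""
    if len(landmarks_a) != len(landmarks_b):
        raise ValueError("landmark sets must correspond point-to-point")
    return [
        ((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
        for (xa, ya), (xb, yb) in zip(landmarks_a, landmarks_b)
    ]

face_a = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]   # hypothetical landmarks
face_b = [(0.0, 1.0), (2.0, 1.0), (1.0, 2.5)]

halfway = morph(face_a, face_b, 0.5)
print(halfway)  # -> [(0.0, 0.5), (2.0, 0.5), (1.0, 2.0)]
```

Sweeping `t` from 0 to 1 gives the smooth morph seen in the video, applied to a handful of points instead of a dense facial mesh.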
Matthias,
Perhaps we're havin
On 10/18/2008 9:27 AM, Mike Tintner wrote:
What rational computers can't do is find similarities between
disparate, irregular objects - via fluid transformation - the essence
of imagination.
So you don't believe that this is possible by finding combinations of
abstract shapes (lines, square
Matthias,
When a programmer (or cameraman) "macroscopic(ally) positions two faces" -
"adjusting them manually" so that they are capable of precise point-to-point
matching, that proceeds from an initial act of visual object recognition -
and indeed imagination, as I have defined it.
He wil
On Sat, Oct 18, 2008 at 3:38 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Matthias,
>
> When a programmer (or cameraman) "macroscopic(ally) positions two faces" -
> "adjusting them manually" so that they are capable of precise point-to-point
> matching, that proceeds from an initial act of visu