On Jan 28, 2008 6:48 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
Tom: "embodied cognitive science" gets 5,310 hits on Google. "cognitive
science" gets 2,730,000 hits. Please back up your statements,
especially ones which talk about "revolutions" in any field.

Check out the wiki article - look at the figures at the bottom, such as
Lakoff & co, & Google them. Che…
On 29/01/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
> The latter. I'm arguing that a disembodied AGI has as much chance of getting
> to know, understand and be intelligent about the world as Tommy - a deaf,
> dumb, blind and generally sense-less kid who's totally autistic and can't
> play any…
On Jan 28, 2008 7:16 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
Gudrun: I think this is not about
intelligence, but it is about our mind being inter-dependent (also via
evolution) with senses and body.

Sorry, I've lost a subsequent post in which you went on to say that the very
terms "mind" and "body" in this context were splitting up something that
can't be…
On Jan 28, 2008 10:02 AM, <[EMAIL PROTECTED]> wrote:
> Wow, this is well worded, structured in a really nice set of feedback loops.

Some like my writing very much; others find it off-putting. I tend to
err toward excessive abstraction, expecting that others will ask for
supporting detail and/or…
On Jan 28, 2008 7:56 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
X: Of course this is a variation on "the grounding problem" in AI. But
do you think some sort of **absolute** grounding is relevant to
effective interaction between individual agents (assuming you think
any such ultimate grounding could even perform a function within a
limited system), or might it…
On Jan 28, 2008 4:36 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
> Are you simply arguing that an embodied AI that can interact with the
> real world will find it easier to learn and develop, or are you
> arguing that there is a fundamental reason why an AI can't develop in
> a purely virtual environment?
On 28/01/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
> The background for me is this: there is a great, untrumpeted revolution
> going on, which is called Embodied Cognitive Science. See Wiki. That is all
> founded on the idea of the "embodied mind". Cognitive science is based on
> the idea that…
Gudrun cont..
Actually I got that wrong - a classic example of the old linguistic biases
and traps - it's more like:
Cog Sci is the idea that thought is a program
Embodied Cog Sci is the idea that there is no thought without sensation,
emotion and movement.
("no mentation without r…
On Jan 28, 2008 2:17 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Can you define what you mean by "decision" more precisely, please?

That's difficult; I don't have it formalized. Something like the
application of knowledge about the world; it's likely to end up an
intelligence-definition-complete pro…
Can you define what you mean by "decision" more precisely, please?
> OK, but why can't they all be dumped in a single 'normal' multiverse?
> If traveling between them is accommodated by 'decisions', there is a
> finite number of them for any given time, so it shouldn't pose
> structural problems.
On Jan 28, 2008 6:40 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Nesov wrote:
> > Exactly. It needs stressing that probability is a tool for
> > decision-making and it has no semantics when no decision enters the
> > picture.
> ...
> > What's it good for if it can't be used (= advance knowledge…
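
To make Nesov's point concrete - probability acquiring meaning only through
the decisions it drives - here is a minimal sketch of an expected-utility
chooser. The framing, names and numbers are illustrative assumptions on my
part, not a formalism anyone in the thread proposed:

from typing import Dict, List, Tuple

def expected_utility(action: str,
                     belief: Dict[str, float],
                     utility: Dict[Tuple[str, str], float]) -> float:
    """Probability-weighted utility of one action across world states."""
    return sum(p * utility[(action, state)] for state, p in belief.items())

def decide(actions: List[str],
           belief: Dict[str, float],
           utility: Dict[Tuple[str, str], float]) -> str:
    """Pick the action that maximizes expected utility under the belief.

    The belief (subjective probabilities) influences nothing except
    which action comes out on top - it plays no role outside the decision.
    """
    return max(actions, key=lambda a: expected_utility(a, belief, utility))

# Hypothetical numbers, purely for illustration.
belief = {"rain": 0.3, "dry": 0.7}
utility = {("umbrella", "rain"): 1.0, ("umbrella", "dry"): 0.6,
           ("no_umbrella", "rain"): 0.0, ("no_umbrella", "dry"): 1.0}

print(decide(["umbrella", "no_umbrella"], belief, utility))  # -> umbrella

Swap in any belief that ranks the actions the same way and the behaviour is
identical - which is one way of reading "no semantics when no decision
enters the picture".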