> > yet I still feel you dismiss the text-mining approach too glibly...
>
> No, but text mining requires a language model that learns while mining. You
> can't mine the text first.
Agreed ... and this gets into subtle points. Which aspects of the language
model need to be adapted while mining
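Even a toy version makes that concrete. In the sketch below (Python; the
corpus handling, scoring rule, and threshold are illustrative assumptions,
not anyone's actual system), the association statistics the miner consults
are updated by every sentence it processes, so there is no "mine the text
first" pass separate from the language model:

from collections import defaultdict
from itertools import combinations

class OnlineAssociationModel:
    """Word-association statistics updated incrementally, sentence by sentence."""

    def __init__(self):
        self.word_count = defaultdict(int)
        self.pair_count = defaultdict(int)
        self.sentences = 0

    def update(self, tokens):
        # The model adapts to each sentence *as it is mined*.
        self.sentences += 1
        words = sorted(set(tokens))
        for w in words:
            self.word_count[w] += 1
        for a, b in combinations(words, 2):
            self.pair_count[(a, b)] += 1

    def score(self, a, b):
        # Crude pointwise-mutual-information-style association strength.
        pair = self.pair_count.get(tuple(sorted((a, b))), 0)
        if pair == 0:
            return 0.0
        return pair * self.sentences / (self.word_count[a] * self.word_count[b])

def mine(sentences, threshold=2.0):
    """One pass: every extraction decision uses the model learned so far."""
    model = OnlineAssociationModel()
    mined = []
    for sentence in sentences:
        tokens = sentence.lower().split()
        model.update(tokens)  # learn while mining -- no separate first pass
        for a, b in combinations(sorted(set(tokens)), 2):
            s = model.score(a, b)
            if s >= threshold:
                mined.append((a, b, s))
    return mined

A real system would adapt far more than co-occurrence counts -- senses,
grammar, topic -- which is exactly the open question above.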
--- Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > It could be done with a simple chain of word associations mined from a
> > text corpus: alert -> coffee -> caffeine -> theobromine -> chocolate.
>
> That approach yields way, way, way too much noise. Try it.
I agree that it does to the point
> But that is not the problem. The problem is that the reasoning would be
> faulty, eve
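Both points -- the noise and the faulty chaining -- show up even in a toy
version (Python; the association table and weights are made-up stand-ins
for statistics mined from a real corpus):

import heapq

# Hypothetical association strengths, standing in for corpus statistics.
assoc = {
    "alert":       {"coffee": 0.9, "guard": 0.8, "news": 0.7},
    "coffee":      {"caffeine": 0.9, "cup": 0.8, "morning": 0.7},
    "caffeine":    {"theobromine": 0.6, "stimulant": 0.8},
    "theobromine": {"chocolate": 0.9},
    "news":        {"chocolate": 0.2},   # spurious co-occurrence: noise
}

def chains(start, goal, max_hops=4):
    """Yield association chains from start to goal, strongest first."""
    frontier = [(-1.0, [start])]          # (negated strength, path)
    while frontier:
        neg, path = heapq.heappop(frontier)
        if path[-1] == goal:
            yield -neg, path
            continue
        if len(path) > max_hops:
            continue
        for nxt, w in assoc.get(path[-1], {}).items():
            if nxt not in path:
                heapq.heappush(frontier, (neg * w, path + [nxt]))

for strength, path in chains("alert", "chocolate"):
    print("%.2f  %s" % (strength, " -> ".join(path)))

The junk chain (alert -> news -> chocolate) falls out of the same machinery
as the plausible one, and over a real corpus the junk chains vastly
outnumber it; worse, nothing in either chain says whether the links are
causal, in which direction, or at what dose.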
--- Ben Goertzel <[EMAIL PROTECTED]> wrote:
> For instance, suppose you ask an AI if chocolate makes a person more
> alert.
>
> It might read one article saying that coffee makes people more alert,
> and another article saying that chocolate contains theobromine, and another
> article saying that
Ben, thank you for clarifying.
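The failure mode is easy to reproduce (Python; the triples and relation
names are hypothetical stand-ins for facts extracted from separate
articles):

from collections import deque

# Each triple stands in for a fact extracted from a different article;
# each is individually true, yet the composed conclusion does not follow.
facts = [
    ("coffee", "makes-more", "alert"),         # article 1
    ("coffee", "contains", "caffeine"),        # article 2
    ("theobromine", "similar-to", "caffeine"), # article 3
    ("chocolate", "contains", "theobromine"),  # article 4
]

def naive_chain(subject, obj):
    """Breadth-first search that treats every relation as freely composable
    and symmetric -- the unsound step that yields the faulty conclusion."""
    graph = {}
    for s, _, o in facts:
        graph.setdefault(s, []).append(o)
        graph.setdefault(o, []).append(s)
    queue = deque([[subject]])
    while queue:
        path = queue.popleft()
        if path[-1] == obj:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in path:
                queue.append(path + [nxt])
    return None

print(naive_chain("chocolate", "alert"))
# ['chocolate', 'theobromine', 'caffeine', 'coffee', 'alert'] -- but
# "chocolate makes a person more alert" does not follow from the four
# facts: 'contains' and 'similar-to' do not compose into 'makes-more'.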
Ben Goertzel wrote:
> I like the marketing technique at this mailing list. AGI "developers"
> are claiming that they are building "AGI" but they are just building
> narrow programs.
Personally I am working on both -- the former for R&D purposes and the
latter to make a living ;-p
> The term "artificial general i
On Wed, Feb 27, 2008 at 3:41 PM, a <[EMAIL PROTECTED]> wrote:
>
> Intelligence requires human-like perception.
That's a very bold claim. What's the reasoning behind it?
-Jey Kottalam
On 27/02/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I don't buy that my body plays a significant role in thinking about, for
> instance, mathematics. I bet that my brain in a vat could think about
> math just as well or better than my embodied brain.
>
> Of course my brain is what it
I do not doubt that body-thinking exists and is important; my doubt is that
it is in any AGI-useful sense "the largest part" of thinking...
On Wed, Feb 27, 2008 at 1:07 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Ben: What evidence do you have that this [body thinking] is the "largest
> part" .
J Storrs Hall, PhD wrote:
On Wednesday 27 February 2008 12:22:30 pm, Richard Loosemore wrote:
> Mike Tintner wrote:
> > As Ben said, it's something like "multisensory integrative
> > consciousness" - i.e. you track a subject/scene with all senses
> > simultaneously and integratedly.
>
> Conventional approaches to AI may
Ben: What evidence do you have that this [body thinking] is the "largest
part" ... it does not feel at all that way to me, as a
subjectively-experiencing human; and I know of no evidence in this regard

Like I said, I'm at the start here - and this is going against thousands
of years of literat
> Well, what I and embodied cognitive science are trying to formulate
> properly, both philosophically and scientifically, is why:
>
> a) common sense consciousness is the brain-AND-body thinking on several
> levels simultaneously about any given subject...
I don't buy that my body plays a si
> d) you keep repeating the illusion that evolution did NOT achieve the
> airplane and other machines - oh yes, it did - your central illusion here is
> that machines are independent species. They're not. They are EXTENSIONS of
> human beings, and don't work without human beings attached. Mani
Mike Tintner wrote:
Richard:
> Mike Tintner wrote:
> > No one in AGI is aiming for common sense consciousness, are they?
>
> Inasmuch as I understand what you mean by that, yes of course.
> Both common sense and consciousness.
As Ben said, it's something like "multisensory integrative
consciousness"
> I'm not talking about inference control here -- I assume that inference
> control is done in a proper way, and there will still be a problem. You
> seem to assume that all knowledge = what is explicitly stated in online
> texts. So you deny that there is a large body of implicit knowledge other
Ben: MT: >> You guys seem to think this - true common sense consciousness -
can all be cracked in a year or two. I think there are probably a lot of
good reasons - and therefore major creative problems - why it took a
billion years of evolution to achieve.

Ben: I'm not trying to emulate th
On 2/27/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> YKY:
>
> I thought you were talking about the extraction of information that
> is explicitly stated in online text.
>
> Of course, inference is a separate process (though it may also play a
> role in direct information extraction).
>
> I don't
What I tried to do with robocore is have a number of subsystems dedicated
to particular modalities such as vision, touch, hearing, smell and so on.
Each of these modalities operates in a semi-independent and self-organised
way, and its function is to create stable abstractions from the raw data
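A minimal sketch of that arrangement (Python; the class names and the use
of exponential smoothing as the "stable abstraction" step are illustrative
guesses, not the actual robocore code):

class ModalitySubsystem:
    """Semi-independent processor for one modality (vision, touch, ...)."""

    def __init__(self, name, dim, smoothing=0.9):
        self.name = name
        self.smoothing = smoothing
        self.state = [0.0] * dim          # the current stable abstraction

    def observe(self, raw):
        # Exponential smoothing as a trivial stand-in for self-organised
        # abstraction: transient noise averages away, persistent structure
        # in the raw data survives.
        self.state = [self.smoothing * s + (1.0 - self.smoothing) * x
                      for s, x in zip(self.state, raw)]
        return list(self.state)

class MultisensoryIntegrator:
    """Combines each modality's abstraction into a single scene vector."""

    def __init__(self, subsystems):
        self.subsystems = {sub.name: sub for sub in subsystems}

    def step(self, raw_inputs):
        # raw_inputs: {modality name -> raw feature list}
        scene = []
        for name, raw in raw_inputs.items():
            scene.extend(self.subsystems[name].observe(raw))
        return scene

core = MultisensoryIntegrator([ModalitySubsystem("vision", dim=4),
                               ModalitySubsystem("touch", dim=2)])
scene = core.step({"vision": [0.2, 0.8, 0.1, 0.0], "touch": [1.0, 0.3]})

The point of keeping the subsystems semi-independent is that each can
settle on stable features of its own modality before anything downstream
tries to integrate them.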