lieve
that y'all don't get this" OR "I'm sure that science will eventually catch
on to my brilliant ideas"), people here would enjoy your keeping it on-list.
Thanks again.
Mark
- Original Message -
From: "Mike Tintner" <[EMAIL PROTECTED]>
Mike Tintner wrote:
Richard,
I hope you understand - and I think you do - unlike your good friend -
that it's actually a lot easier to say nothing. Harsh as I may sound, I
was trying to be constructive.
I hear your sentiment (and welcome it), but if you really believe you
were not being ha
Richard,
I hope you understand - and I think you do - unlike your good friend - that
it's actually a lot easier to say nothing. Harsh as I may sound, I was
trying to be constructive.
I suggest that you cannot expect your reader to make allowances for you -
your ideas have to be clearly stat
>
> Meet me halfway here and I am always willing to expand on anything I
> have written.
>
One must be fully in touch with Global-Local Disconnect (GLD) to get the
gist of the paper.
john
Mike Tintner wrote:
Richard,
I can't swear that I did read it. I read a paper of more or less exactly
that length some time ago and do remember the Neats vs Scruffies bit.
Here's why I would not have made an effort to remember the rest - and
this is consistent with what you do mention b
Saying your system is a "complex system" doesn't constitute a creative
idea. What's the big deal here? Why is your system truly new and
different? Why will it solve any of the unsolved problems of AGI? Where's
the beef? And what on earth does the thing do? Site visitors & investors
will want to kn
Richard,
I can't swear that I did read it. I read a paper of more or less exactly
that length some time ago and do remember the Neats vs Scruffies bit.
Here's why I would not have made an effort to remember the rest - and this
is consistent with what you do mention briefly here from time to
Mike Tintner wrote:
Richard,
Again, reread me precisely.
Saying your system is a "complex system" doesn't constitute a creative
idea. What's the big deal here? Why is your system truly new and
different? Why will it solve any of the unsolved problems of AGI?
Where's the beef? And what on ear
Richard,
Again, reread me precisely.
Saying your system is a "complex system" doesn't constitute a creative idea.
What's the big deal here? Why is your system truly new and different? Why
will it solve any of the unsolved problems of AGI? Where's the beef? And
what on earth does the thing do?
Mike Tintner wrote:
Richard: I already did publish a paper doing exactly that ... haven't
you read it?
Yep. And I'm still mystified. I should have added that I have a vague
idea of what you mean by complex system and its newness, but no idea of
why it will solve any unsolved problem of AGI,
On 31/03/2008, Mark Waser <[EMAIL PROTECTED]> wrote:
>
> Did you get the fact that once you generalize your idea enough, we're
> all in complete agreement -- but that *a lot* of your specific facts are
> just plain wrong (to wit -- the phrase "*vision isn't just saccade-ing.
> The retina does
>Notice how quickly the image changed. That's because you did it by
>manipulating references rather than by moving around enough bits to
>represent an image of one or the other kind of baseball.
The human mind does not manipulate images pixel by pixel, nor even store
pixels. The mind uses feature ext
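(A minimal sketch of the reference-versus-bits point, with buffer sizes chosen
only for illustration: rebinding a name to a different buffer costs almost
nothing, while copying the underlying pixel data scales with the image.)

import time

# Two "images" held as raw pixel buffers (sizes are illustrative only).
hardball = bytearray(b"\x01" * (1024 * 1024))
softball = bytearray(b"\x02" * (1024 * 1024))

# Manipulating a reference: the "current image" simply points elsewhere.
start = time.perf_counter()
current_image = hardball
current_image = softball            # swap by reference; no pixels move
ref_seconds = time.perf_counter() - start

# Moving the bits: every byte of pixel data is copied into a new buffer.
start = time.perf_counter()
current_image = bytes(softball)     # full copy of the pixel data
copy_seconds = time.perf_counter() - start

print(f"reference swap: {ref_seconds:.6f}s  full copy: {copy_seconds:.6f}s")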
bandwidth and little in the way of subsequent
processing costs.
- Original Message -
*From:* Mike Tintner <mailto:[EMAIL PROTECTED]>
*To:* agi@v2.listbox.com <mailto:agi@v2.listbox.com>
*Sent:* Sunday, March 30, 2008 4:02 PM
*Subject:* Re: [agi] Symbols
what you accuse us of
"not getting" or "not doing" before making further accusations.
Mark
- Original Message -
From: Mike Tintner
To: agi@v2.listbox.com
Sent: Monday, March 31, 2008 11:33 AM
Subject: Re: [agi] Symbols
I was not and am not arguing
Richard: I already did publish a paper doing exactly that ... haven't you
read it?
Yep. And I'm still mystified. I should have added that I have a vague idea
of what you mean by complex system and its newness, but no idea of why it
will solve any unsolved problem of AGI, and absolutely no id
Mike Tintner wrote:
Richard: What *exactly* do you mean by "an AGI must be able to see in
wholes"? My point is that you cannot make criticisms at that level of
vagueness.
I'll give the detailed explanation that I think you're looking for,
within a few days.
P.S. Maybe then you'll be able t
Richard: What *exactly* do you mean by "an AGI must be able to see in
wholes"? My point is that you cannot make criticisms at that level of
vagueness.
I'll give the detailed explanation that I think you're looking for, within
a few days.
P.S. Maybe then you'll be able to return the favour,
Mike Tintner wrote:
I was not and am not arguing that anything is impossible. By definition
- for me - if the brain can do it, a computer or some kind of
machine should be able to do it eventually.
But you have to start by recognizing what neither you nor anyone else is
doing - that an AGI mu
le telling that consciousness that the picture is
what it actually sees?
- Original Message -
From: Mike Tintner
To: agi@v2.listbox.com
Cc: dan michaels
Sent: Monday, March 31, 2008 5:56 AM
Subject: Re: [agi] Symbols
You're saying "I can
e of an AGI take a
natural language description and convert it into a picture and send it to the
main AGI consciousness while telling that consciousness that the picture is
what it actually sees?
- Original Message -
From: Mike Tintner
To: agi@v2.listbox.com
Cc: dan m
On Mon, Mar 31, 2008 at 10:56 AM, Mike Tintner wrote:
> You guys probably think this is all rather peripheral and unimportant - they
> don't teach this in AI courses, so it can't be important.
>
No. It means you're on the wrong list.
> But if you can't see things whole, then you can't see or c
You're saying "I can do it.." without explaining at all how. Sort of "a miracle
happens here".
Crucially, you're quite right that if you have a machine that replicates the
human eye and brain and how it processes the Cafe Wall illusion, then you will
still see the illusion.
The problem is you
From: Mike Tintner
> Well, guys, if the only difference between an image and, say, a symbolic -
verbal or mathematical or programming - description is bandwidth, perhaps
you'll be able to explain how you see the Cafe Wall illusion from a symbolic
description:
Sure! The Cafe Wall illusion
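(A minimal illustrative sketch, not the poster's actual answer: one way a
handful of symbolic parameters could be turned into a Cafe Wall-style grid.
The parameter names and the crude text rendering are assumptions; the illusion
itself only appears when a real raster version is viewed by a human visual
system.)

# The symbolic description: a few parameters, not pixels.
ROWS, COLS, TILE = 6, 12, 8      # rows of tiles, tiles per row, tile width
ROW_OFFSET = TILE // 2           # each row shifted by half a tile

def cafe_wall(rows=ROWS, cols=COLS, tile=TILE, offset=ROW_OFFSET):
    """Render alternating dark/light tiles with a 'mortar' line between rows."""
    lines = []
    for r in range(rows):
        shift = (r * offset) % (2 * tile)
        row = "".join(
            "#" if ((c + shift) // tile) % 2 == 0 else "."
            for c in range(cols * tile)
        )
        lines.append(row)
        lines.append("-" * (cols * tile))   # the grey mortar line
    return "\n".join(lines)

print(cafe_wall())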
MW:
MT:>> Why are images almost always more powerful than the corresponding
symbols? Why do they communicate so much faster?
Um . . . . dude . . . . it's just a bandwidth thing.
Vlad:Because of higher bandwidth?
Well, guys, if the only difference between an image and, say, a symbol
Derek Zahn wrote:
Related obliquely to the discussion about pattern discovery
algorithms What is a symbol?
I am not sure that I am using the words in this post in exactly the same
way they are normally used by cognitive scientists; to the extent that
causes confusion, I'm sorry. I'd ra
ttle bandwidth and little in the way of subsequent processing costs.
- Original Message -
From: Mike Tintner
To: agi@v2.listbox.com
Sent: Sunday, March 30, 2008 4:02 PM
Subject: Re: [agi] Symbols
In this & surrounding discussions, everyone seems deeply confused - &
On Mon, Mar 31, 2008 at 12:02 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
> We are all next to illiterate - and I mean, mind-blowingly ignorant - about
> how images function. What, for example, does an image of D.Z. or any person,
> do, that no amount of symbols - whether words, numbers, algebrai
In this & surrounding discussions, everyone seems deeply confused - & it's
nothing personal, so is our entire culture - about the difference between
SYMBOLS
1. "Derek Zahn" "curly hair" "big jaw" "intelligent eyes" . etc. etc
and
IMAGES
2. http://robot-club.com/teamtoad/nerc/h2-derek
From: Derek Zahn
Is anybody else interested in this kind of question, or am I simply inventing
issues that are not meaningful and useful?
The issues you bring up are key/core to a major part of AGI. Unfortunately,
they are also issues hashed over way too many times in a mailing list forma
Related obliquely to the discussion about pattern discovery algorithms What
is a symbol?
I am not sure that I am using the words in this post in exactly the same way
they are normally used by cognitive scientists; to the extent that causes
confusion, I'm sorry. I'd rather use words in the
Sunday, March 2, 2003, 11:58:19 PM, Ben Goertzel wrote:
BG> Ten computers of intelligence N, or one computer with intelligence 10*N ?
BG> Sure, the intelligence of the ten computers of intelligence N will be a
BG> little smarter than N, all together, because of cooperative effects
Not sure I
Ben Goertzel wrote:
Philip,
What would help me to understand this idea would be to understand in
more detail what kinds of rules you want to hardwire.
Do you want to hardwire, for instance, a rule like "Don't kill people."
And then give it rough rule-based definitions of "don't", "kill" an
Philip,
What would help me to understand this idea would be to understand in more
detail what kinds of rules you want to hardwire.
Do you want to hardwire, for instance, a rule like "Don't kill people."
And then give it rough rule-based definitions of "don't", "kill" and "people", a
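(A purely hypothetical sketch of what hardwiring such a rule, plus rough
rule-based definitions of its terms, could look like; the predicate names and
attribute checks are invented for illustration and are not from Novamente.)

# Rough rule-based "definitions" of the terms used by the hardwired rule.
definitions = {
    "person": lambda entity: entity.get("kind") == "human",
    "kill":   lambda action: action.get("effect") == "death",
}

def violates_dont_kill_people(action, target):
    """The hardwired rule: forbid any action that kills a person."""
    return definitions["kill"](action) and definitions["person"](target)

print(violates_dont_kill_people({"effect": "death"}, {"kind": "human"}))    # True
print(violates_dont_kill_people({"effect": "bruise"}, {"kind": "human"}))   # False

The brittleness shows up immediately: every edge case of "kill" and "person"
would have to be spelled out by hand, which is the kind of encoding the
experience-based approach is meant to avoid.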
> > Philip: I think an AGI needs other AGIs to relate to as a community so
> > that a community of learning develops with multiple perspectives
> > available. This I think is the only way that the accelerating
> > bootstrapping of AGIs can be handled with any possibility of bein
Ben,
> > Philip: I think an AGI needs other AGIs to relate to as a community so
> > that a community of learning develops with multiple perspectives
> > available. This I think is the only way that the accelerating
> > bootstrapping of AGIs can be handled with any possibility of being safe.
*
But the idea of having just one Novamente seems somewhat unrealistic and
quite risky to me.
If the Novamente design is going to enable bootstrapping as you plan then
your one Novamente is going to end up being very powerful. If you try to be
the gatekeeper to this o
Ben,
> I don't have a good argument on this point, just an intuition, based on
> the fact that generally speaking in narrow AI, inductive learning based
> rules based on a very broad range of experience, are much more robust
> than expert-encoded rules. The key is a broad range of expe
***
And anyway why would your pure
experience-based learning approach be any less likely to lead to subtly but
dangerously warped ethical systems? The trainers could make errors and a
Novamente's self-learning could be skewed by the limits of its experience and
the modelling it observes
Ben,
OK - so Novamente has a system for handling 'importance' already and there
is an importance updating function that feeds back to other aspects of
Attention Value. That's good in terms of Novamente having an internal
architecture capable of supporting an ethical system.
> You're
***
At the moment you have truth and attention
values attached to nodes and links. I'm wondering whether you need to have
a third numerical value type relating to 'importance'. Attention has a
temporal implication - it's intended to focus significant mental resources on a
key issue in th
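(A hypothetical sketch, not the actual Novamente structures or API: truth and
attention values attached to a node, plus a possible third 'importance' value,
with attention decaying over time while importance stays put unless
deliberately revised.)

from dataclasses import dataclass

@dataclass
class AtomValues:
    truth: float        # strength of belief attached to a node or link
    attention: float    # short-term allocation of mental resources
    importance: float   # possible third value: longer-term weight

def tick(values: AtomValues, stimulus: float = 0.0, decay: float = 0.9) -> None:
    """Attention is temporal: it decays each cycle unless restimulated;
    importance changes only by deliberate revision."""
    values.attention = values.attention * decay + stimulus

node = AtomValues(truth=0.8, attention=0.2, importance=0.95)
tick(node, stimulus=0.5)
print(node)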
Ben,
> One question is whether it's enough to create general pattern-recognition
> functionality, and let it deal with "seeking meaning for symbols" as a
> subcase of its general behavior. Or does one need to create special
> heuristics/algorithms/structures just for guiding this parti
In a message dated 2/27/2003 7:04:37 AM Mountain Standard Time, [EMAIL PROTECTED] writes:
In theory they could be encoded, but this would be a lot
harder than formally encoding, say, English syntax, which has not yet been
done with full success.
I'll say. It's difficult enough to be able to repres
In a message dated 2/26/2003 9:47:58 PM Mountain Standard Time, [EMAIL PROTECTED] writes:
Human children will learn that certain sound patterns are associated
with patterned human behaviour. So very soon (plus or minus one
year) children will start to accumulate awareness of words that they
kn
Hi,
Philip wrote:
> But whichever way it goes it seems to me that it would be a useful thing
> to equip baby AGIs with the urge and capacity to seek meaning for
> symbols/simple for apparently important behaviours/environmental
> patterns.
Yes, I think this is a useful and important thing, indee
From time to time people on the list have commented that this or that
concept is too abstract for an infant AGI to comprehend.
I'm wondering whether this is the wrong way to look at the issue. I think
there's a multi-way relationship between symbols, abstract concepts
and experiential learnin