On 10/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> All understood. Remember, though, that the original reason for talking
> about GoL was the question: Can there ever be a scientific theory that
> predicts all the "interesting creatures" given only the rules?
>
> The question of getting s
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> In order for your statement to be true, everybody would have to have exactly
> the same word distribution.
I meant the true (unknown) distribution, not the distribution as modeled by
the speaker and listener. But you are right that this difference mak
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> I'll repeat again since you don't seem to be paying attention to what I'm
> saying -- "The determination of whether a given action is friendly or
> ethical or not is certainly complicated but the base principles are actually
> pretty darn simple."
The
My mistake --- the previous email was meant to be private, though I
was too tired to remember that I shouldn't use "reply". :-(
Anyway, I don't mind sharing this paper, but please don't post it on the Web.
Pei
On 10/4/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> Mike,
>
> Attached is the paper (fo
Vladimir Nesov wrote:
Richard,
It's a question of notation. Yes, you can sometimes formulate
difficult problems succinctly. GoL is just another formalism in which
it's possible. What does it have to do with anything?
It has to do with the argument in my paper.
Richard Loosemore
On 10/4/07
Mike Dougherty wrote:
On 10/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Do it then. You can start with interesting=cyclic.
should GoL gliders be considered cyclic?
I personally think the candidate-AGI that finds a glider to be similar
to a local state of cells from N iterations earlie
Mike,
I think the concept of image schema is a very good one.
Among my many computer drawings are ones showing multiple simplified
drawings of events that are different, but similar at different semantic
levels, for the purpose of helping me understand how a system can naturally
extract appropriate gen
Richard,
It's a question of notation. Yes, you can sometimes formulate
difficult problems succinctly. GoL is just another formalism in which
it's possible. What does it have to do with anything?
On 10/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> J Storrs Hall, PhD wrote:
> > On Thursday 0
Josh,
Again a good reply. So it appears the problem is they don't have good
automatic learning of semantics.
But, of course, that's virtually impossible to do in small systems except,
perhaps, for trivial domains. It becomes much easier in tera-machines.
So if my interpretation of what you ar
On 10/4/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> We can't build a system that learns as fast as a 1-year-old just now. Which is
> our most likely next step: (a) A system that does learn like a 1-year-old, or
> (b) a system that can learn 1000 times as fast as an adult?
>
> Following Moor
In order for your statement to be true, everybody would have to have exactly
the same word distribution.
And if you're talking about written text, why are you talking about mouths
and ears?
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Thursday, October 0
On 10/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Do it then. You can start with interesting=cyclic.
should GoL gliders be considered cyclic?
I personally think the candidate-AGI that finds a glider to be similar
to a local state of cells from N iterations earlier to be particularly
ast
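(A minimal sketch, not from this thread, of what "interesting = cyclic" could mean operationally and why a glider only qualifies if translation is allowed: run Life forward and test whether the pattern recurs up to translation. The coordinates, helper names, and period bound below are illustrative assumptions.)

from collections import Counter

def life_step(cells):
    # One Game of Life generation; cells is a set of live (x, y) coordinates.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

def normalize(cells):
    # Translate the pattern so its bounding box starts at the origin.
    if not cells:
        return frozenset()
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def period_up_to_translation(cells, max_period=30):
    # Smallest N at which the pattern recurs (possibly translated), or None.
    start = normalize(cells)
    current = set(cells)
    for n in range(1, max_period + 1):
        current = life_step(current)
        if normalize(current) == start:
            return n
    return None

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(period_up_to_translation(glider))  # -> 4: cyclic once translation is ignored

So on a fixed grid a glider never repeats in place, but it does repeat up to translation every 4 steps; presumably that is the similarity to "a local state of cells from N iterations earlier" being pointed at.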
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> Matt Mahoney pontificated:
> > The probability distribution of language
> > coming out through the mouth is the same as the distribution coming in
> > through
> > the ears.
>
> Wrong.
Could you explain how they differ and why it would matter? Rememb
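(One hedged way to make the disagreement concrete, a sketch of mine rather than anything posted here: estimate a word distribution from a sample of text a person produces and from a sample they read, then measure the gap with a smoothed KL divergence. The two corpora are placeholders and the smoothing constant is an arbitrary choice.)

import math
from collections import Counter

def word_distribution(text):
    # Relative word frequencies from a whitespace-tokenized sample.
    words = text.lower().split()
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def kl_divergence(p, q, eps=1e-6):
    # Approximate D(P || Q) with additive smoothing so unseen words stay finite.
    vocab = set(p) | set(q)
    return sum(p.get(w, 0.0) * math.log((p.get(w, 0.0) + eps) / (q.get(w, 0.0) + eps))
               for w in vocab if p.get(w, 0.0) > 0.0)

produced = word_distribution("the cat sat on the mat")   # placeholder "mouth" sample
heard    = word_distribution("the dog sat on the sofa")  # placeholder "ear" sample
print(kl_divergence(produced, heard))  # near 0 only if the two distributions agree

Whether this stays near zero across real speakers and sample sizes is exactly the empirical question the two of you are arguing about.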
Let me answer with an anecdote. I was just in the shop playing with some small
robot motors and I needed a punch to remove a pin holding a gearbox onto one
of them. I didn't have a purpose-made punch, so I cast around in the toolbox
until Aha! an object close enough to use. (It was a small ratta
On 10/4/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> On Thursday 04 October 2007 11:52:01 am, Vladimir Nesov wrote:
> > Analogy-making can be reformulated as other problems, so even if it's
> > not named this way it's still associated with many approaches to
> > learning. Recalling relevant
Edward: You talk about the Cohen article I quoted as perhaps leading to a major
paradigm shift, but actually much of its central thrust is similar to
ideas that have been around for decades. Cohen's gists are surprisingly
similar to the scripts Schank was talking about circa 1980.
Josh: And
Well, the two papers have similar central ideas, though the second one
is much longer and also reflects Hofstadter's opinions --- so it is
not free. ;-)
I'll send you (and the others who have asked) a softcopy in private email.
Pei
On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
If permissible, I too would be interested in the JoETAI version of your
paper.
Thanks,
Mike Ramsey
On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
> In response to Pei Wang's post of 10/4/2007 3:13 PM
>
> Thanks for giving us a pointer to such inside info.
>
> Googling for the article
In response to Pei Wang's post of 10/4/2007 3:13 PM
Thanks for giving us a pointer to such inside info.
Googling for the article you listed I found
1. The Logic of Categorization, by Pei Wang at
http://nars.wang.googlepages.com/wang.categorization.pdf FOR FREE; and
2. A logic of
J Storrs Hall, PhD wrote:
On Thursday 04 October 2007 11:06:11 am, Richard Loosemore wrote:
As far as we can tell, GoL is an example of that class of system in
which we simply never will be able to produce a "theory" in which we
plug in the RULES of GoL, and get out a list of all the pattern
On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> Josh,
>
> (Talking of "breaking the small hardware mindset," thank god for the company
> with the largest hardware mindset -- or at least the largest physical
> embodiment of one-- Google. Without them I wouldn't have known what "FARG
Josh,
(Talking of "breaking the small hardware mindset," thank god for the
company with the largest hardware mindset -- or at least the largest
physical embodiment of one -- Google. Without them I wouldn't have known
what "FARG" meant, and would have had to either (1) read your valuable
response w
On Thursday 04 October 2007 01:57:22 pm, Edward W. Porter wrote:
> You talk about the Cohen article I quoted as perhaps leading to a major
> paradigm shift, but actually much of its central thrust is similar to
> ideas that have been around for decades. Cohen's gists are surprisingly
> similar t
In response to the below post from Mike Tintner of 10/4/2007 12:33 PM:
You talk about the Cohen article I quoted as perhaps leading to a major
paradigm shift, but actually much of its central thrust is similar to
ideas that have been around for decades. Cohen's gists are surprisingly
similar to
On Thursday 04 October 2007 10:56:59 am, Edward W. Porter wrote:
> You appear to know more on the subject of current analogy drawing research
> than me. So could you please explain to me what are the major current
> problems people are having in trying to figure out how to draw analogies
> using a str
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> > I mean that ethics or friendliness is an algorithmically complex function,
> > like our legal system. It can't be simplified.
>
> The determination of whether a given action is friendly or ethical or not is
> certainly complicated but the base princ
On Thursday 04 October 2007 11:52:01 am, Vladimir Nesov wrote:
> Analogy-making can be reformulated as other problems, so even if it's
> not named this way it's still associated with many approaches to
> learning. Recalling relevant knowledge is about the same thing as
> analogy-making, and in life
On Thursday 04 October 2007 11:50:21 am, Bob Mottram wrote:
> To me this seems like elevating the status of nanotech to magic.
> Even given RSI and the ability of the AGI to manufacture new computing
> resources it doesn't seem clear to me how this would enable it to
> prevent other AGIs from also
On Thursday 04 October 2007 11:06:11 am, Richard Loosemore wrote:
> As far as we can tell, GoL is an example of that class of system in
> which we simply never will be able to produce a "theory" in which we
> plug in the RULES of GoL, and get out a list of all the patterns in GoL
> that are i
On 10/4/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
> To me this seems like elevating the status of nanotech to magic.
> Even given RSI and the ability of the AGI to manufacture new computing
> resources it doesn't seem clear to me how this would enable it to
> prevent other AGIs from also reaching
Edward P: I skimmed “LGIST: Learning Generalized Image Schemas for Transfer
Thrust D Architecture Report”, by Carole Beal and Paul Cohen at the USC
Information Sciences Institute. It was one of the PDFs listed on the web
link you sent me (at
http://eksl.isi.edu/files/papers/cohen_2006_1160084
On 10/4/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> Research in analogy-making is slow -- I can only think of Gentner and
> Hofstadter and their groups as major movers. We don't have a solid theory of
> analogy yet (structure-mapping to the contrary notwithstanding). It's clearly
> central,
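(For concreteness, a toy sketch of the structure-mapping idea mentioned above, far simpler than Gentner's actual SME and not anyone's method from this thread: treat analogy as finding the entity correspondence that preserves the most relations. The relation names, the solar-system/atom example, and the brute-force search are illustrative assumptions.)

from itertools import permutations

def best_mapping(base_facts, target_facts):
    # base_facts / target_facts: sets of (relation, arg1, arg2) tuples.
    base_entities = sorted({a for _, x, y in base_facts for a in (x, y)})
    target_entities = sorted({a for _, x, y in target_facts for a in (x, y)})
    best, best_score = None, -1
    # Brute force over correspondences: only feasible for tiny toy domains.
    for perm in permutations(target_entities, len(base_entities)):
        mapping = dict(zip(base_entities, perm))
        score = sum((rel, mapping[x], mapping[y]) in target_facts
                    for rel, x, y in base_facts)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

# The textbook solar-system / atom analogy, reduced to two relations.
solar = {("attracts", "sun", "planet"), ("revolves_around", "planet", "sun")}
atom  = {("attracts", "nucleus", "electron"), ("revolves_around", "electron", "nucleus")}
print(best_mapping(solar, atom))  # -> ({'planet': 'electron', 'sun': 'nucleus'}, 2)

The hard parts that make analogy research slow are exactly what this leaves out: choosing the representation, scoring systematic relational structure over isolated matches, and searching without brute force.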
To me this seems like elevating the status of nanotech to magic.
Even given RSI and the ability of the AGI to manufacture new computing
resources it doesn't seem clear to me how this would enable it to
prevent other AGIs from also reaching RSI capability. Presumably
"lesser techniques" means blac
On Thursday 04 October 2007 10:42:46 am, Mike Tintner wrote:
> ... I find
> no general sense of the need for a major paradigm shift. It should be
> obvious that a successful AGI will transform and revolutionize existing
> computational paradigms ...
I find it difficult to imagine a developmen
Josh, in your 10/4/2007 9:57 AM post you wrote:
Research in analogy-making is slow -- I can only think of Gentner and
Hofstadter and their groups as major movers. We don't have a solid theory
of analogy yet (structure-mapping to the contrary notwithstanding). It's
clearly central, and so I don't
Mike Tintner wrote:
>My impression is everyone's clinging to
>existing paradigms, even though they obviously don't work for AGI as opposed
>to AI. By all means disabuse me and point to someone contemplating such a
>shift.
>
You just pointed us to one (!): Paul Cohen (see
http://www.isi.edu/~
In my complex systems paper I make extensive use of John Horton Conway's
little cellular automaton called Game of Life (GoL), but two people have
made objections to this on the grounds that GoL can be used to implement
a Turing Machine, and is therefore an example of me not knowing what I
am
Bob Mottram wrote:
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
This seems very vague. I would suggest that if
Josh: One main reason I support the development of AGI as a serious subfield
is not
that I think any specific approach here is likely to work (even mine), but
that there is a willingness to experiment and a tolerance for new and
odd-sounding ideas that spells a renaissance of science in AI.
Well
Response to Mike Tintner's Thu 10/4/2007 7:36 AM post:
I skimmed "LGIST: Learning Generalized Image Schemas for Transfer Thrust D
Architecture Report", by Carole Beal and Paul Cohen at the USC Information
Sciences Institute. It was one of the PDFs listed on the web link you
sent me (at http://ek
On Wednesday 03 October 2007 09:37:58 pm, Mike Tintner wrote:
> I disagree also re how much has been done. I don't think AGI - correct me -
has solved a single creative problem - e.g. creativity - unprogrammed
adaptivity - drawing analogies - visual object recognition - NLP - concepts -
creat
Matt Mahoney pontificated:
The probability distribution of language
coming out through the mouth is the same as the distribution coming in
through
the ears.
Wrong.
My goal is not to compress text but to be able to compute its probability
distribution. That problem is AI-hard.
Wrong again
> I mean that ethics or friendliness is an algorithmically complex function,
> like our legal system. It can't be simplified.
The determination of whether a given action is friendly or ethical or not is
certainly complicated but the base principles are actually pretty darn simple.
> However, I
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> As to exactly how, I don't know, but since the AGI is, by assumption,
> peaceful, friendly and non-violent, it will do it in a peaceful,
> friendly and non-violent manner.
This seems very vague. I would suggest that if there is no clea
Bob Mottram wrote:
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Linas Vepstas wrote:
Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would n
Another AGI project - some similarities to Ben's. (I was not however able to
play with my Wubble - perhaps you'll have better luck). Comments?
http://eksl.isi.edu/cgi-bin/page.cgi?page=project-jean.html
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Linas Vepstas wrote:
> > Um, why, exactly, are you assuming that the first one will be friendly?
> > The desire for self-preservation, by e.g. rooting out and exterminating
> > all (potentially unfriendly) competing AGI, would not be wha
Linas Vepstas wrote:
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote:
Second, You mention the 3-body problem in Newtonian mechanics. Although
I did not use it as such in the paper, this is my poster child of a
partial complex system. I often cite the case of planetary system
Linas Vepstas wrote:
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
When the first AGI is built, its first actions will be to make sure that
nobody is trying to build a dangerous, unfriendly AGI.
Yes, OK, granted, self-preservation is a reasonable character trait.
After