by them, because the brain has learned from prior inferencing
patterns to expect such activations to arise given the task being performed.
Ed Porter
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member
the switcheroo. This is true even when the new
construction worker was obviously, to anyone who looked with any care, of a
different sex.
So probabilistic reasoning is often involved in thinking about identity.
Ed Porter
-Original Message-
From: Harry Chesley [mailto:ches
and they funnel all
reasoning through a single narrowly focused process that smushes different
inputs to produce output that can appear reasonable in some cases but is
really flat and lacks any structure for complex reasoning.
Ed Porter
This is certainly not true of a Novamente-type system
dishonest as you become when you start losing an
argument.
Ed Porter
-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Friday, December 26, 2008 1:03 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke was Building a
machine
being mixed a
bit.
Gotta get back to xmas! Yuletide stuff to you.
===ED's reply===
Agreed.
Ed Porter
-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au]
Sent: Tuesday, December 23, 2008 7:55 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might
probably have to accomplish some of the same general
functions, such as automatic pattern learning, credit assignment, attention
control, etc.
Ed Porter
-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au]
Sent: Monday, December 22, 2008 11:36 PM
To: agi@v2
windbag.
Ed Porter
P.S. Your postscript is not sufficiently clear to provide much support for
your position.
P.P.S. You below
-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Tuesday, December 23, 2008 9:53 AM
To: agi@v2.listbox.com
Subject: Re
Richard,
Please describe some of the counterexamples that you can easily come up
with, that make a mockery of Tononi's conclusion.
Ed Porter
-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Monday, December 22, 2008 8:54 AM
To: agi@v2.listbox.com
Subject
-fertilization there that I
have heard people at such events describe the benefits of.
Ed Porter
-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au]
Sent: Monday, December 22, 2008 6:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke
in
particular.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:b...@goertzel.org]
Sent: Sunday, December 21, 2008 12:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke was Building a
machine that can learn from experience
I know
there is
something important I am missing.
I would appreciate it very much if you could tell me what it is that I am
missing.
Ed Porter
-Original Message-
From: Moshe Looks [mailto:madscie...@google.com]
Sent: Monday, December 15, 2008 1:33 PM
To: agi@v2.listbox.com
Subject
at. But it is not clear
to me that it is a win for a majority of the types of problems that the
human brain handles relatively well.
I would be interested in your thoughts (and those of any others on this
list) concerning the above.
Ed Porter
-Original Message-
From
An article related to how changes in the epigenome could affect learning
and memory (the subject which started this thread a week ago)
http://www.technologyreview.com/biomedicine/21801/
cannot retrieve
it due to damage to neural circuits, she adds.
-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 11, 2008 10:28 AM
To: 'agi@v2.listbox.com'
Subject: FW: [agi] Lamarck Lives!(?)
An article related to how changes in the epigenome
local changes to mitochondrial DNA near a synapse are involved. The
article does not appear to shed any light on this issue of how changes in
the expression of DNA would affect learning at the synapse level, where most
people think it occurs.
Ed Porter
-Original Message-
From: Richard
think the article failed to mention an important part of the theory of
what is going on.
Ed Porter
-Original Message-
From: Terren Suydam [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 03, 2008 12:16 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Lamarck Lives!(?)
Ed,
That's a good
information in synapses.
Ed Porter
-Original Message-
From: Terren Suydam [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 03, 2008 2:00 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Lamarck Lives!(?)
Ed,
Though it seems obvious that synapses are *involved* with memory
appearance. They could
also revert that particular epigenomic trait to what it had been in
a parent or grandparent. So they were able to change and un-change traits
that were inheritable.
So the answer is yes.
Ed Porter
-Original Message-
From: Richard Loosemore [mailto
of how changes in gene expression in a
neuron's nucleus would store memories, even given the knowledge that the
epigenome can store information.
If there is such an explanation, either now or in the future, I would
welcome hearing it.
Ed Porter
-Original Message-
From: Ben
imaginings of a drugged mind, arguably far
beyond the complexity of the observable universe, without requiring for its
representation more than an infinitesimal fraction of anything that could be
accurately called infinite.
Ed Porter
-Original Message-
From: Hector Zenil [mailto:[EMAIL
curiosity.)
Ed Porter
-Original Message-
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 02, 2008 4:15 PM
To: agi@v2.listbox.com
Subject: Re: RE: FW: [agi] A paper that actually does solve the problem
of consciousness
On Dec 2, 2008, at 8:31 AM, Ed Porter
is capable of conceiving of them, and of
seeing evidence that might suggest to some their existence, such as was
suggested to Einstein, who, I have been told, for many years believed in a
universe that was infinite in time.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL
of a lot
cheaper to build.
Ed Porter
.
But that in no way means your statements are correct descriptions of
external reality, as many of your statements would appear to claim to be.
And you have provided no evidence, other than drug induced experience within
your own mind, that they are.
Ed Porter
-Original Message
the probability of this at roughly one in ten or more, a
large enough possibility that it should, at least, be considered in
discussions of the future of AGI and the singularity.
Ed Porter.
-Original Message-
From: Bob Mottram [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 26, 2008 3:27
and if they are monitoring us --- because it would mean we
would be at the start of a rapid technological development that would mean
we could become much more equal with them --- making us either more valuable
--- or more threatening --- to them.
Ed Porter
-Original Message-
From: Aleksei Riikonen [mailto
Translate into English, please.
-Original Message-
From: Bob Mottram [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 26, 2008 1:35 PM
To: agi@v2.listbox.com
Subject: Re: [agi] If aliens are monitoring us, our development of AGI might
concern them
2008/11/26 Ed Porter [EMAIL
to better and more rapidly communicate visual
information to humans.
Ed Porter
-Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 26, 2008 11:03 AM
To: agi@v2.listbox.com
Subject: RE: [agi] The Future of AGI
Although a lot of AI-type research
of physics and the cosmological constants that inform
them may arise from the content of communication and computation being
performed at the psycho-atomic level, where mind manifests in quark
form!
What could possibly concern such a superior race!
On 11/26/08, Ed Porter [EMAIL PROTECTED] wrote
the above
quote's definition of mu.
Even questions that more clearly fall within the meaning of mu, such as
what is the sound of one hand clapping, can have some value for providing
an example of mu and warning us of the types of tricks language can play on
us.
Ed Porter
-Original
or an old thought --- if alien life forms are
actually monitoring us, our achieving AGI would substantially change our
relationship to them and may substantially change their behavior toward us
--- and that might just be a very important thought.
I'd be interested in your thoughts.
Ed
it, to see to what extent it
agrees with the above hypothesis.
Ed Porter
-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 23, 2008 10:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A paper that actually does solve the problem of
consciousness
Ego death
will
fare well --- as civilization, as we know it, is increasingly and more
rapidly distorted by the momentous changes that face us.
But I am 60 years old, so maybe my viewpoint is out of date.
Ed Porter
, Ed Porter [EMAIL PROTECTED] wrote:
Since I assume Ben, as well as a lot of the rest of us, want the AGI
movement to receive respectability in the academic and particularly in the
funding community, it is probably best that other than brain-science- or
AGI-focused discussions of the effects of drugs
by
the momentous changes that face us.
-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED]
Sent: Monday, November 24, 2008 1:30 PM
To: agi@v2.listbox.com
Subject: [agi] Entheogins, understainding
That there is so much other discussion of drug experiences on the web is one
of the reasons I think discussions of such experiences here should be limited
to discussions that attempt to add to the understanding of AGI or related
aspects of brain science.
-Original Message-
From: BillK
.
If you have communicable evidence to the contrary, please enlighten me.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Monday, November 24, 2008 2:57 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Entheogins, understainding the brain, and AGI
subjectively experience it, it is real in a certain sense that is very
important to us.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Monday, November 24, 2008 4:59 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Entheogins, understainding the brain, and AGI
Dennett
such sensations or by the chattering of the chatbot
most of us have inside our heads --- other than for the standard effects on
sensations and emotions one would routinely associate with being
entheogenned.
Ed Porter
-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Sunday
reality, including matter, can be considered to have information, and
information would not be orthogonal to matter.
I am sure others on this list could describe this better.
Ed Porter
-Original Message-
From: Harry Chesley [mailto:[EMAIL PROTECTED]
Sent: Friday, November 21, 2008 10:17
.
This would tend to agree with what you say in your post below.
Ed Porter
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 22, 2008 2:57 PM
To: agi@v2.listbox.com
Subject: RE: [agi] A paper that actually does solve the problem of
consciousness
You guys
surgery.
It may well have great potential for the early stages of the transhumanist
transformation.
http://www.technologyreview.com/biomedicine/21699/
Ed Porter
of these three parts in the
proper pronunciations.
It is a word that would be deeply appreciated by many at my local Unitarian
Church.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 20, 2008 7:11 PM
To: agi@v2.listbox.com
Subject: Re
mysteries, but probably enough to remove some of
them.
More interesting is your belief that computational systems *focus*
consciousness in particular ways. Can you be any more specific about this
belief?
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED
meditation-wise, and have achieved a sense of oneness with the cosmic
consciousness. If so, I tip my hat (and Colbert wag of the finger) to you.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 20, 2008 5:46 PM
To: agi@v2.listbox.com
I attended a two day seminar on brain science at MIT about six years ago in
which one of the papers was about neurogenesis in the hippocampus. The
speaker said he thought neurogenesis was necessary in the hippocampus because
hippocampus cells tend to die much more rapidly than most cells, and thus
repeated statement that our subjective experiences are
the most real things we have.
But just because they are subjective to us now, does not necessarily mean
that they are largely beyond the scope of human and AGI assisted science.
Ed Porter
1. Kurzweil has claimed we will be able to so
that either system you describe would have
anything approaching a human consciousness, and thus a human experience of
pain, since they lack the type of computation normally associated with
reports by humans of conscious experience.
Ed Porter
-Original Message-
From: Matt Mahoney [mailto
on this list might have meaningful additions to the definition of
what it is that we should be looking for when we search to understand
consciousness.
Ed Porter
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 5:39 PM
To: agi@v2
possible.
In fifty years, humankind will probably know for sure.
Ed Porter
-Original Message-
From: Trent Waddington [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 6:19 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve the problem
Waddington [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 7:36 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter [EMAIL PROTECTED] wrote:
I am talking about the type
,
simultaneity, and meaning.
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 8:46 PM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
--- On Mon, 11/17/08, Ed Porter
where on the zombie/non-zombie continuum
they reside.
Ed Porter
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 15, 2008 10:02 PM
To: agi@v2.listbox.com
Subject: RE: [agi] A paper that actually does solve the problem of
consciousness
appreciate the serious, careful, respectful tone of
Richard's paper, I disagree strongly with about two thirds of its basic
conclusions.
Ed Porter
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Friday, November 14, 2008 12:28 PM
To: agi@v2
-machine_n_140115.html
I thought Cassio's remarks were some of the most interesting in the article.
I am guessing all the financial AI he and Ben did at Web Mind made them
pretty knowledgeable about finance.
Ed Porter
Have only had time to skim it, but it appears to be a real resource for info on
an important subject. Ed Porter
-Original Message-
From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 01, 2008 6:20 AM
To: agi@v2.listbox.com
Subject: [agi] Whole Brain Emulation (WBE
of
humanity that we continue to use models which involve simplifications
derived from levels of organization higher than those described by what is
traditionally called physics.
Ed Porter
-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 30, 2008 5:09 PM
To: agi
Pei,
My understanding is that when you reason from data, you often want the
ability to extrapolate, which requires some sort of assumptions about the
type of mathematical model to be used. How do you deal with that in NARS?
Ed Porter
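[Editor's illustration, not a description of NARS: the point that extrapolation depends on the assumed model class can be shown with a toy sketch. The same three data points, fit under two different assumptions, agree on the observed range but diverge sharply when extrapolated.]

```python
import numpy as np

# Three observations, secretly generated by y = x**2.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 4.0])

linear = np.polyfit(x, y, 1)  # assume the data are linear
quad   = np.polyfit(x, y, 2)  # assume the data are quadratic

# Both models fit the observed interval tolerably, but at x = 10 the
# extrapolations differ by a factor of roughly five:
print(np.polyval(linear, 10.0))  # about 19.67
print(np.polyval(quad, 10.0))    # 100.0
```

The data alone cannot decide between the two extrapolations; only the prior assumption about the model class does.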
-Original Message-
From: Pei Wang [mailto:[EMAIL
don't
need any specific pre-selected set of priors.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 28, 2008 5:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Occam's Razor and its abuse
Au contraire, I suspect that the fact
forward to your confirmation, comments, or corrections
about what I have said in this email, and I thank you for your efforts to
enlighten me.
Ed Porter
-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 21, 2008 7:52 PM
To: agi@v2.listbox.com
of the problem, since
that might prove difficult enough, and since I was just trying to get some
rough feeling for whether or not node assemblies might offer substantial
gains in possible representational capability, before delving more deeply
into the subject.
Ed Porter
-Original Message
this formula is a proper
lower bound, in a little more detail than in the email in which you first
presented it, I would appreciate it very much; it would let me know how much
faith I should put in the above numerical results.
Ed Porter
-Original Message-
From: Vladimir Nesov
higher than the cost of a link.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 21, 2008 11:28 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?
makes sense, yep...
i guess my intuition
, but, as I said above, if your formula is
correct, I think it is quite important, and I would like to understand it.
And if it has limitations, or if it could use corrections, I would like to
know what they are.
Thank you again for your time.
Ed Porter
-Original Message-
From
cross talk between concept assemblies, might not be above that
which would be desired for such other purposes.
Ed Porter
-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 21, 2008 3:09 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Who is smart
be interested to see what results it would give.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 16, 2008 10:38 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?
Well, coding theory does let you
the population of
different attractors could have different timing or timing patterns, and if
the auto-associativity was sensitive to such timing, this problem could be
greatly reduced.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2008 4
Ben Goertzel wrote on Wednesday, October 15, 2008 7:57 PM
Is the other node assembly B fixed? So you're asking how many assemblies
of size S will have less than O nodes overlap with some specific node
assembly B with size S?
[Ed Porter]
Ben,
If I understand your above quoted
of the combinations in A is allowed to overlap by more than O with
any other combination in A makes things much more complex, and way beyond my
understanding.
Ed Porter
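[Editor's note: for the simpler sub-problem actually posed above — counting the assemblies of size S, drawn from N total nodes, that overlap one fixed assembly B of size S in fewer than O nodes — the count follows from standard hypergeometric-style reasoning: choose k overlap nodes from B and the remaining S-k from outside B, summed over k < O. A sketch:]

```python
from math import comb

def assemblies_below_overlap(N, S, O):
    # Number of S-node assemblies (from N nodes total) sharing fewer than
    # O nodes with a fixed S-node assembly B: pick k of the shared nodes
    # from B and the other S-k from the N-S nodes outside B, for k < O.
    # (math.comb returns 0 when S-k exceeds N-S, so infeasible terms vanish.)
    return sum(comb(S, k) * comb(N - S, S - k) for k in range(min(O, S + 1)))

print(assemblies_below_overlap(10, 3, 1))  # zero overlap only: C(7,3) = 35
print(assemblies_below_overlap(10, 3, 4))  # any overlap: C(10,3) = 120
```

The harder constraint discussed in the thread — that no two combinations in A may overlap each other by more than O — is a code-design problem and is not answered by this formula.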
-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 15, 2008 8:05 PM
To: agi@v2.listbox.com
with the
participation of individual nodes in different assemblies changing by
different amounts over time.
Also most of the neurobiological discussion I have read about cell
assemblies indicates that often an individual neuron can be in many
different cell assemblies at once.
Ed Porter
-Original
keyphrases to use in hunting down related theorems if you want to.
You are right that it's a nontrivial combinatorial problem
-- Ben
On Thu, Oct 16, 2008 at 11:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
Eric,
Actually I am looking for a function A =f(N,S,O).
If one leaves out the O, and merely
/ practical stuff...
On Thu, Oct 16, 2008 at 2:35 PM, Ed Porter [EMAIL PROTECTED] wrote:
Ben,
Thanks. I spent about an hour trying to understand this paper, and, from my
limited reading and understanding, it was not clear it would answer my
question, even if I took the time that would be necessary
don't see it
anymore. It was published as chapter 3 in Bar-Cohen, Y. [Ed.] Biomimetics:
Biologically Inspired Technologies, CRC Press, Boca Raton, FL (2006).
Ed Porter
-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 16, 2008 3:40 PM
To: agi@v2
me with a
few impressive examples.
Ed Porter
-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 16, 2008 3:40 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?
On Thu, Oct 16, 2008 at 7:01 PM, Ed Porter
can represent.
Ed Porter
the question, why don't
you send a brief polite email to geof at the email address on his web site.
He responded to a message I sent him with a brief, but helpful reply.
Ed Porter
-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Monday, September 22, 2008 5:57 PM
To: agi@v2
are linear, would
be extremely difficult to get much out of. Hinton uses a more multi-leveled
net, with non-linear nodes, and with a specified learning algorithm, and
actually shows the imagining or dreaming that the video mentioned below
only talks about.
Ed Porter
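[Editor's illustration of why a multi-layer net with purely linear nodes is hard to get much out of: any stack of linear layers collapses to a single linear map, so depth adds no representational power until a non-linearity is inserted between layers. A minimal numpy sketch:]

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))  # first "layer"
W2 = rng.standard_normal((2, 4))  # second "layer"
x = rng.standard_normal(3)

# Two stacked linear layers equal one linear layer with matrix W2 @ W1.
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# A non-linearity between layers (e.g. a ReLU) breaks this collapse,
# which is what makes multi-level nets like Hinton's worth stacking.
relu = lambda v: np.maximum(v, 0.0)
deep = W2 @ relu(W1 @ x)
```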
-Original Message-
From
. But it is
possible that it will be quite a while before we can develop electronics as
cheap and as power efficient as neurons. (Of course it is also possible that
electronic advances will happen so fast that there is no real reason for
using wetware.)
Ed Porter
-Original Message-
From: Mike
A 'Frankenrobot' with a biological brain
Meet Gordon, probably the world's first robot controlled exclusively by
living brain tissue.
Article at
http://www.breitbart.com/article.php?id=080813192458.ud84hj9hshow_article=1
. It suggests wetware supplemented hardware could, perhaps, prove to
be the earliest, cheapest path to computing platforms capable of human level
AGI at a reasonable cost and at reasonable levels of power consumption.
Ed Porter
-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED
magnification to see their detail.
Ed Porter
with a new argument
that people on the list I respected said was important, one indicating that
SFI complexity would make it impossible to design AGI from
Novamente/OpenCog-like elements, I would be interested in reading it.
Ed Porter
-Original Message-
From: David Clark [mailto:[EMAIL
it objectively.
If Richard were motivated more by trying to understand the truth, and less
by wanting to feel smarter than everyone else, I think he could contribute
much more to this list.
Ed Porter
P.S.
To be fair I have read much less of Richard's posts since the FOUR FEATURES
OF DESIGN
dishonest.
Richard, I think you are an intelligent guy. It is a shame your
intelligence is not freed from the childishness, and neediness, and
dishonesty of your ego.
Ed Porter
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Saturday, August 02, 2008 6:23 PM
is not found quickly, could cause even more attention to be
allocated to the search, pushing the search and its failure into clear
conscious awareness.
Ed Porter
-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED]
Sent: Monday, July 28, 2008 4:25 PM
To: agi@v2.listbox.com
, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term backward chaining.
Ed Porter
P.S. I would appreciate answers from Abram or anyone else on this list who
understands the question and has some knowledge of the subject
Jim, Sorry. Obviously I did not understand you. Ed Porter
-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 15, 2008 9:33 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE
BINDING PROBLEM?
Ed
Abram,
Thanks for the info. The concept that the only purpose of backward
chaining is to find appropriate forward-chaining paths is an important
clarification of my understanding.
Ed Porter
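[Editor's sketch, generic rather than Novamente-specific: forward chaining just fires if-then rules from known facts until nothing new can be derived; backward chaining would start from a goal and search for a rule path that forward chaining could then confirm.]

```python
# Minimal forward chainer over propositional if-then rules.
# Each rule is (set_of_premises, conclusion).
rules = [({"A"}, "B"), ({"B"}, "C"), ({"C", "A"}, "D")]
facts = {"A"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire any rule whose premises are all established facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['A', 'B', 'C', 'D']
```

Note the rules here are purely logical implications, with no causal or temporal reading, consistent with the clarification above.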
-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 15, 2008 11
what time I have to spend on this list conversing with people
who are more concerned about truth than trying to sound like they know more
than others, particularly when they don't.
Anyone who reads this thread will know who was being honest and reasonable
and who was not.
Ed Porter
-Original
ego, if you gave
reasons for your criticisms, and if you took the time to ensure your
criticism actually addressed what you are criticizing.
In your post immediately below you did neither.
Ed Porter
-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2008
Response to Abram Demski message of Monday, July 14, 2008 10:59 AM
Abram It is true that Mark Waser did not provide much
justification, but I
think he is right. The if-then rules involved in forward/backward
chaining do not need to be causal, or temporal.
[Ed Porter] I
Mark,
Still fails to deal with what I was discussing. I will leave it up to you
to figure out why.
Ed Porter
-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2008 10:54 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL
are likely to be able to purchase for one or two cents from the
back of a comic book.
If, however, the same rule were applied to me, I would be able to buy an AGI
as powerful as a Phantom Decoder Ring, worth at least a buck.
Ed Porter
-Original Message-
From: Richard Loosemore [mailto:[EMAIL
probability at all.
Interestingly, increasing or decreasing a node's activation tends to spread
?-activation seeking feedback on whether the increase or decrease in
probability is supported or contradicted by other information in the
network.
Ed Porter
-Original Message-
From: Abram Demski
and
presumably large semantic space. Unfortunately I was unable to understand
from your description how you claimed to have accomplished this.
Could you please clarify your description with regard to this point.
Ed Porter
-Original Message-
From: Jim Bromer [mailto:[EMAIL
Jim,
Thanks for your questions.
Ben Goertzel is coming out with a book on Novamente soon and I assume it
will have a lot of good things to say on the topics you have mentioned.
Below are some of my comments
Ed Porter
JIM BROMER WROTE===
Can you describe some
have unfairly criticized the
statements of another.
Ed Porter
==Wikipedia defines forward chaining as: ==
Forward chaining is one of the two main methods of reasoning when using
inference rules (in artificial intelligence). The other is backward
chaining.
Forward chaining
actually spend some time thinking about how to generalize Shruti.
If they, or their equivalent, are not in Ben's new Novamente book I may take
the trouble to write them up, but I am expecting a lot from Ben's new book.
I did not understand your last sentence.
Ed Porter
-Original
time to look at one small part of your post today...
Ed Porter wrote:
The "Does Mary own a book?" example, once the own relationship is
activated with Mary in the owner slot and a book in the owned-object
slot, spreads ?-activation, which asks if there are any related
relationships or instances
.
== MIKE'S RESPONSE=
Do you think the brain works by massive search in dealing with problems?
Chess - a top master may consider consciously v. roughly 150 moves in a
minute. Do you think his unconscious brain is considering a lot more? How
many, roughly in what time?
===ED PORTER =
Big Blue