Good points, I think, especially Jef's and Joel's. Yes, we might also
allow that, no matter how exotic their structures, there could be
self-interested, self-modelling systems. Perhaps there isn't
necessarily a limit to sentient inferential speeds, or to radically a
Michael makes a good point that it's intellectually permissible to
argue ad nauseam over side claims, but that it's still important to
reach a general consensus on an explicit description of the very idea,
one that would allow almost every literate person to elicit the concept
of the Singularity in the f
Bruce,
Thank you for clarifying further. If you ever have the opportunity, I
think you'd be particularly interested in the second chapter, "Truth
Mining," of the science-fiction novel /Diaspora/ by Greg Egan. Since
your ideas seem to run in a similar direction, perhaps you've already
read it. Indeed
Bruce,
I do, however, believe there is some overlap among these knowledge
workers, viz., some of their background knowledge. So I would take it
you mean that they're optimized in terms of niche, each worker having
something unique and valuable to offer at precisely the right moments,
without a si
Hi Bruce,
I've just been thinking about this idea of 'variable scope of
system-centricity'.
Your model probably indicates that there are too many islands of
redundant data. If we can somehow model sociological/economic
interactions better, we could try to create inter-organizational
systems that ac
Bruce LaDuke wrote:
In other words, a full understanding of questions and knowledge creation is
the step required to realize 'artificial knowledge creation,' which is
singularity. Within the construct of these interactions, 'artificial
intelligence' already exists as knowledge stored and recall
Samantha Atkins wrote:
Of course I got that. It was the "infinitely self-sufficient
environment of infinite layers of infinite
media" stuff that wasn't doing it for me.
That would be neither my problem nor yours.
Matt Mahoney wrote:
I thought the goal of cognitive psychology was to understand behavior.
Yes, and there's more to its application. Understanding behavior, and
nothing else, is predominantly a mind-to-world direction of fit. Using
that understanding, and thence fully exploiting cognitive psych
Update to second description:
*. . . infinite /distinctive/ layers . . .
(Sorry folks, that should do it for a while now until I'm schooled.)
Update to first description:
The ultimate aim of applied cognitive psychology is for one to be an
infinitely self-sufficient environment of infinite layers of infinite
media, where information from any non-empty set of media can be decoded
from any other non-empty set of media, including its own se
I'm searching for an intuitive description of the final limit of
applied cognitive psychology, one that could be expected to be
recognized by any type of intelligent mind that already had the luxury
of seeing that intelligence was arbitrary (and not divine (like how the
ancients mistakenly thought abou
Shane Legg wrote:
I still haven't given up on a negative proof; I've got a few more
ideas brewing. I'd also like to encourage anybody else to try
to come up with a negative proof, or for that matter, a positive
one if you think that's more likely.
Merely an offer for the idea brew. . .
Anothe
On 9/15/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
However, I think that the introduction of provability into the
discussion is largely a red herring. Current mathematics and science
do not suffice to prove rigorous, nontrivial theorems about the
behavior of complex systems in complex environme
On 9/14/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
However, I am not so sure this is the most sensible approach to
take. The details of my own personal Friendliness criterion are
not that important (nor are the details of *anyone*'s particular
Friendliness criterion). It may be more sensibl
Would this have been like the answer you were looking for, Steve?
http://sl4.org/archive/0512/13006.html
On 9/12/06, Stephen Reed <[EMAIL PROTECTED]> wrote:
such. How do you imagine a safe upload of all humanity
would unfold?
Other good exploitive persuasion will probably come along, but it could
be conventional that each person uploads into her infinitesimal
polymorphic hypercomputer and then, afterwa
Hi all,
Thanks for starting this list, Ben.
I have a concern that I believe is sufficiently related to the question
of the limits of FAI theory. It's a mathematical concern using the
very simple notion of reflexive identity, and it's prior to the
circularity of Bayesian map/territory probability s