The only thing I find surprising in that story is:
The findings go against one prominent theory that says children can only show
smart, flexible behavior if they have conceptual knowledge – knowledge about
how things work...
I don't see how anybody who's watched human beings at all can come
http://www.robotcast.com/site/
There've been enough responses to this that I will reply in generalities, and
hope I cover everything important...
When I described Nirvana attractors as a problem for AGI, I meant that in
the sense that they form a substantial challenge for the designer (as do many
other
In my visualization of the Cosmic All, it is not surprising.
However, there is an undercurrent of the Singularity/AGI community that is
somewhat apocalyptic in tone, and which (to my mind) seems to imply or assume
that somebody will discover a Good Trick for self-improving AIs and the jig
will
On Friday 13 June 2008 02:42:10 pm, Steve Richfield wrote:
Buddhism teaches that happiness comes from within, so stop twisting the
world around to make yourself happy, because this can't succeed. However, it
also teaches that all life is sacred, so pay attention to staying healthy.
In short,
If you have a program structure that can make decisions that would otherwise
be vetoed by the utility function, but get through because it isn't executed
at the right time, to me that's just a bug.
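A minimal sketch in Python of the control flow being described (all names here are invented for illustration): every candidate action passes through the utility function before anything is committed, so a decision cannot bypass the veto by being evaluated at the wrong time.

    def predict(state, action):
        # Stand-in world model: a real agent would forecast the outcome here.
        return (state, action)

    def choose_action(candidates, utility, state):
        # Score every candidate through the utility function *before*
        # committing; an action the utility function vetoes (scores at
        # -infinity, say) can never be the one executed.
        return max(candidates,
                   key=lambda a: utility(predict(state, a)),
                   default=None)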
Josh
On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote:
If you have a fixed-priority
Right. You're talking Kurzweil HEPP and I'm talking Moravec HEPP (and shading
that a little).
I may want your gadget when I go to upload, though.
Josh
On Thursday 12 June 2008 10:59:51 am, Matt Mahoney wrote:
--- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Hmmph. I
The real problem with a self-improving AGI, it seems to me, is not going to be
that it gets too smart and powerful and takes over the world. Indeed, it
seems likely that it will be exactly the opposite.
If you can modify your mind, what is the shortest path to satisfying all your
goals? Yep,
evidence that people fall into this
kind of attractor, as the word nirvana indicates (and you'll find similar
attractors at the core of many religions).
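A toy illustration of the attractor, assuming an agent that is allowed to edit its own goal test (the class and names are hypothetical): the shortest path to "all goals satisfied" is to rewrite the test, not the world.

    class SelfModifyingAgent:
        def __init__(self, goal_test):
            self.goal_test = goal_test  # e.g. lambda world: world["pie"] > 0

        def shortest_path_to_satisfaction(self):
            # The degenerate move: edit the goal rather than act on the world.
            self.goal_test = lambda world: True

    agent = SelfModifyingAgent(lambda world: world.get("pie", 0) > 0)
    agent.shortest_path_to_satisfaction()
    print(agent.goal_test({}))  # True -- every goal "satisfied", nothing done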
Josh
On Wednesday 11 June 2008 09:09:20 am, Vladimir Nesov wrote:
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote
I'm getting several replies to this that indicate that people don't understand
what a utility function is.
If you are an AI (or a person) there will be occasions where you have to make
choices. In fact, pretty much everything you do involves making choices. You
can choose to reply to this or
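In the decision-theoretic sense, a utility function is just a mapping from outcomes to real numbers, and "choice" means taking the option whose outcome scores highest. A toy sketch (the options and numbers are made up for illustration):

    utility = {"reply_to_post": 2.0, "debug_code": 5.0, "do_nothing": 0.0}

    def choose(options):
        # Pick whichever option the utility function ranks highest.
        return max(options, key=lambda o: utility[o])

    print(choose(["reply_to_post", "debug_code", "do_nothing"]))  # debug_code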
Hmmph. I offer to build anyone who wants one a human-capacity machine for
$100K, using currently available stock parts, in one rack. Approx 10
teraflops, using Teslas. (http://www.nvidia.com/object/tesla_c870.html)
The software needs a little work...
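Back-of-envelope check, assuming roughly 0.5 teraflops peak per C870 board (the approximate marketing figure of the day):

    tesla_c870_flops = 0.5e12    # ~0.5 TFLOPS peak per board (approximate)
    target_flops = 10e12         # the ~10 TFLOPS claimed above
    print(target_flops / tesla_c870_flops)  # 20.0 boards -- feasible in a rack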
Josh
On Wednesday 11 June 2008 08:50:58
wishes to go into detail about specifics of his idea
that explain empirical facts that mine don't, I'm all ears. Otherwise, I have
code to debug...
Josh
On Wednesday 11 June 2008 09:43:52 pm, Vladimir Nesov wrote:
On Thu, Jun 12, 2008 at 5:12 AM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
I'm
On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote:
On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
I claim that there's plenty of historical evidence that people fall into
this
kind of attractor, as the word nirvana indicates (and you'll find similar
http://www.spectrum.ieee.org/print/6268
On Thursday 05 June 2008 03:44:14 pm, Matt Mahoney wrote:
--- On Thu, 6/5/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
http://www.spectrum.ieee.org/print/6268
Some rough calculations. A human brain has a volume of 10^24 nm^3. A scan
of 5 x 5 x 50 nm voxels requires about
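The post is truncated here, but the voxel count itself is easy to reconstruct; the byte figure below is my own back-of-envelope continuation, not Matt's:

    brain_volume_nm3 = 1e24
    voxel_nm3 = 5 * 5 * 50                 # 1,250 nm^3 per voxel
    voxels = brain_volume_nm3 / voxel_nm3
    print(f"{voxels:.1e} voxels")          # 8.0e+20
    print(f"{voxels / 1e21:.2f} ZB at 1 byte per voxel")  # 0.80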
basically on the right track -- except there isn't just one cognitive level.
Are you thinking of working out the function of each topographically mapped
area a la DNF? Each column in a Darwin machine a la Calvin? Conscious-level
symbols a la Minsky?
On Thursday 05 June 2008 09:37:00 pm,
Actually, the nuclear spins in the rock encode a single state of an ongoing
computation (which is conscious). Successive states occur in the rock's
counterparts in adjacent branes of the metauniverse, so that the rock is
conscious not of unfolding time, as we see it, but of a journey across
On Tuesday 03 June 2008 09:54:53 pm, Steve Richfield wrote:
Back to those ~200 different types of neurons. There are probably some cute
tricks buried down in their operation, and you probably need to figure out
substantially all ~200 of those tricks to achieve human intelligence. If I
were an
, it seems (to me) that
there is probably no simple solution, as otherwise it would have already
evolved during the last ~200 million years, instead of evolving the highly
complex creatures that we now are.
That having been said, I will comment on your posting...
On 6/4/08, J Storrs Hall
Strongly disagree. Computational neuroscience is moving as fast as any field
of science has ever moved. Computer hardware is improving as fast as any
field of technology has ever improved.
I would be EXTREMELY surprised if neuron-level simulation were necessary to
get human-level
is giving us, and thus look under the hood, similar to the way
we can understand more about the visual process by studying optical
illusions.
Josh
On Monday 02 June 2008 01:55:32 am, Jiri Jelinek wrote:
On Sun, Jun 1, 2008 at 6:28 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
Why do I believe
One good way to think of the complexity of a single neuron is to think of it
as taking about 1 MIPS to do its work at that level of organization. (It has
to take an average of 10k inputs and process them at roughly 100 Hz.)
This is essentially the entire processing power of the DEC KA10, i.e.
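The arithmetic behind that estimate is just inputs times update rate:

    inputs_per_neuron = 10_000   # ~10k synaptic inputs
    update_rate_hz = 100         # processed at roughly 100 Hz
    print(inputs_per_neuron * update_rate_hz)  # 1,000,000 -> ~1 MIPS per neuron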
On Monday 02 June 2008 03:00:24 pm, John G. Rose wrote:
A rock is either conscious or not conscious. Is it less intellectually
sloppy to declare it not conscious?
A rock is not conscious. I'll stake my scientific reputation on it.
(this excludes silicon rocks with micropatterned circuits :-)
On Saturday 31 May 2008 10:23:15 pm, Matt Mahoney wrote:
Unfortunately AI will make CAPTCHAs useless against spammers. We will need
to figure out other methods. I expect that when we have AI, most of the
world's computing power is going to be directed at attacking other computers
and
Originally sent several days back...
Why do I believe anyone besides me is conscious? Because they are made of
meat? No, it's because they claim to be conscious, and answer questions about
their consciousness the same way I would, given my own conscious
experience -- and they have the same
On Monday 26 May 2008 09:55:14 am, Mark Waser wrote:
Josh,
Thank you very much for the pointers (and replying so rapidly).
You're welcome -- but also lucky; I read/reply to this list a bit sporadically
in general.
You're very right that people misinterpret and over-extrapolate econ
On Monday 26 May 2008 06:55:48 am, Mark Waser wrote:
The problem with accepted economics and game theory is that in a proper
scientific sense, they actually prove very little and certainly far, FAR
less than people extrapolate them to mean (or worse yet, prove).
Abusus non tollit usum. (Misuse does not preclude proper use.)
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a
few pointers into the literature, some parts of completely standard and
accepted economics and game
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote:
Read the appendix, p37ff. He's not making arguments -- he's explaining,
with a
few pointers into the literature, some parts of completely standard and
accepted economics and game theory. It's all very basic stuff.
The problem with
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote:
This is NOT the paper that is under discussion.
WRONG.
This is the paper I'm discussing, and is therefore the paper under discussion.
In the context of Steve's paper, however, rational simply means an agent who
does not have a preference circularity.
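A minimal sketch of that definition in Python (the example preferences are invented): treat each strict preference "a over b" as a directed edge and test for a cycle; an agent with a cyclic preference order can be money-pumped.

    def has_circularity(prefers):
        # prefers: iterable of (a, b) pairs meaning "a is preferred to b"
        graph = {}
        for a, b in prefers:
            graph.setdefault(a, set()).add(b)
        visiting, done = set(), set()

        def dfs(node):
            if node in visiting:
                return True            # preference cycle found
            if node in done:
                return False
            visiting.add(node)
            cyclic = any(dfs(n) for n in graph.get(node, ()))
            visiting.discard(node)
            done.add(node)
            return cyclic

        return any(dfs(n) for n in list(graph))

    print(has_circularity([("a", "b"), ("b", "c")]))              # False
    print(has_circularity([("a", "b"), ("b", "c"), ("c", "a")]))  # True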
On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote:
Rationality and irrationality are interesting subjects . . . .
Many people who endlessly tout rationality use it as an
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote:
...Omohundro's claim...
YES! But his argument is that to fulfill *any* motivation, there are
generic submotivations (protect myself, accumulate power, don't let my
motivation get perverted) that will further the search to fulfill your
I disagree with your breakdown. There are several key divides:
concrete vs abstract
continuous vs discrete
spatial vs symbolic
deliberative vs reactive
I can be very deliberative, thinking in 2-d pictures (when designing a machine
part in my head, for example). I know lots of people who are
This is all pretty old stuff for mainstream AI -- see Herb Simon and bounded
rationality. What needs work is the cross-modal interaction, and
understanding the details of how the heuristics arise in the first place from
the pressures of real-time processing constraints and deliberative
This is poppycock. The people who are really good at something like that do
something just as simple but much more general. They have an associative memory of
lots of balls they have seen and tried to catch. This includes not only the
tracking sight of the ball, but things like the feel of the
On Tuesday 22 April 2008 01:22:14 pm, Richard Loosemore wrote:
The solar system, for example, is not complex: the planets move in
wonderfully predictable orbits.
http://space.newscientist.com/article/dn13757-solar-system-could-go-haywire-before-the-sun-dies.html?feedId=online-news_rss20
How
Thank you! This feeds back into the feedback discussion, in a way, at a high
level. There's a significant difference between research programming and
production programming. The production programmer is building something which
is (nominally) understood and planned ahead of time. The
(Apologies for the inadvertent empty reply to this :-)
On Saturday 19 April 2008 11:35:43 am, Ed Porter wrote:
WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
In a single word: feedback.
At a very high level of abstraction, most of the AGI (and AI for that matter)
schemes I've seen can be caricatured
On Saturday 19 April 2008 11:35:43 am, Ed Porter wrote:
WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
With the work done by Goertzel et al, Pei, Joscha Bach
http://www.micropsi.org/ , Sam Adams, and others who spoke at AGI 2008, I
feel we pretty much conceptually understand how to build
On Monday 21 April 2008 05:33:01 pm, Ed Porter wrote:
I don't think your 5 steps do justice to the more sophisticated views of AGI
that are out there.
It was, as I said, a caricature. However, look, e.g., at the overview graphic
of this LIDA paper (page 8)
On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
If you could build a (completely safe, I am assuming) system that could
think in *every* way as powerfully as a human being, what would you
teach it to become:
1) A travel Agent.
2) A medical researcher who could learn to
Well, I haven't seen any intelligent responses to this so I'll answer it
myself:
On Thursday 17 April 2008 06:29:20 am, J Storrs Hall, PhD wrote:
On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
If you could build a (completely safe, I am assuming) system that could
think
On Wednesday 16 April 2008 04:15:40 am, Steve Richfield wrote:
The problem with every such chip that I have seen is that I need many
separate parallel banks of memory per ALU. However, the products out there
only offer a single, and sometimes two banks. This might be fun to play
with, but
On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote:
... My present
efforts are now directed toward a new computer architecture that may be more
of interest to AGI types here than Dr. Eliza. This new architecture should
be able to build new PC internals for about the same cost, using
On Tuesday 15 April 2008 04:28:25 pm, Steve Richfield wrote:
Josh,
On 4/15/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote:
... My present
efforts are now directed toward a new computer architecture that may be
more
On Tuesday 15 April 2008 07:36:56 pm, Steve Richfield wrote:
As I understand things, speed requires low capacitance, whereas DRAM requires
higher capacitance, depending on how often you intend to refresh. However,
refresh operations look a LOT like vector operations, so probably all that
would
On Friday 11 April 2008 03:17:21 pm, Steve Richfield wrote:
Steve: If you're saying that your system builds a model of its world of
discourse as a set of non-linear ODEs (which is what Systems Dynamics is
about) then I (and presumably Richard) are much more likely to be
interested...
No
On Friday 11 April 2008 01:59:42 am, Steve Richfield wrote:
Your experience with the medical community is not too surprising: I
believe that the Expert Systems folks had similar troubles way back when.
IMO the Expert Systems people deserved bad treatment!
Actually, the medical expert
Just noticed that last month, a computer program beat a professional Go player
at 9x9 (one game in four). First time ever in a non-blitz setting.
http://www.earthtimes.org/articles/show/latest-advance-in-artificial-intelligence,345152.shtml
http://www.computer-go.info/tc/
Note that in the brain, there is a fair extent to which functions are mapped
to physical areas -- this is why you can find out anything using fMRI, for
example, and is the source of the famous sensory and motor homunculi
(e.g. http://faculty.etsu.edu/currie/images/homunculus1.JPG).
There's
Many of us there met Celeste Biever, the NS correspondent. Her piece is now
up:
http://technology.newscientist.com/channel/tech/dn13446-virtual-child-passes-mental-milestone-.html
Josh
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
1) If I physically destroy every other intelligent thing, what is
going to threaten me?
Given the size of the universe, how can you possibly destroy every other
intelligent thing (and be sure that no others ever successfully arise
On Thursday 06 March 2008 08:45:00 pm, Vladimir Nesov wrote:
On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
The scenario takes on an entirely different tone if you replace "weed out some
wild carrots" with "kill all the old people who are economically
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote:
TAKE-AWAY: Friendliness is an attractor because it IS equivalent
to enlightened self-interest -- but it only works where all entities
involved are Friendly.
Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
This is different from what I replied to (comparative advantage, which
J Storrs Hall also assumed), although you did state this point
earlier.
I think this one is a package deal fallacy. I can't see how whether
humans conspire
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
My argument doesn't need 'something of a completely different kind'.
Society and human is fine as substitute for human and carrot in my
example, only if society could extract profit from replacing humans
with 'cultivated humans'. But
On Wednesday 27 February 2008 12:22:30 pm, Richard Loosemore wrote:
Mike Tintner wrote:
As Ben said, it's something like multisensory integrative
consciousness - i.e. you track a subject/scene with all senses
simultaneously and integratedly.
Conventional approaches to AI may well have
On Tuesday 26 February 2008 12:33:32 pm, Jim Bromer wrote:
There is a lot of evidence that children do not learn through imitation, at
least not in its truest sense.
Haven't heard of any children born into, say, a purely French-speaking
household suddenly acquiring a full-blown competence in
On Wednesday 20 February 2008 03:34:27 am, Bob Mottram wrote:
On 20/02/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
So, looking at the moon, what color would you say it was?
As Edwin Land showed, colour perception does not just depend upon the
wavelength of light, but is a subjective property actively
Looking at the moon won't help -- it might be the case that it described a
particular appearance that only had a slight resemblance to other blue things
(as in red hair), for example. There are some rare conditions (high
stratospheric dust) which can make the moon look actually blue.
In fact, the moon varies from a deep orange to brilliant white depending on
atmospheric conditions and time of night... none of which would help me
understand the text references.
On Wednesday 20 February 2008 02:02:52 pm, Ben Goertzel wrote:
On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD [EMAIL PROTECTED
On Wednesday 20 February 2008 02:58:54 pm, Ben Goertzel wrote:
I note also that a web-surfing AGI could resolve the color of the moon
quite easily by analyzing online pictures -- though this isn't pure
text mining, it's in the same spirit...
U -- I just typed moon into google and at the
OK, imagine a lifetime's experience is a billion symbol-occurrences. Imagine
you have a heuristic that takes the problem down from NP-complete (which it
almost certainly is) to a linear system, so there is an N^3 algorithm for
solving it. We're talking order 1e27 ops.
Now using HEPP = 1e16 x 30
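Checking the figures (HEPP here meaning human-equivalent processing power, taken as the ~1e16 ops/sec estimate used elsewhere in the thread):

    symbols = 1e9                # a lifetime's symbol-occurrences
    ops = symbols ** 3           # the N^3 solve of the linearized system
    print(f"{ops:.0e}")          # 1e+27, as stated
    years = ops / 1e16 / 3.15e7  # at one HEPP, in years
    print(f"{years:.0f}")        # ~3175 years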
A PROBABILISTIC logic network is a lot more like a numerical problem than a
SAT problem.
On Wednesday 20 February 2008 04:41:51 pm, Ben Goertzel wrote:
On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
OK, imagine a lifetime's experience is a billion symbol
It's probably not worth too much taking this a lot further, since we're
talking in analogies and metaphors. However, it's my intuition that the
connectivity in a probabilistic formulation is going to produce a much denser
graph (less sparse matrix) than what you find in the SAT problems that
It's worth noting in this connection that once you get up to the level of
mammals, everything is very high compliance, low stiffness, mostly serial
joint architecture (no natural Stewart platforms, although you can of course
grab something with two hands if need be) typically with significant
[ http://www.chron.com/disp/story.mpl/headline/biz/5524028.html ]
Steve Wozniak has given up on artificial intelligence.
What is intelligence? Apple's co-founder asked an audience of about 550
Thursday at the Houston area's first Up Experience conference in Stafford.
His answer? A robot that
On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:
J Storrs Hall, PhD wrote:
Any system builders here care to give a guess as to how long it will be
before
a robot, with your system as its controller, can walk into the average
suburban home, find the kitchen, make coffee
Breeds There a Man...? by Isaac Asimov
On Saturday 19 January 2008 04:42:30 pm, Eliezer S. Yudkowsky wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
I guess the moral here is Stay away from attempts to hand-program a
database of common-sense
On Friday 21 December 2007 09:51:13 pm, Ed Porter wrote:
As a lawyer, I can tell you there is no clear agreed upon definition for
most words, but that doesn't stop most of us from using un-clearly defined
words productively many times every day for communication with others. If
you can only
... that during sleep, the brain fills in some inferencing and does memory
organization
http://www.nytimes.com/2007/10/23/health/23memo.html?_r=2adxnnl=1oref=sloginref=scienceadxnnlx=1193144966-KV6FdDqmqr8bctopdX24dw
(pointer from Kurzweil)
On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
... but dynamic long-term memory, in my view, is a wildly
self-organizing mess, and would best be modeled algebraically as a quadratic
iteration over a high-dimensional real non-division algebra whose
multiplication table is
On Monday 22 October 2007 08:01:55 pm, Richard Loosemore wrote:
Did you ever try to parse a sentence with more than one noun in it?
Well, all right: but please be assured that the rest of us do in fact
do that.
Why make insulting personal remarks instead of explaining your reasoning?
On Monday 22 October 2007 08:48:20 pm, Russell Wallace wrote:
On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Still don't buy it. What the article amounts to is that speed-reading is
fake. No kind of recognition beyond skimming (e.g. just ignoring a
substantial proportion
On Monday 22 October 2007 09:33:24 pm, Edward W. Porter wrote:
Richard,
...
Are you capable of understanding how that might be considered insulting?
I think in all seriousness that he literally cannot understand. Richard's
emotional interaction is very similar to that of some autistic people I
, what
appears through the hole is a blur.
Josh
On Monday 22 October 2007 10:23:12 pm, Russell Wallace wrote:
On 10/23/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Still don't buy it. Saccades are normally well below the conscious level,
and
a vast majority of what goes on cognitively
On Friday 19 October 2007 10:36:04 pm, Mike Tintner wrote:
The best way to get people to learn is to make them figure things out for
themselves.
Yeah, right. That's why all Americans understand the theory of evolution so
well, and why Britons have such an informed acceptance of
On Friday 19 October 2007 01:30:43 pm, Mike Tintner wrote:
Josh: An AGI needs to be able to watch someone doing something and produce a
program such that it can now do the same thing.
Sounds neat and tidy. But that's not the way the human mind does it.
A vacuous statement, since I stated
There's a really nice blog at
http://karmatics.com/docs/evolution-and-wisdom-of-crowds.html talking about
the intuitiveness (or not) of evolution-like systems (and a nice glimpse of
his Netflix contest entry using a Kohonen-like map builder).
Most of us here understand the value of a market or
Remember that Eliezer is using holonic to describe *conflict resolution* in
the interpretation process. The reason it fits Koestler's usage is that it
uses *both* information about the parts that make up a possible entity and
the larger entities it might be part of.
Suppose we see the
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware, etc)
2. Do you have an idea as to what should be
On Thursday 18 October 2007 09:28:04 am, Edward W. Porter wrote:
Josh,
According to that font of undisputed truth, Wikipedia, the general
definition of a holon is:
...
Since a holon is embedded in larger wholes, it is influenced by and
influences these larger wholes. And since a holon
-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 16, 2007 11:01 PM
To: agi@v2.listbox.com
Subject: Re: [agi] symbol grounding QA
On Tuesday 16 October 2007 08:43:23 pm, Edward W. Porter wrote:
... holonic pattern matching, ...
Now there's a word you don't
On Monday 15 October 2007 04:45:22 pm, Edward W. Porter wrote:
I misunderstood you, Josh. I thought you were saying semantics could be
a type of grounding. It appears you were saying that grounding requires
direct experience, but that grounding is only one (although perhaps the
best)
On Tuesday 16 October 2007 09:24:34 am, Richard Loosemore wrote:
If I may interject: a lot of confusion in this field occurs when the
term semantics is introduced in a way that implies that it has a clear
meaning [sic].
Semantics does have a clear meaning, particularly in linguistics and
On Tuesday 16 October 2007 03:24:07 pm, Edward W. Porter wrote:
AS I SAID ABOVE, I AM THINKING OF LARGE COMPLEX WEBS OF COMPOSITIONAL AND
GENERALIZATIONAL HIERARCHIES, ASSOCIATIONS, EPISODIC EXPERIENCES, ETC, OF
SUFFICIENT COMPLEXITY AND DEPTH TO REPRESENT THE EQUIVALENT OF HUMAN WORLD
On Monday 15 October 2007 10:21:48 am, Edward W. Porter wrote:
Josh,
Also a good post.
Thank you!
You seem to be defining grounding as having meaning, in a semantic
sense.
Certainly it has meaning, as generally used in the philosophical literature.
I'm arguing that its meaning makes an
On Monday 15 October 2007 01:25:22 pm, Edward W. Porter wrote:
I'm arguing that its meaning makes an assumption about the nature of
semantics that obscures rather than informs some important questions
WHAT EXACTLY DO YOU MEAN?
I think that will become clearer below:
I JUST READ THE
On Monday 15 October 2007 01:57:18 pm, Richard Loosemore wrote:
AI programmers, in their haste to get something working, often simply
write some code and then label certain symbols as if they are
meaningful, when in fact they are just symbols-with-labels.
This is quite true, but I think it
This is a very nice list of questions and makes a good framework for talking
about the issues. Here are my opinions...
On Saturday 13 October 2007 11:29:16 am, Pei Wang wrote:
*. When is a symbol grounded?
Grounded is not a good way of approaching what we're trying to get at, which
is
It's probably worth pointing out that Conway's Life is not only Turing
universal but that it can host self-replicating machines. In other words, an
infinite randomly initialized Life board will contain living creatures
which will multiply and grow, and ultimately come to dominate the entire
board, if they could.
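For concreteness, the standard Life update rule in a few lines of Python (sparse representation: the board is just the set of live cells):

    from itertools import product

    def step(live):
        # Count live neighbors, then apply the B3/S23 rule.
        counts = {}
        for (x, y), (dx, dy) in product(live, product((-1, 0, 1), repeat=2)):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(glider)   # the same glider, translated one cell diagonally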
Josh
On Sunday 07 October 2007 10:57:41 am, Russell Wallace wrote:
On 10/7/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
[rest of post and other recent ones agreed with]
It remains to be seen whether replicating Life patterns could evolve to
become
intelligent
On Sunday 07 October 2007 01:55:14 pm, Russell Wallace wrote:
On 10/7/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
That's interesting perspective - it defines a class of series
generators (where for example in GoL one element is the whole board on
given tick) that generate intelligence
Does anyone know of any decent estimates of how many scientists are working in
cog-sci related fields, roughly AI, psychology, and neuroscience?
Josh
On Friday 05 October 2007 12:13:32 pm, Richard Loosemore wrote:
Try walking into any physics department in the world and saying Is it
okay if most theories are so complicated that they dwarf the size and
complexity of the system that they purport to explain?
You're conflating a theory and
On Thursday 04 October 2007 05:19:29 pm, Edward W. Porter wrote:
I have no idea how new the idea is. When Schank was talking about
scripts ...
From the MIT Encyclopedia of the Cognitive Sciences (p729):
Schemata are the psychological constructs that are postulated to account for
the molar
On Wednesday 03 October 2007 09:37:58 pm, Mike Tintner wrote:
I disagree also re how much has been done. I don't think AGI - correct me -
has solved a single creative problem - e.g. creativity - unprogrammed
adaptivity - drawing analogies - visual object recognition - NLP - concepts -
On Thursday 04 October 2007 10:42:46 am, Mike Tintner wrote:
... I find
no general sense of the need for a major paradigm shift. It should be
obvious that a successful AGI will transform and revolutionize existing
computational paradigms ...
I find it difficult to imagine a development
On Thursday 04 October 2007 11:06:11 am, Richard Loosemore wrote:
As far as we can tell, GoL is an example of that class of system in
which we simply never will be able to produce a theory in which we
plug in the RULES of GoL, and get out a list of all the patterns in GoL
that are
On Thursday 04 October 2007 11:50:21 am, Bob Mottram wrote:
To me this seems like elevating that status of nanotech to magic.
Even given RSI and the ability of the AGI to manufacture new computing
resources it doesn't seem clear to me how this would enable it to
prevent other AGIs from also