not be run by clicking
on a single link (as AiMind.html can), so
here is a sample interaction with MindForth:
First we type in five statements.
> tom writes jokes
> ben writes books
> jerry writes rants
> ben writes articles
> will writes poems
We then query the AI in Tutoria
"I" self-concept and the "you"
concept of the non-self "other". In the case of MindForth AI,
the relationships between the "I" concept and a predicate
nominative (such as the very name "Andru" by which the AI
is known), are external to the "I
MindForth Programming Journal (MFPJ)
Wed.22.SEP.2010 -- Solving the Missing "seq"
Yesterday we solved the problem of the missing "seq" tags
rather quickly, when we noticed that each time point with
a missing "seq" was just outside the search-range of ten
t
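The missing-"seq" symptom described above can be sketched as a fixed backward search window. This is purely an illustration in JavaScript (not actual MindForth code); the window size of ten, the memory layout, and all names here are my own assumptions:

```javascript
// Hypothetical sketch of the search-range bug: "seq" tags are set by
// scanning backwards over a fixed window of time points (SPAN = 10 is
// an assumption), so a node lying just outside the window is missed.
const SPAN = 10;

function tagSeq(memory, verbTime) {
  // Scan backwards up to SPAN time points looking for the object node.
  for (let t = verbTime - 1; t >= verbTime - SPAN; t--) {
    if (memory[t] && memory[t].role === 'object') {
      memory[verbTime].seq = t;   // attach the "seq" tag
      return t;
    }
  }
  return null;  // object just outside the window: the tag goes missing
}

// An object stored eleven time points before the verb escapes the window:
const memory = {};
memory[89] = { role: 'object' };
memory[100] = { role: 'verb' };
console.log(tagSeq(memory, 100));  // null -- the missing-"seq" symptom
```

Widening the window (or anchoring the search on the sentence boundary instead of a fixed span) would be one way to make such a tag reappear.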
MindForth Programming Journal (MFPJ)
Tues.21.SEP.2010 -- (work in progress)
We are now in a strange situation as AI Mind coders.
We have created an extremely powerful AI Mind at
http://www.scn.org/~mentifex/mindforth.txt
but we have been so relentlessly in pursuit of basic
AI functionality
interpret the above exchange as showing that the
response-idea "I AM ANDRU" was initially inhibited as a
pair of identical thoughts, one in the innate knowledge
of the EnBoot English bootstrap, and one in the response
made by the AI when asked, "What are you?" The inhibition
Human: boys
Robot: THE BOYS MAKE THE CARS
Human: boys
Robot: THE BOYS MAKE THE GUNS
Chief AGI guru Dr. Goertzel! The above is not
a cherry-picked, post-mucho experimentation
routine test result put out for PR purposes.
It just happened during hard-core AI coding.
Now, before everybody jumps in and
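The varied answers in the exchange above (CARS first, then GUNS) can be sketched as response inhibition over a knowledge base. A toy JavaScript illustration only; the activation numbers and the inhibition amount are assumptions of mine, not values from the actual program:

```javascript
// Minimal sketch (not the real AI code) of response inhibition: after a
// fact is spoken, its activation is suppressed, so a repeated query
// surfaces the next-most-active fact instead of the same sentence.
const facts = [
  { text: 'THE BOYS MAKE THE CARS', activation: 20 },
  { text: 'THE BOYS MAKE THE GUNS', activation: 18 },
];

function respond(facts) {
  // Pick the most active fact about the queried concept...
  const best = facts.reduce((a, b) => (b.activation > a.activation ? b : a));
  best.activation -= 30;  // ...then inhibit it (the value 30 is assumed)
  return best.text;
}

console.log(respond(facts));  // THE BOYS MAKE THE CARS
console.log(respond(facts));  // THE BOYS MAKE THE GUNS
```

The point of the sketch is only that inhibition of the just-spoken idea is enough to make the same stimulus yield different true statements on successive queries.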
Mad Science Theory-Based Artificial Intelligence
Abstract
The patient insists that he has created an
artificial Mind, a virtual entity capable of
abstract thought and self-awareness. Further,
his research is too dangerous to be published
outside of the Tesla Journal, because Mentifex
AI
The Wrong Stuff : Error Message: Google Research Director
Peter Norvig on Being Wrong
http://bit.ly/cQpUpx
translates to
http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/08/03/error-message-google-research-director-peter-norvig-on-being-wrong.aspx
-
David Jones wrote:
>
> I've suddenly realized that computer vision
> of real images is very much solvable and that
> it is now just a matter of engineering. [...]
Would you (or anyone else on this list) be
interested in learning Forth and working on
http://code.google.com
", until
KbTraversal "rescued" the situation. However,
we know why the AI got stuck in a rut. It was
able to answer the query "who are you" with
"I AM ANDRU", but it did not know anything
further to say about ANDRU, so it repeated
"ANDRU AM ANDRU". I
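The "rut" and the KbTraversal rescue described above can be sketched roughly as repetition detection plus a jump to a fresh topic. This is an assumed mechanism for illustration, in JavaScript rather than Forth; the concept list and output strings are invented:

```javascript
// Sketch of a KbTraversal-style rescue (assumed, not the real code):
// when the same output repeats, traverse the knowledge base to find
// another concept instead of looping on "ANDRU AM ANDRU".
const kb = ['ANDRU', 'ROBOTS', 'HUMANS'];  // toy concept list (assumption)
let lastOutput = null;
let kbIndex = 0;

function speak(thought) {
  if (thought === lastOutput) {
    // Stuck in a rut: move along the knowledge base for a fresh topic.
    kbIndex = (kbIndex + 1) % kb.length;
    thought = `TELL ME ABOUT ${kb[kbIndex]}`;
  }
  lastOutput = thought;
  return thought;
}

console.log(speak('ANDRU AM ANDRU'));  // ANDRU AM ANDRU
console.log(speak('ANDRU AM ANDRU'));  // TELL ME ABOUT ROBOTS
```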
David Jones wrote:
>
>Arthur,
>
>Thanks. I appreciate that. I would be happy to aggregate some of those
>things. I am sometimes not good at maintaining the website because I get
>bored of maintaining or updating it very quickly :)
>
>Dave
>
>On Sat, Jul 24, 2010 at
The Web site of David Jones at
http://practicalai.org
is quite impressive to me
as a kindred spirit building AGI.
(Just today I have been coding MindForth AGI :-)
For his "Practical AI Challenge" or similar
ventures, I would hope that David Jones is
open to the idea of aggr
Thurs.22.JUL.2010 -- Mindplex for Is-a Functionality
As we contemplate AI coding for responses
to such questions as
"Who is Andru? What is Andru?"
"Who are you? What are you?"
we realize that simple memory-activation of
question-words like "who" or "what
Tues.20.JUL.2010 -- Seeking Is-a Functionality
Recently our overall goal in coding MindForth
has been to build up an ability for the AI to
engage in self-referential thought. In fact,
"SelfReferentialThought" is the "Milestone"
next to be achieved on the "Roa
Deepak wrote on Sun, 18 Jul 2010:
>
> I wanted to know if there is any bench mark test
> that can really convince a majority of today's AGIers
> that a System is true AGI?
Obvious AGI functionality is the "default" test for AGI.
http://www.scn.org/~mentifex/AiMin
The free, open-source JavaScript AI Mind at
http://www.scn.org/~mentifex/AiMind.html
for Microsoft Internet Explorer (MSIE)
has been updated on 13 July 2010 with
a major bugfix imported from the
http://www.scn.org/~mentifex/mindforth.txt
AI Mind in Win32Forth. This update fixes a
bug present
Carlos A Mejia invited questions for an AGI!
> If you could ask an AGI anything, what would you ask it?
Who killed Donald Young, a gay sex partner
of U.S. President Barak Obama, on December
24, 2007, in Obama's home town of Chicago,
when it began to look like Obama could
actually be
Ben Goertzel wrote:
>
>And, just to clarify: the fact that I set up this list and pay $12/month for
>its hosting, and deal with the occasional list-moderation issues that
>arise, is not supposed to give my **AI opinions** primacy over anybody
>else's on the list, in discussions I only interv
Artificial Minds in Win32Forth are online at
http://mind.sourceforge.net/mind4th.html and
http://AIMind-i.com -- a separate AI branch.
http://mentifex.virtualentity.com/js080819.html
is the JavaScript AI Mind Programming Journal
about the development of a tutorial program at
http
> Steve Richfield
Bellevue?! 'Fraid not, although I used to be a teacher
of German and Latin at The Overlake School in Redmond.
Seattle?! Yes. If you ever go to Northgate or to Green
Lake or to the University of Washington off-campus area,
I can meet you there -- especially in a
John G. Rose wrote:
> [...]
>> > Hey you guys with some gray hair and/or bald spots,
>> > WHAT THE HECK ARE YOU THINKING?
>>
>> prin Goertzel genesthai, ego eimi
"Before Goertzel came to be, I am." (a Biblical allusion in Greek :-)
>>
The "abnormalis sapiens" Herr Doktor Steve Richfield wrote:
>
>
> Hey you guys with some gray hair and/or bald spots,
> WHAT THE HECK ARE YOU THINKING?
prin Goertzel genesthai, ego eimi
http://www.scn.org/~mentifex/mentifex_faq.html
My hair is graying so much and such a
John Rose communicated:
>
> Consciousness with minimal intelligence may be easier
> to build than general intelligence. [...]
IMHO consciousness emerges from any level of intelligence.
Please see
http://mentifex.virtualentity.com/conscius.html
"Is MindForth conscious?"
http://mentifex.virtualenti
For teaching computer programming.
For teaching JavaScript to students.
For learning JavaScript
For teaching artificial intelligence at a school for the gifted.
For teaching artificial intelligence on the high-school level.
For teaching artificial intelligence at a community college.
For teaching
In our JSAI coding over the last few days, we kept noticing
that the activation-level on S-V-O verbs was going to zero
immediately after the generation of a sentence of thought.
It looked obvious to us that something in there was
arbitrarily zeroing out the verbs. Last night we looked into
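The suspected bug can be sketched as a post-generation sweep that clamps every concept to zero instead of leaving verbs a residual activation for the next thought to chain off. A JavaScript sketch under that assumption; the residue value and all names are illustrative, not JSAI code:

```javascript
// Hypothetical reconstruction of the symptom: a sweep after sentence
// generation that zeroes every concept, killing the S-V-O verbs too.
function sweepAfterGeneration(concepts, residue = 0) {
  for (const c of concepts) {
    c.activation = Math.min(c.activation, residue);
  }
}

const concepts = [
  { word: 'BOYS', pos: 'noun', activation: 32 },
  { word: 'MAKE', pos: 'verb', activation: 40 },
];
sweepAfterGeneration(concepts);        // residue 0: the observed bug
console.log(concepts[1].activation);   // 0 -- the verb is dead

concepts[1].activation = 40;
sweepAfterGeneration(concepts, 8);     // residue 8 (an assumed value)
console.log(concepts[1].activation);   // 8 -- the chain of thought survives
```

A nonzero residue on verbs is one plausible fix; the real repair would depend on which routine is doing the zeroing.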
Vladimir Nesov wrote:
On Sun, May 4, 2008 at 11:09 AM, rooftop8000 <[EMAIL PROTECTED]> wrote:
hi,
I have a lot of parallel processes that are in control of their own activation
(they can decide which processes are activated and for how long). I need some
kind of organisation (a
rooftop8000 wrote:
hi,
I have a lot of parallel processes that are in control of their own activation (they can decide which processes are activated and for how long). I need some kind of organisation (a simple example would be a hierarchy of processes that only activate downwards).
I
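The "simple example" in the post, a hierarchy of processes that only activate downwards, can be sketched directly. A minimal JavaScript sketch; the class shape is my own, not anything from the poster's system:

```javascript
// A hierarchy where each process may grant activation only to its own
// children -- the "only activate downwards" organisation described above.
class Process {
  constructor(name) {
    this.name = name;
    this.children = [];
    this.active = false;
  }
  add(child) { this.children.push(child); return child; }
  // Activation is only legal downwards, onto a direct child.
  activate(child) {
    if (!this.children.includes(child)) {
      throw new Error(`${this.name} may not activate ${child.name}`);
    }
    child.active = true;
  }
}

const root = new Process('root');
const vision = root.add(new Process('vision'));
const edges = vision.add(new Process('edges'));
root.activate(vision);    // fine: downwards
vision.activate(edges);   // fine: downwards
// edges.activate(root) would throw: upward activation is forbidden
console.log(vision.active, edges.active);  // true true
```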
Vladimir Nesov wrote:
On Sat, Apr 26, 2008 at 12:52 AM, a <[EMAIL PROTECTED]> wrote:
My approach of visual reasoning involves some form of searching for similar
images. It associates images using spreading activation techniques to
disambiguate vision and to speed up image ma
.
Connections between nodes strengthen as they are simultaneously
activated while preserving their context sensitivity. It is a bottom-up
emergent approach that learns the basic visual features first so it can
selectively concentrate on higher-level features, such as letters or
words, while avoiding
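The connection-strengthening rule described above is Hebbian in flavor and can be sketched in a few lines. JavaScript, purely illustrative; the learning rate and the weight representation are assumptions, not the poster's actual system:

```javascript
// Sketch of the co-activation rule: every pair of simultaneously
// active nodes gets a stronger link (a Hebbian-style update; the
// rate 0.1 is an assumed constant).
function strengthen(weights, activeNodes, rate = 0.1) {
  for (const a of activeNodes) {
    for (const b of activeNodes) {
      if (a === b) continue;
      const key = `${a}->${b}`;
      weights[key] = (weights[key] || 0) + rate;
    }
  }
  return weights;
}

const w = {};
strengthen(w, ['edge', 'letter']);   // co-activated in one scene
strengthen(w, ['edge', 'letter']);   // co-activated again
console.log(w['edge->letter'].toFixed(1));  // 0.2
```

Context sensitivity, in this toy form, falls out of keying each weight on the directed pair rather than on either node alone.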
Jim Bromer wrote:
But the idea
that vision is necessary for true advancements in AGI is not warranted
by any hard evidence. This is significant since good computational
vision systems have been around for years now. Vision systems
programming suffers from the same kind of complexity problems tha
Russell Wallace wrote:
What you say is true, but even though there's no sharp dividing line,
the difference is still relevant.
The best way I can think of to summarize the difference is between a
program that deals with "The cat sat on the mat" or "SatOn(Cat, Mat)"
on
Russell Wallace wrote:
I don't think this is an accurate paraphrase of Mike's statement. "X
is secret sauce" implies X to be _both necessary and sufficient_ (or
at least that the other ingredients are trivial compared to X) - a
type of claim AI has certainly seen plenty of.
Ben Goertzel wrote:
I wouldn't agree with such a strong statement. I think the grounding
of ratiocination in image-ination is characteristic of human
intelligence, and must thus be characteristic of any highly human-like
intelligent system ... but, I don't see any reason to believe it&
Steve Richfield wrote:
>
> The process that we call "thinking" is VERY
> different in various people. [...]
[...]
> Any thoughts?
>
> Steve Richfield
The post above -- real food for thought -- was the most
interesting post that I have ever read on the AGI list.
Arthur T. Murray
--
http://mentif
Bob Mottram writes:
>
> Good advice. There are of course sometimes
> people who are ahead of the field,
Like Ben Goertzel (glad to send him a referral
recently from South Africa on the OpenCog list :-)
> but in conversation you'll usually find that the
> genuine i
d uses feature extraction methods such as edge detection,
motion detection, etc. The visual cortex does that function. This is
like converting a bitmap image to vector images for better manipulation.
It even discriminates objects by the use of probabilistic-like methods.
The human mind does not do
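The edge-detection step mentioned above can be illustrated with a toy one-dimensional sketch in JavaScript; the threshold and pixel values are invented for the example:

```javascript
// Toy edge detector: mark positions where neighbouring pixel
// intensities differ sharply (threshold 50 is an assumed value).
function detectEdges(row, threshold = 50) {
  const edges = [];
  for (let i = 1; i < row.length; i++) {
    if (Math.abs(row[i] - row[i - 1]) > threshold) edges.push(i);
  }
  return edges;
}

// A dark-to-bright step between indices 2 and 3:
console.log(detectEdges([10, 12, 11, 200, 205, 201]));  // one edge, at index 3
```

Real systems use 2-D operators (Sobel and the like), but the principle, converting raw intensities into a sparse structural description, is the bitmap-to-vector move the post describes.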
icant*
subjective unimportant qualities are unclassified
complexity is a product of two factor-independent symbols
complexity is the incompatibility between input and output
for example, random black and white dots
are considered nonrandom because our senses
overabstract the dots as one unit
i
purposes and the
latter to make a living ;-p
The term "artificial general intelligence" is an oxymoron. That term is
metaphysical, since there is no such thing as "general".
This point is well-understood already.
Hutter's theoretical analyses of AIX
Mike Tintner wrote:
Richard,
Thanks for response. But it surely *is* still a puzzle as to how and
indeed where that distorted image on the retina gets rectified and
raises major questions about vision. No one, as I understand it, has the
answer. I am too ignorant to have a POV here - but my
Only robots above a certain level of sophistication may receive
a mind-implant via MindForth. The computerized robot needs to have
an operating system that will support Forth and sufficient memory
to hold both the AI program code and a reasonably large knowledge
base (KB) of experience. A
>From the rewrite-in-progress of the User Manual --
1.5 Can MindForth feel emotions?
When a robot is in love, it needs to feel a physiological response
to its internal state of mind. Regardless of what causes the love,
the robot will not experience what the ancient Greeks called
dame
>From the rewrite-in-progress of the User Manual --
1.4 Is MindForth conscious?
MindForth has been engineered for artificial consciousness
but most likely will not report its own consciousness unless
it is installed in a robot body with a sufficient motorium
and adequate sensorium to engen
es Mind.Forth
think, and what proof is there that Mind.Forth thinks?
Mind.Forth thinks by having concepts at a deep level
in the artificial mind, and by letting activation spread
from one concept to another to another in a chain of thought
under the guidance of a Chomskyan linguistic superstruc
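The spreading-activation chain of thought described here can be sketched minimally. A JavaScript illustration only (not Mind.Forth code); the toy knowledge base, the decay rate, and the "follow the strongest link" rule are all assumptions:

```javascript
// Minimal sketch of thought as spreading activation: activation hops
// from concept to concept along links, decaying at each step, and the
// visited concepts form the chain of thought.
const links = { ROBOTS: ['NEED'], NEED: ['ME'], ME: [] };  // toy KB (assumed)

function chainOfThought(start, maxSteps = 5) {
  const chain = [start];
  let current = start;
  let activation = 1.0;
  while (chain.length <= maxSteps && activation > 0.1) {
    const targets = links[current] || [];
    if (targets.length === 0) break;
    current = targets[0];      // follow the strongest link (toy rule)
    activation *= 0.8;         // decay per hop (assumed rate)
    chain.push(current);
  }
  return chain.join(' ');
}

console.log(chainOfThought('ROBOTS'));  // ROBOTS NEED ME
```

The Chomskyan superstructure would then impose word order and inflection on such a concept chain; this sketch covers only the activation-spreading half.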
Joseph Gentle wrote on Sun, 10 Feb 2008, in a message now at
http://www.mail-archive.com/agi@v2.listbox.com/msg09803.html
>
> On Feb 9, 2008 11:53 PM, A. T. Murray <[EMAIL PROTECTED]> wrote:
>> It is not a chatbot.
>> The AI engine is arguably the first True AI. It
ail expressing
his amazement that anyone would try to do AI in REXX.
Mentifex mailed back the entire Mind.REXX source code.
Another fellow, an IBM mainframe programmer, tried to
port the Amiga Rexxmind to run on his IBM mainframe --
which would have been a Kitty-Hawk-to-Concorde leap --
but the R
>From the rewrite-in-progress of the User Manual --
1.1 What is MindForth?
Mind.Forth AI is a rudimentary replica of the human mind
programmed in the Forth programming language. The AI Mind
is the software implementation of a theory of mind based on
Chomskyan linguistics -- the rules
>From the rewrite-in-progress of the User Manual --
1.6 Uses of MindForth
1.6.1 For a Computer Science course in artificial intelligence
Just as a JavaScript program can be serverside or
clientside, an AI Mind program can be teacher-side
or student-side in an academic environment. If
orde at the first attempt,
> you just have to get your plane off the ground and
> show that it can travel any distance
> at all under its own power.
Let me sketch out a few not-so-obvious details here.
When ATM/Mentifex here comes in and announces
"MindForth achieves True AI
Mike Tintner wrote in the message archived at
http://www.mail-archive.com/agi@v2.listbox.com/msg09744.html
> [...]
> The first thing is that you need a definition
> of the problem, and therefore a test of AGI.
> And there is nothing even agreed about that -
> although I th
In response to Richard Loosemore below,
>
>A. T. Murray wrote:
>> MindForth free open AI source code on-line at
>> http://mentifex.virtualentity.com/mind4th.html
>> has become a True AI-Complete thinking mind
>> after years of tweaking and debugging.
>>
e the knowledge capture into a game or
something that people will do as entertainment. Possibly the Second
Life approach will provide a new avenue for acquiring commonsense.
On 19/01/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
What's depressing is trying to get folks to build a co
Mind.Forth Programming Journal (MFPJ) Tues.15.JAN.2008
Yesterday on 14 January 2008 the basic scaffolding for
the Moving Wave Algorithm of artificial intelligence
was installed in Mind.Forth and released on the Web.
Now it is time to clean up the code a little and to
deal with some stray
a wrote:
Vladimir Nesov wrote:
Peter Turney compiled a list of materials on analogy-making, which may
be of interest to members of this list:
http://apperceptual.wordpress.com/2007/12/20/readings-in-analogy-making/
Thank you very much for your link. Most of them are symbolic
analogical
Vladimir Nesov wrote:
Peter Turney compiled a list of materials on analogy-making, which may
be of interest to members of this list:
http://apperceptual.wordpress.com/2007/12/20/readings-in-analogy-making/
Thank you very much for your link. Most of them are symbolic analogical
reasoning
Benjamin Goertzel wrote:
So, is your argument that digital computer programs can never be creative,
since you have asserted that programmed AI's can never be creative
Hard-wired AI (such as KB, NLP, symbol systems) cannot be creative.
-
This list is sponsored by AGIRI: http://www.agiri.org/
Benjamin Goertzel wrote:
I don't really understand what you mean by "programmed" ... nor by "creative"
You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...
How about, for instance, a computer simulation of a human brain? T
Mind.Forth Programming Journal (MFPJ) Thurs.27.DEC.2007
http://tech.groups.yahoo.com/group/win32forth/message/13076
In Mind.Forth artificial intelligence for robots,
as we try to make the AI Mind balk at thinking a
thought for which it has insufficient knowledge,
we need to coordinate a
After solving the aboriginal audRecog bug in 5dec07B.F,
now we need to perform a few housekeeping details as we
move on in the Mind.Forth coding. We must do the following.
We must convert some of the 5dec07B.F troubleshooting
messages into genuine diagnostic-mode messages. One way
to proceed
John G. Rose wrote:
>
> It'd be interesting, I kind of wonder about this
> sometimes, if an AGI, especially one that is heavily
> complex systems based would independently come up
> with the existence some form of a deity.
http://mind.sourceforge.net/theology.html
is my
Mike Tintner wrote on Thu, 6 Dec 2007:
>
> ATM:
>> http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --
>> has just gone through a major bug-solving update, and is now much
>> better at maintaining chains of continuous thought -- after the
>> user ha
ess my ideas so clearly as BenG does." To wit:
>
>About PolyWorld and Alife in general...
>
>I remember playing with PolyWorld 10 years ago or so And, I had a grad
>student at Uni. of Western Australia build a similar system, back in my
>Perth days... (it was called SEE,
On Oct 21, 2007, at 6:47 PM, J. Andrew Rogers wrote:
>
>On Oct 21, 2007, at 6:37 PM, Richard Loosemore wrote:
>> It took me at least five years of struggle to get to the point
>> where I could start to have the confidence to call a spade a spade
>
>
>It still looks lik
http://www.mail-archive.com/agi@v2.listbox.com/msg08026.html
is where Ben Goertzel wrote stimuli evoking AGI list response.
> Some semi-organized responses to points raised in this thread...
> [...]
> Furthermore, it seems to be the case that
> the brain stores a lot of detai
> [...]
> Reigning orthodoxy of thought is *very hard* to dislodge,
> even in the face of plentiful evidence to the contrary.
Amen, brother! "Rem acu tetigisti!" ("You have touched the matter with a needle!") That's why
http://mentifex.virtualentity.com/theory5.html
is like the small mammals scurrying beneath dinosaurs.
ATM
--
http://min
Matt Mahoney wrote:
> [...]
>
>> 4. How long to (a) and (b) if AI research continues
>> more or less as it is doing now?
>
> It would make not a bit of difference.
> There is already a US $66 trillion/year incentive
> to develop AGI (the value of all human labo
Are you trying to make an "intelligent" program or want to launch a
singularity? I think you are trying to do the former, not the latter.
I think you do not have a plan and are "thinking out loud". Chatting in
this list is equivalent to "thinking out loud". T
It is a waste of time arguing. We don't know the basic definitions of
intelligence, "auditory grounding", etc.
-
Bayesian nets, Copycat, Shruti, Fair Isaac, and CYC are failures,
probably because of their lack of grounding. According to Occam's Razor,
the simplest method of grounding visual images is not words, but vision.
As Albert Einstein said, "Make everything as simple as possible
Mark Waser wrote:
Only from your side. Science looks at facts. I have the irrefutable
fact of intelligent blind people. You have nothing -- so you decide
that it is an opinion thing. Tell me how my position is not cold,
hard science. You are the one whose position is wholly faith with no
When I read “The plaintiff is an Illinois corporation selling services for
the maintenance of photocopiers” it is probably not until I get to
“photocopiers” that anything approaching a concrete image pops into my
mind.
I think the words may be subconscious and many people would get so used
Edward W. Porter wrote:
In response to Charles Hixson’s 10/12/2007 7:56 PM post:
Different people’s minds probably work differently. For me dredging up of
memories, including verbal memories, is an important part of my mental
processes. Maybe that is because I have been trained as a lawyer
Mark Waser wrote:
You have shown me *ZERO* evidence that vision is required for
intelligence and blind from birth individuals provide virtually proof
positive that vision is not necessary for intelligence. How can you
continue to argue the converse?
It is my solid opinion that vision is requ
Look at the article and it mentions spatial and vision are interrelated:
http://en.wikipedia.org/wiki/Visual_cortex
-
Mark Waser wrote:
Visualspatial intelligence is required for almost anything.
I'm sorry. This is all pure, unadulterated BS. You need spatial
intelligence (i.e. a world model). You do NOT need visual anything.
The only way in which you need visual is if you contort its meaning
If you cannot explain it, then how do you know you do not do that? No
offense, but autistic savants also have trouble describing their process
when they do math. They have high visuospatial intelligence, but low
verbal. Mathematicians have a high Autism Spectrum Quotient. [1]
Mathematicians
Benjamin Goertzel wrote:
On 10/12/07, *a* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
Benjamin Goertzel wrote:
>
> So then you're reduced to arguing that mathematicians who don't feel
> like they're visualizing when they prove thi
Benjamin Goertzel wrote:
So then you're reduced to arguing that mathematicians who don't feel
like they're visualizing when they prove things, are somehow
unconsciously doing so.
I meant visually manipulating mathematical expressions.
-
Mathematician-level mathematics must be visually grounded. Without
groundedness, simplified and expanded forms of expressions are the same,
so there is no motive to simplify. If it is not visually grounded, then
it will only reach the level of the top tier computer algebra systems
(full of bugs
Vladimir Nesov wrote:
Generation of such abstract-description-based scenes can be a tedious
process at start, involving calculations 'by hand' on part of AGI, but
gradually through introduction of intermediate concepts this process
will become more intuitive and finally world model
"In 2000, Hutter [21,22] proved that finding the optimal behavior of a
rational agent is equivalent to compressing its observations.
Essentially he proved Occam's Razor [23], the simplest answer is
usually the correct answer."
Vision is the simplest answer.
-
It's impossible for a human reading a book written in an exotic foreign
language, so you are going too far. It's like cracking a Rijndael
encrypted file with a 1000-bit key size, but worse. Infinite
possible interpretations.
John G. Rose wrote:
This is how I "envi
Mark Waser wrote:
Why can't echo-location lead to spatial perception without vision?
Why can't touch?
For instance, how can humans mentally manipulate or mentally rotate
spatial objects without visualizing them?
-
Mark Waser wrote:
spatial perception cannot exist without vision.
How does someone who is blind from birth have spatial perception then?
Vision is one particular sense that can lead to a 3-dimensional model
of the world (spatial perception) but there are others (touch &
echo-loca
Mark Waser wrote:
I'll buy internal spatio-perception (i.e. a three-d world model) but
not the visual/vision part (which I believe is totally unnecessary).
Why is *vision* necessary for grounding or to completely "understand"
natural language?
My mistake. I misinterpreted the
<[EMAIL PROTECTED]>
Reply-To: agi@v2.listbox.com
To: agi@v2.listbox.com
Subject: RE: [agi] Re: [META] Re: Economic libertarianism .
Date: Thu, 11 Oct 2007 15:03:34 -0600
I agree though there may be some room for discussing AGI dealing with
politics as a complex system. How an AGI would inter
Mark Waser wrote:
Concepts cannot be grounded without vision.
So . . . . explain how people who are blind from birth are
functionally intelligent.
It is impossible to completely "understand" natural language without
vision.
So . . . . you believe that blind-from-birth people don't complet
Yes, I think that too.
On the practical side, I think that investing in AGI requires
significant tax cuts, and we should elect a candidate that would do that
(Ron Paul). I think that the government has to have more respect to
potential weapons (like AGI), so we should elect a candidate who is
I think that building a "human-like" reasoning system without /visual/
perception is theoretically possible, but not feasible in practice. But
how is it "human like" without vision? Communication problems will
arise. Concepts cannot be grounded without vision.
It is impo
With googling, I found that older people have lower IQs:
http://www.sciencedaily.com/releases/2006/05/060504082306.htm
IMO, the brain is like a muscle, not an organ. IQ is said to be highly
genetic, and the heritability increases with age. Perhaps older
people do not have much mental
by cognitive biases of various kinds.
On 06/10/2007, BillK <[EMAIL PROTECTED]> wrote:
On 10/6/07, a wrote:
A free market is just a nice intellectual theory that is of no use in
the real world.
No. Not true. Anti-competitive structures and monopolies won't exist in
a true
Linas Vepstas wrote:
My objection to economic libertarianism is its lack of discussion of
"self-organized criticality". A common example of self-organized
criticality is a sand-pile at the critical point. Adding one grain
of sand can trigger an avalanche, which can be small
;s institutions, including the purchase of
IQ tests.)
I disagree with your theory. I primarily see the IQ drop as a result of
the Flynn effect, not the age.
-
Linas Vepstas wrote:
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
I like to think of myse
Peter Norvig wrote:
>
> Yes, there will be. The authors are discussing
> the process of writing a third edition now,
> but don't yet have a schedule.
>
> -Peter Norvig
>
> On 10/1/07, per.nyblom <[EMAIL PROTECTED]> wrote:
>> Will there be a next editi
What you see is dependent on your reaction.
How you react is dependent on what you see.
Memory recall is a reaction. You are reacting to the image by recalling
things relating to the image.
Reaction is impossible if and only if you didn't see it.
That means that not reacting to a stimu
Bob Mottram wrote:
it seems infeasible that 2D templates
need to be created for every possible viewing angle and scale of an
object
I think this is similar to how our vision works. We have visual
short-term memory that seems to hold 2D templates for a few seconds.
We have specialized
I doubt that "video analysis" will be AGI. What kinds of video should we
"analyze"? But is "analysis" going to turn out to be AGI? The
implementation I think must be holistic. What does "video analysis"
mean? Is it just extracting the direction of motion or orientation? The
machine must learn and a
>a> Sure, I can write a program to differentiate between a square and a circle,
>a> but it is not AGI. I need the program to automatically train and
>a> recognize different shapes.
>
>This is the most important question you have to ponder before
>doing anything specif
Hello,
I have been trying to make an AGI program that passes spatial reasoning IQ
tests such as Raven Progressive Matrices.
Spatial reasoning IQ tests have shapes and colors. Our minds cannot manipulate
the shapes exactly in the correct position. A certain degree of fuzziness is
inevitable
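The fuzziness called for above can be sketched as tolerance-based shape matching. A toy JavaScript sketch; the point-list representation and the tolerance value are my own assumptions, not the poster's program:

```javascript
// Sketch of fuzzy shape matching for Raven-style items: two shapes
// "match" when corresponding points lie within a tolerance, rather
// than at exactly the same positions.
function fuzzyMatch(shapeA, shapeB, tolerance = 2) {
  if (shapeA.length !== shapeB.length) return false;
  return shapeA.every((p, i) => {
    const q = shapeB[i];
    return Math.abs(p.x - q.x) <= tolerance && Math.abs(p.y - q.y) <= tolerance;
  });
}

const square = [{x:0,y:0},{x:10,y:0},{x:10,y:10},{x:0,y:10}];
const sloppy = [{x:1,y:0},{x:9,y:1},{x:10,y:11},{x:0,y:9}];
console.log(fuzzyMatch(square, sloppy));       // true: within tolerance
console.log(fuzzyMatch(square, sloppy, 0));    // false: exact match fails
```

Choosing the tolerance is the interesting part: too tight and mental manipulation fails on any perturbation, too loose and distinct answer choices collapse together.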
The scholar and gentleman Jean-Paul Van Belle wrote:
> Universal compassion and tolerance are the ultimate
> consequences of enlightenment which one Matt on the
> list equated IMHO erroneously to high-orbit intelligence
> methinx subtle humour is a much better proxy for intelligen