[agi] Inching closer on the Singularity Clock.

2010-10-19 Thread A. T. Murray
not be run by clicking on a single link (as AiMind.html can), so here is a sample interaction with MindForth: First we type in five statements. > tom writes jokes > ben writes books > jerry writes rants > ben writes articles > will writes poems We then query the AI in Tutoria

[agi] MindForth Programming Journal (MFPJ) 2010 September 24

2010-09-24 Thread A. T. Murray
"I" self-concept and the "you" concept of the non-self "other". In the case of MindForth AI, the relationships between the "I" concept and a predicate nominative (such as the very name "Andru" by which the AI is known), are external to the "I

[agi] Mother of all Singularities

2010-09-22 Thread A. T. Murray
MindForth Programming Journal (MFPJ) Wed.22.SEP.2010 -- Solving the Missing "seq" Yesterday we solved the problem of the missing "seq" tags rather quickly, when we noticed that each time point with a missing "seq" was just outside the search-range of ten t

[agi] Technological Singularity -- a work in progress

2010-09-21 Thread A. T. Murray
MindForth Programming Journal (MFPJ) Tues.21.SEP.2010 -- (work in progress) We are now in a strange situation as AI Mind coders. We have created an extremely powerful AI Mind at http://www.scn.org/~mentifex/mindforth.txt but we have been so relentlessly in pursuit of basic AI functionality

[agi] MindForth Programming Journal (MFPJ) 2010 September 13

2010-09-14 Thread A. T. Murray
interpret the above exchange as showing that the response-idea "I AM ANDRU" was initially inhibited as a pair of two identical thoughts, one in the innate knowledge of the EnBoot English bootstrap, and one in the response made by the AI when asked, "What are you?" The inhibiiti

[agi] Attn: Ben Goertzel -- SINGULARITY ALERT!!!

2010-09-07 Thread A. T. Murray
Human: boys Robot: THE BOYS MAKE THE CARS Human: boys Robot: THE BOYS MAKE THE GUNS Chief AGI guru Dr. Goertzel! The above is not a cherry-picked, post-mucho experimentation routine test result put out for PR purposes. It just happened during hard-core AI coding. Now, before everybody jumps in and

[agi] Tesla Journal Submission: Mentifex Mad Science

2010-08-07 Thread A. T. Murray
Mad Science Theory-Based Artificial Intelligence Abstract The patient insists that he has created an artificial Mind, a virtual entity capable of abstract thought and self-awareness. Further, his research is too dangerous to be published outside of the Tesla Journal, because Mentifex AI

[agi] The Wrong Stuff (Norvig interview)

2010-08-04 Thread A. T. Murray
The Wrong Stuff : Error Message: Google Research Director Peter Norvig on Being Wrong http://bit.ly/cQpUpx translates to http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/08/03/error-message-google-research-director-peter-norvig-on-being-wrong.aspx -

Re: [agi] Computer Vision not as hard as I thought!

2010-08-03 Thread A. T. Murray
David Jones wrote: > > I've suddenly realized that computer vision > of real images is very much solvable and that > it is now just a matter of engineering. [...] Would you (or anyone else on this list) be interested in learning Forth and working on http://code.google.com

[agi] Tweaking a few parameters

2010-07-28 Thread A. T. Murray
", until KbTraversal "rescued" the situation. However, we know why the AI got stuck in a rut. It was able to answer the query "who are you" with "I AM ANDRU", but it did not know anything further to say about ANDRU, so it repeated "ANDRU AM ANDRU". I

Re: [agi] Huge Progress on the Core of AGI

2010-07-25 Thread A. T. Murray
David Jones wrote: > >Arthur, > >Thanks. I appreciate that. I would be happy to aggregate some of those >things. I am sometimes not good at maintaining the website because I get >bored of maintaining or updating it very quickly :) > >Dave > >On Sat, Jul 24, 2010 at

Re: [agi] Huge Progress on the Core of AGI

2010-07-24 Thread A. T. Murray
The Web site of David Jones at http://practicalai.org is quite impressive to me as a kindred spirit building AGI. (Just today I have been coding MindForth AGI :-) For his "Practical AI Challenge" or similar ventures, I would hope that David Jones is open to the idea of aggr

[agi] Mindplex for Is-a Functionality

2010-07-22 Thread A. T. Murray
Thurs.22.JUL.2010 -- Mindplex for Is-a Functionality As we contemplate AI coding for responses to such questions as "Who is Andru? What is Andru?" "Who are you? What are you?" we realize that simple memory-activation of question-words like "who" or "what

[agi] Seeking Is-a Functionality

2010-07-20 Thread A. T. Murray
Tues.20.JUL.2010 -- Seeking Is-a Functionality Recently our overall goal in coding MindForth has been to build up an ability for the AI to engage in self-referential thought. In fact, "SelfReferentialThought" is the "Milestone" next to be achieved on the "Roa

Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread A. T. Murray
Deepak wrote on Sun, 18 Jul 2010: > > I wanted to know if there is any bench mark test > that can really convince a majority of today's AGIers > that a System is true AGI? Obvious AGI functionality is the "default" test for AGI. http://www.scn.org/~mentifex/AiMin

[agi] Tutorial AI Mind updated in JavaScript

2010-07-14 Thread A. T. Murray
The free, open-source JavaScript AI Mind at http://www.scn.org/~mentifex/AiMind.html for Microsoft Internet Explorer (MSIE) has been updated on 13 July 2010 with a major bugfix imported from the http://www.scn.org/~mentifex/mindforth.txt AI Mind in Win32Forth. This update fixes a bug present

Re: [agi] Questions for an AGI

2010-06-24 Thread A. T. Murray
Carlos A Mejia invited questions for an AGI! > If you could ask an AGI anything, what would you ask it? Who killed Donald Young, a gay sex partner of U.S. President Barack Obama, on December 24, 2007, in Obama's home town of Chicago, when it began to look like Obama could actually be

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread A. T. Murray
Ben Goertzel wrote: > >And, just to clarify: the fact that I set up this list and pay $12/month for >its hosting, and deal with the occasional list-moderation issues that >arise, is not supposed to give my **AI opinions** primacy over anybody >else's on the list, in discussions I only interv

[agi] MindForth puts AI theory into practice.

2008-08-28 Thread A. T. Murray
Artificial Minds in Win32Forth are online at http://mind.sourceforge.net/mind4th.html and http://AIMind-i.com -- a separate AI branch. http://mentifex.virtualentity.com/js080819.html is the JavaScript AI Mind Programming Journal about the development of a tutorial program at http

Re: [agi] Who here is located in the Seattle area?

2008-06-10 Thread A. T. Murray
> Steve Richfield Bellevue?! 'Fraid not, although I used to be a teacher of German and Latin at The Overlake School in Redmond. Seattle?! Yes. If you ever go to Northgate or to Green Lake or to the University of Washington off-campus area, I can meet you there -- especially in a

RE: [agi] Pearls Before Swine...

2008-06-08 Thread A. T. Murray
John G. Rose wrote: > [...] >> > Hey you guys with some gray hair and/or bald spots, >> > WHAT THE HECK ARE YOU THINKING? >> >> prin Goertzel genesthai, ego eimi "Before Goertzel came to be, I am." (a Biblical allusion in Greek :-) >>

Re: [agi] Pearls Before Swine...

2008-06-08 Thread A. T. Murray
The "abnormalis sapiens" Herr Doktor Steve Richfield wrote: > > > Hey you guys with some gray hair and/or bald spots, > WHAT THE HECK ARE YOU THINKING? prin Goertzel genesthai, ego eimi http://www.scn.org/~mentifex/mentifex_faq.html My hair is graying so much and such a

Re: [agi] Consciousness vs. Intelligence

2008-05-28 Thread A. T. Murray
John Rose communicated: > > Consciousness with minimal intelligence may be easier > to build than general intelligence. [...] IMHO consciousness emerges from any level of intelligence. Please see http://mentifex.virtualentity.com/conscius.html "Is MindForth conscious?" http://mentifex.virtualenti

[agi] Uses of Mind.html tutorial Artificial General Intelligence

2008-05-22 Thread A. T. Murray
For teaching computer programming. For teaching JavaScript to students. For learning JavaScript. For teaching artificial intelligence at a school for the gifted. For teaching artificial intelligence on the high-school level. For teaching artificial intelligence at a community college. For teaching

[agi] Porting MindForth AI into JavaScript Mind.html

2008-05-17 Thread A. T. Murray
In our JSAI coding over the last few days, we kept noticing that the activation-level on S-V-O verbs was going to zero immediately after the generation of a sentence of thought. It looked obvious to us that something in there was arbitrarily zeroing out the verbs. Last night we looked into

Re: [agi] organising parallel processes

2008-05-04 Thread a
Vladimir Nesov wrote: On Sun, May 4, 2008 at 11:09 AM, rooftop8000 <[EMAIL PROTECTED]> wrote: hi, I have a lot of parallel processes that are in control of their own activation (they can decide which processes are activated and for how long). I need some kind of organisation (a

Re: [agi] organising parallel processes

2008-05-04 Thread a
rooftop8000 wrote: hi, I have a lot of parallel processes that are in control of their own activation (they can decide which processes are activated and for how long). I need some kind of organisation (a simple example would be a hierarchy of processes that only activate downwards). I

Re: [agi] Why Symbolic Representation P.S.

2008-04-26 Thread a
Vladimir Nesov wrote: On Sat, Apr 26, 2008 at 12:52 AM, a <[EMAIL PROTECTED]> wrote: My approach of visual reasoning involves some form of searching for similar images. It associates images using spreading activation techniques to disambiguate vision and to speed up image ma

Re: [agi] Why Symbolic Representation P.S.

2008-04-25 Thread a
. Connections between nodes strengthen as they are simultaneously activated while preserving their context sensitivity. It is a bottom-up emergent approach that learns the basic visual features first so it can selectively concentrate on higher-level features, such as letters or words, while avoiding

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread a
Jim Bromer wrote: But the idea that vision is necessary for true advancements in AGI is not warranted by any hard evidence. This is significant since good computational vision systems have been around for years now. Vision systems programming suffers from the same kind of complexity problems tha

Re: [agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-24 Thread a
Russell Wallace wrote: What you say is true, but even though there's no sharp dividing line, the difference is still relevant. The best way I can think of to summarize the difference is between a program that deals with "The cat sat on the mat" or "SatOn(Cat, Mat)" on

Re: [agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-24 Thread a
Russell Wallace wrote: I don't think this is an accurate paraphrase of Mike's statement. "X is secret sauce" implies X to be _both necessary and sufficient_ (or at least that the other ingredients are trivial compared to X) - a type of claim AI has certainly seen plenty of.

Re: [agi] Other AGI-like communities

2008-04-23 Thread a
Ben Goertzel wrote: I wouldn't agree with such a strong statement. I think the grounding of ratiocination in image-ination is characteristic of human intelligence, and must thus be characteristic of any highly human-like intelligent system ... but, I don't see any reason to believe it

Re: [agi] Random Thoughts on Thinking...

2008-04-22 Thread A. T. Murray
Steve Richfield wrote: > > The process that we call "thinking" is VERY > different in various people. [...] [...] > Any thoughts? > > Steve Richfield The post above -- real food for thought -- was the most interesting post that I have ever read on the AGI list. Arthur T. Murray -- http://mentif

Re: [agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread A. T. Murray
Bob Mottram writes: > > Good advice. There are of course sometimes > people who are ahead of the field, Like Ben Goertzel (glad to send him a referral recently from South Africa on the OpenCog list :-) > but in conversation you'll usually find that the > genuine i

Re: [agi] Symbols

2008-03-31 Thread a
d uses feature extraction methods such as edge detection, motion detection, etc. The visual cortex does that function. This is like converting a bitmap image to vector images for better manipulation. It even discriminates objects by the use of probabilistic-like methods. The human mind does not do

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-31 Thread a
icant* subjective unimportant qualities are unclassified complexity is a product of two factor-independent symbols complexity is the incompatibility between input and output for example, random black and white dots is considered nonrandom because our senses overabstracts the dots as one unit i

Re: [agi] Artificial general intelligence

2008-02-27 Thread a
purposes and the latter to make a living ;-p The term "artificial general intelligence" is an oxymoron. That term is metaphysical, since there is no such thing as "general". This point is well-understood already. Hutter's theoretical analyses of AIX

Re: [agi] A Follow-Up Question re Vision.. P.S.

2008-02-26 Thread a
Mike Tintner wrote: Richard, Thanks for response. But it surely *is* still a puzzle as to how and indeed where that distorted image on the retina gets rectified and raises major questions about vision. No one as, I understand, has the answer. I am too ignorant to have a POV here - but my

[agi] Installing MindForth in a robot

2008-02-18 Thread A. T. Murray
Only robots above a certain level of sophistication may receive a mind-implant via MindForth. The computerized robot needs to have an operating system that will support Forth and sufficient memory to hold both the AI program code and a reasonably large knowledge base (KB) of experience. A

[agi] Can MindForth feel emotions?

2008-02-13 Thread A. T. Murray
From the rewrite-in-progress of the User Manual -- 1.5 Can MindForth feel emotions? When a robot is in love, it needs to feel a physiological response to its internal state of mind. Regardless of what causes the love, the robot will not experience what the ancient Greeks called dame

[agi] Is MindForth conscious?

2008-02-12 Thread A. T. Murray
From the rewrite-in-progress of the User Manual -- 1.4 Is MindForth conscious? MindForth has been engineered for artificial consciousness but most likely will not report its own consciousness unless it is installed in a robot body with a sufficient motorium and adequate sensorium to engen

[agi] Does MindForth think?

2008-02-11 Thread A. T. Murray
es Mind.Forth think, and what proof is there that Mind.Forth thinks? Mind.Forth thinks by having concepts at a deep level in the artificial mind, and by letting activation spread from one concept to another to another in a chain of thought under the guidance of a Chomskyan linguistic superstruc

Re: [agi] What is MindForth?

2008-02-10 Thread A. T. Murray
Joseph Gentle wrote on Sun, 10 Feb 2008, in a message now at http://www.mail-archive.com/agi@v2.listbox.com/msg09803.html > > On Feb 9, 2008 11:53 PM, A. T. Murray <[EMAIL PROTECTED]> wrote: >> It is not a chatbot. >> The AI engine is arguably the first True AI. It

[agi] History of MindForth

2008-02-10 Thread A. T. Murray
ail expressing his amazement that anyone would try to do AI in REXX. Mentifex mailed back the entire Mind.REXX source code. Another fellow, an IBM mainframe programmer, tried to port the Amiga Rexxmind to run on his IBM mainframe -- which would have been a Kitty-Hawk-to-Concorde leap -- but the R

[agi] What is MindForth?

2008-02-09 Thread A. T. Murray
From the rewrite-in-progress of the User Manual -- 1.1 What is MindForth? Mind.Forth AI is a rudimentary replica of the human mind programmed in the Forth programming language. The AI Mind is the software implementation of a theory of mind based on Chomskyan linguistics -- the rules

[agi] Using Mind.Forth in a CS AI course

2008-02-08 Thread A. T. Murray
From the rewrite-in-progress of the User Manual -- 1.6 Uses of MindForth 1.6.1 For a Computer Science course in artificial intelligence Just as a JavaScript program can be serverside or clientside, an AI Mind program can be teacher-side or student-side in an academic environment. If

[agi] Re: Mindforth and the Wright Brothers

2008-02-05 Thread A. T. Murray
orde at the first attempt, > you just have to get your plane off the ground and > show that it can travel any distance > at all under its own power. Let me sketch out a few not-so-obvious details here. When ATM/Mentifex here comes in and announces "MindForth achieves True AI

Re: [agi] The Test

2008-02-04 Thread A. T. Murray
Mike Tintner wrote in the message archived at http://www.mail-archive.com/agi@v2.listbox.com/msg09744.html > [...] > The first thing is that you need a definition > of the problem, and therefore a test of AGI. > And there is nothing even agreed about that - > although I th

Re: [agi] MindForth achieves True AI functionality

2008-01-26 Thread A. T. Murray
In response to Richard Loosemore below, > >A. T. Murray wrote: >> MindForth free open AI source code on-line at >> http://mentifex.virtualentity.com/mind4th.html >> has become a True AI-Complete thinking mind >> after years of tweaking and debugging. >>

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread a
e the knowledge capture into a game or something that people will do as entertainment. Possibly the Second Life approach will provide a new avenue for acquiring commonsense. On 19/01/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote: What's depressing is trying to get folks to build a co

[agi] MindForth 15.JAN.2008

2008-01-16 Thread A. T. Murray
Mind.Forth Programming Journal (MFPJ) Tues.15.JAN.2008 Yesterday on 14 January 2008 the basic scaffolding for the Moving Wave Algorithm of artificial intelligence was installed in Mind.Forth and released on the Web. Now it is time to clean up the code a little and to deal with some stray

Re: [agi] Readings in Analogy-Making

2008-01-13 Thread a
a wrote: Vladimir Nesov wrote: Peter Turney compiled a list of materials on analogy-making, which may be of interest to members of this list: http://apperceptual.wordpress.com/2007/12/20/readings-in-analogy-making/ Thank you very much for your link. Most of them are symbolic analogical

Re: [agi] Readings in Analogy-Making

2008-01-11 Thread a
Vladimir Nesov wrote: Peter Turney compiled a list of materials on analogy-making, which may be of interest to members of this list: http://apperceptual.wordpress.com/2007/12/20/readings-in-analogy-making/ Thank you very much for your link. Most of them are symbolic analogical reasoning

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread a
Benjamin Goertzel wrote: So, is your argument that digital computer programs can never be creative, since you have asserted that programmed AI's can never be creative Hard-wired AI (such as KB, NLP, symbol systems) cannot be creative. - This list is sponsored by AGIRI: http://www.agiri.org/

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread a
Benjamin Goertzel wrote: I don't really understand what you mean by "programmed" ... nor by "creative" You say that, according to your definitions, a GA is programmed and ergo cannot be creative... How about, for instance, a computer simulation of a human brain? T

[agi] MindForth AI updated 27.DEC.2007

2007-12-27 Thread A. T. Murray
Mind.Forth Programming Journal (MFPJ) Thurs.27.DEC.2007 http://tech.groups.yahoo.com/group/win32forth/message/13076 In Mind.Forth artificial intelligence for robots, as we try to make the AI Mind balk at thinking a thought for which it has insufficient knowledge, we need to coordinate a

[agi] Mind.Forth Programming Journal (MFPJ) 14.DEC.2007

2007-12-15 Thread A. T. Murray
After solving the aboriginal audRecog bug in 5dec07B.F, now we need to perform a few housekeeping details as we move on in the Mind.Forth coding. We must do the following. We must convert some of the 5dec07B.F troubleshooting messages into genuine diagnostic-mode messages. One way to proceed

Re: [agi] AGI and Deity

2007-12-09 Thread A. T. Murray
John G. Rose wrote: > > It'd be interesting, I kind of wonder about this > sometimes, if an AGI, especially one that is heavily > complex systems based would independently come up > with the existence some form of a deity. http://mind.sourceforge.net/theology.html is my

Re: [agi] None of you seem to be able ...

2007-12-06 Thread A. T. Murray
Mike Tintner wrote on Thu, 6 Dec 2007: > > ATM: >> http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype -- >> has just gone through a major bug-solving update, and is now much >> better at maintaining chains of continuous thought -- after the >> user ha

Re: [agi] Polyworld: Using Evolution to Design Artificial

2007-11-15 Thread A. T. Murray
ess my ideas so clearly as BenG does." To wit: > >About PolyWorld and Alife in general... > >I remember playing with PolyWorld 10 years ago or so And, I had a grad >student at Uni. of Western Australia build a similar system, back in my >Perth days... (it was called SEE,

[agi] Re: Bogus Neuroscience [...]

2007-10-22 Thread A. T. Murray
On Oct 21, 2007, at 6:47 PM, J. Andrew Rogers wrote: > >On Oct 21, 2007, at 6:37 PM, Richard Loosemore wrote: >> It took me at least five years of struggle to get to the point >> where I could start to have the confidence to call a spade a spade > > >It still looks lik

Re: [agi] Human memory and number of synapses.. P.S.

2007-10-21 Thread A. T. Murray
http://www.mail-archive.com/agi@v2.listbox.com/msg08026.html is where Ben Goertzel wrote stimuli evoking AGI list response. > Some semi-organized responses to points raised in this thread... > [...] > Furthermore, it seems to be the case that > the brain stores a lot of detai

Re: [agi] Poll

2007-10-20 Thread A. T. Murray
> [...] > Reigning orthodoxy of thought is *very hard* to dislodge, > even in the face of plentiful evidence to the contrary. Amen, brother! "Rem acu tetigisti!" That's why http://mentifex.virtualentity.com/theory5.html is like the small mammals scurrying beneath dinosaurs. ATM -- http://min

Re: [agi] Poll

2007-10-18 Thread A. T. Murray
Matt Mahoney wrote: > [...] > >> 4. How long to (a) and (b) if AI research continues >> more or less as it is doing now? > > It would make not a bit of difference. > There is already a US $66 trillion/year incentive > to develop AGI (the value of all human labo

Re: [agi] The Grounding of Maths

2007-10-13 Thread a
Are you trying to make an "intelligent" program or want to launch a singularity? I think you are trying to do the former, not the latter. I think you do not have a plan and are "thinking out loud". Chatting in this list is equivalent to "thinking out loud". T

Re: [agi] The Grounding of Maths

2007-10-13 Thread a
It is a waste of time arguing. We don't know the basic definitions of intelligence, "auditory grounding", etc. - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=86

Re: [agi] The Grounding of Maths

2007-10-13 Thread a
Bayesian nets, Copycat, Shruiti, Fair Isaac, and CYC, are a failure, probably because of their lack of grounding. According to Occam's Razor, the simplest method of grounding visual images is not words, but vision. As Albert Einstein said, "Make everything as simple as possible

Re: [agi] The Grounding of Maths

2007-10-13 Thread a
Mark Waser wrote: Only from your side. Science looks at facts. I have the irrefutable fact of intelligent blind people. You have nothing -- so you decide that it is an opinion thing. Tell me how my position is not cold, hard science. You are the one whose position is wholly faith with no

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
When I read “The plaintiff is an Illinois corporation selling services for the maintenance of photocopiers” it is probably not until I get to “photocopiers” than anything approaching a concrete image pops into my mind. I think the words may be subconscious and many people would get so used

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
Edward W. Porter wrote: In response to Charles Hixson’s 10/12/2007 7:56 PM post: Different people’s minds probably work differently. For me dredging up of memories, including verbal memories, is an important part of my mental processes. Maybe that is because I have been trained as a lawyer

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
Mark Waser wrote: You have shown me *ZERO* evidence that vision is required for intelligence and blind from birth individuals provide virtually proof positive that vision is not necessary for intelligence. How can you continue to argue the converse? It is my solid opinion that vision is requ

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
Look at the article and it mentions spatial and vision are interrelated: http://en.wikipedia.org/wiki/Visual_cortex - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=8660244&id_secret=531

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
Mark Waser wrote: Visualspatial intelligence is required for almost anything. I'm sorry. This is all pure, unadulterated BS. You need spatial intelligence (i.e. a world model). You do NOT need visual anything. The only way in which you need visual is if you contort its meaning

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
If you cannot explain it, then how do you know you do not do that? No offense, but autistic savants also have trouble describing their process when they do math. They have high visuospatial intelligence, but low verbal. Mathematicians have a high Autism Spectrum Quotient. [1] Mathematicians

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
Benjamin Goertzel wrote: On 10/12/07, *a* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote: Benjamin Goertzel wrote: > > So then you're reduced to arguing that mathematicians who don't feel > like they're visualizing when they prove thi

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
Benjamin Goertzel wrote: So then you're reduced to arguing that mathematicians who don't feel like they're visualizing when they prove things, are somehow unconsciously doing so. I meant visually manipulating mathematical expressions. - This list is sponsored by AGIRI: http://www.agiri.o

Re: [agi] The Grounding of Maths

2007-10-12 Thread a
Mathematician-level mathematics must be visually grounded. Without groundedness, simplified and expanded forms of expressions are the same, so there is no motive to simplify. If it is not visually grounded, then it will only reach the level of the top tier computer algebra systems (full of bugs

Re: [agi] Do the inference rules.. P.S.

2007-10-12 Thread a
Vladimir Nesov wrote: Generation of such abstract-description-based scenes can be a tedious process at start, involving calculations 'by hand' on part of AGI, but gradually through introduction of intermediate concepts this process will become more intuitive and finally world model

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread a
"In 2000, Hutter [21,22] proved that finding the optimal behavior of a rational agent is equivalent to compressing its observations. Essentially he proved Occam's Razor [23], the simplest answer is usually the correct answer." Vision is the simplest answer. - This list is spo

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread a
It's impossible for a human reading a book written in an exotic foreign language, so you are going too far. It's like cracking a Rijndael encrypted file with a 1000-bit key size, but worse. Infinite possible interpretations. John G. Rose wrote: This is how I "envi

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread a
Mark Waser wrote: Why can't echo-location lead to spatial perception without vision? Why can't touch? For instance, how can humans mentally manipulate or mentally rotate spatial objects without visualizing them? - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread a
Mark Waser wrote: spatial perception cannot exist without vision. How does someone who is blind from birth have spatial perception then? Vision is one particular sense that can lead to a 3-dimensional model of the world (spatial perception) but there are others (touch & echo-loca

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread a
Mark Waser wrote: I'll buy internal spatio-perception (i.e. a three-d world model) but not the visual/vision part (which I believe is totally unnecessary). Why is *vision* necessary for grounding or to completely "understand" natural language? My mistake. I misinterpreted the

Re: [agi] Re: [META] Re: Economic libertarianism .....

2007-10-11 Thread a
" <[EMAIL PROTECTED]> Reply-To: agi@v2.listbox.com To: agi@v2.listbox.com Subject: RE: [agi] Re: [META] Re: Economic libertarianism . Date: Thu, 11 Oct 2007 15:03:34 -0600 I agree though there may be some room for discussing AGI dealing with politics as a complex system. How an AGI would inter

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread a
Mark Waser wrote: Concepts cannot be grounded without vision. So . . . . explain how people who are blind from birth are functionally intelligent. It is impossible to completely "understand" natural language without vision. So . . . . you believe that blind-from-birth people don't complet

Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-11 Thread a
Yes, I think that too. On the practical side, I think that investing in AGI requires significant tax cuts, and we should elect a candidate that would do that (Ron Paul). I think that the government has to have more respect to potential weapons (like AGI), so we should elect a candidate who is

Re: [agi] Do the inference rules.. P.S.

2007-10-11 Thread a
I think that building a "human-like" reasoning system without /visual/ perception is theoretically possible, but not feasible in practice. But how is it "human like" without vision? Communication problems will arise. Concepts cannot be grounded without vision. It is impo

Re: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-09 Thread a
With googling, I found that older people have lower IQ http://www.sciencedaily.com/releases/2006/05/060504082306.htm IMO, the brain is like a muscle, not an organ. IQ is said to be highly genetic, and the heritability increases with age. Perhaps that older people do not have much mental

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread a
by cognitive biases of various kinds. On 06/10/2007, BillK <[EMAIL PROTECTED]> wrote: On 10/6/07, a wrote: A free market is just a nice intellectual theory that is of no use in the real world. No. Not true. Anti-competitive structures and monopolies won't exist in a true

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-06 Thread a
Linas Vepstas wrote: My objection to economic libertarianism is its lack of discussion of "self-organized criticality". A common example of self-organized criticality is a sand-pile at the critical point. Adding one grain of sand can trigger an avalanche, which can be small

Re: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-06 Thread a
's institutions, including the purchase of IQ tests.) I disagree with your theory. I primarily see the IQ drop as a result of the Flynn effect, not the age. - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/me

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread a
Linas Vepstas wrote: On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote: As to exactly how, I don't know, but since the AGI is, by assumption, peaceful, friendly and non-violent, it will do it in a peaceful, friendly and non-violent manner. I like to think of myse

[agi] Re: [aima-talk] Next edition?

2007-10-01 Thread A. T. Murray
Peter Norvig wrote: > > Yes, there will be. The authors are discussing > the process of writing a third edition now, > but don't yet have a schedule. > > -Peter Norvig > > On 10/1/07, per.nyblom <[EMAIL PROTECTED]> wrote: >> Will there be a next editi

Re: [agi] Video Mining

2007-08-05 Thread a
What you see is dependent on your reaction. How you react is dependent on what you see. Memory recall is a reaction. You are reacting to the image by recalling things relating to the image. Reaction is impossible if and only if you didn't see it. That means that not reacting to a stimu

Re: [agi] Video Mining

2007-08-05 Thread a
Bob Mottram wrote: it seems infeasible that 2D templates need to be created for every possible viewing angle and scale of an object I think this is similar to how our vision works. We have visual short term memory that seem to hold 2D templates for a few seconds. We have specialized

Re: [agi] Video Mining

2007-08-04 Thread a
I doubt "video analysis" will be AGI. What kinds of video should we "analyze"? But is "analysis" going to turn out to be AGI? The implementation I think must be holistic. What does "video analysis" mean? Is it just extracting the direction of motion or orientation? The machine must learn and a

Re: [agi] Passing an IQ test

2007-07-05 Thread a
>a> Sure, I can write a program to differentiate between a square and a circle, >a> but it is not AGI. I need the program to automatically train and >a> recognize different shapes. > >This is the most important question you have to ponder before >doing anything specif

[agi] Passing an IQ test

2007-07-04 Thread a
Hello, I have been trying to make an AGI program that passes spatial reasoning IQ tests such as Raven Progressive Matrices. Spatial reasoning IQ tests have shapes and colors. Our minds cannot manipulate the shapes exactly in the correct position. A certain degree of fuzziness is inevitable

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-23 Thread A. T. Murray
The scholar and gentleman Jean-Paul Van Belle wrote: > Universal compassion and tolerance are the ultimate > consequences of enlightenment which one Matt on the > list equated IMHO erroneously to high-orbit intelligence > methinx subtle humour is a much better proxy for intelligen
