Re: [agi] Study hints that fruit flies have free will

2008-01-22 Thread Philip Goetz
On May 16, 2007 10:05 AM, Mark Waser [EMAIL PROTECTED] wrote: Actually, a pretty good article in a very public place http://www.msnbc.msn.com/id/18684016/?GT1=9951 I don't get it. It says that flies move in accordance with a non-flat distribution instead of a flat distribution. That has
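What "non-flat instead of flat" means here can be illustrated with a toy comparison of turn-angle distributions. A minimal sketch, with made-up stand-in distributions (not the study's actual data):

```python
# Compare a flat (uniform) turn-angle distribution with a heavy-tailed one
# by histogram entropy. Both samples are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
flat = rng.uniform(-180, 180, 10_000)                            # pure noise
nonflat = np.clip(rng.standard_cauchy(10_000) * 10, -180, 180)   # heavy-tailed

for name, sample in (("flat", flat), ("non-flat", nonflat)):
    hist, _ = np.histogram(sample, bins=36, range=(-180, 180))
    p = hist / hist.sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    print(f"{name:8s} entropy {entropy:.2f} bits (flat maximum {np.log2(36):.2f})")
```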

Re: [agi] BMI/BCI Growing Fast

2007-12-22 Thread Philip Goetz
On Dec 14, 2007 9:07 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote: Just to be clear: I am sure that mindreading technology is coming, it's your relative timing estimate that perplexes me... Wasn't me that said it, but... If we define mindreading as knowing whether someone is telling the

Re: [agi] BMI/BCI Growing Fast

2007-12-22 Thread Philip Goetz
Oh - I haven't read the report, but I did look into the state of the art of BCI several months ago. Some things I remember: - Arrays can receive pulses from at most 100 neurons. - Wireless devices don't have enough bandwidth to transmit the pulses from more than about 100 neurons. Even doing
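A rough back-of-envelope sketch of why ~100 neurons saturates a wireless link when streaming raw spike waveforms; every figure below is an illustrative assumption, not a measured spec:

```python
# Back-of-envelope: bandwidth needed to stream raw spike waveforms from an
# implanted electrode array. All figures are illustrative assumptions.
neurons = 100            # channels on the array
sample_rate_hz = 30_000  # assumed per-channel waveform sampling rate
bits_per_sample = 10     # assumed ADC resolution

bits_per_second = neurons * sample_rate_hz * bits_per_sample
print(f"{bits_per_second / 1e6:.0f} Mbit/s")  # 30 Mbit/s for 100 neurons
```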

Re: [agi] Reinforcement Learning: An Introduction Richard S. Sutton and Andrew G. Barto

2007-06-23 Thread Philip Goetz
On 6/22/07, Bo Morgan [EMAIL PROTECTED] wrote: You make AGI sound like a members only club by this obligatory comment. ;) Reinforcement learning is a simple theory that only solves problems for which we can design value functions. Can you explain what you mean by a value function? If success
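For readers asking the same question: a minimal sketch of a value function in the RL sense, using a toy five-state chain and a TD(0) update (all parameters are arbitrary choices):

```python
# A value function V(s) estimates expected discounted return from each state.
# Toy chain 0 -> 1 -> 2 -> 3 -> 4 (goal), learned with the TD(0) update.
import random

random.seed(0)
states = [0, 1, 2, 3, 4]          # state 4 is the goal
V = {s: 0.0 for s in states}      # the value function: state -> expected return
alpha, gamma = 0.1, 0.9

for _ in range(5000):
    s = random.choice(states[:-1])           # start anywhere except the goal
    s_next = s + 1                           # deterministic "step right" policy
    reward = 1.0 if s_next == 4 else 0.0
    target = reward if s_next == 4 else reward + gamma * V[s_next]
    V[s] += alpha * (target - V[s])          # TD(0) update

print({s: round(v, 2) for s, v in V.items()})  # roughly 0.73, 0.81, 0.9, 1.0, 0.0
```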

Re: [agi] My proposal for an AGI agenda

2007-04-10 Thread Philip Goetz
On 4/10/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Mathematical code is exactly what's MOST EASILY OPTIMIZABLE using techniques such as exist in the Stalin Scheme compiler, or better yet in the Java supercompiler. Each numeric operation is of course no faster after being optimized or super

Re: [agi] My proposal for an AGI agenda

2007-04-09 Thread Philip Goetz
On 3/20/07, David Clark [EMAIL PROTECTED] wrote: Java has static typing and no introspection. It has no way of making programs of itself and then executing them. Multiple running programs require very expensive multi-threading and the huge mutex overhead for synchronization. Java has more

Re: [agi] My proposal for an AGI agenda

2007-04-09 Thread Philip Goetz
On 3/23/07, Samantha Atkins [EMAIL PROTECTED] wrote: 8. Fast where most of the processing is done. In the language or in things written in the language or both? Lisp has been interpreted and compiled simultaneously and nearly seamlessly for 20 years and has efficiency approaching compiled C

Re: [agi] My proposal for an AGI agenda

2007-04-09 Thread Philip Goetz
On 3/19/07, rooftop8000 [EMAIL PROTECTED] wrote: Hi, I've been thinking for a bit about how a big collaboration AI project could work. I browsed the archives and I see you guys have similar ideas. I'd love to see someone build a system that is capable of adding any kind of AI algorithm/idea

Re: [agi] My proposal for an AGI agenda

2007-04-09 Thread Philip Goetz
Some more notes on cognitive infrastructures: IKAROS (http://www.lucs.lu.se/IKAROS/index.html) IKAROS components correspond to brain areas, which are linked to each other through arrays of real variables that represent neurons. IKAROS is focused on representing the human brain accurately at a
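A minimal sketch of the IKAROS-style architecture described above: component objects standing in for brain areas, linked by arrays of reals. Names, sizes, and dynamics are made up for illustration:

```python
# IKAROS-style structure: each component models a brain area, and areas
# communicate only through arrays of real values. Dynamics are placeholders.
import numpy as np

class Component:
    """Stand-in for a brain area: reads input arrays, writes an output array."""
    def __init__(self, n_out):
        self.output = np.zeros(n_out)

    def tick(self, x):
        # Placeholder dynamics; a real module would implement an area model.
        self.output = np.tanh(x[:len(self.output)])

retina = Component(8)
v1 = Component(4)

retina.tick(np.random.default_rng(1).random(8))   # sense
v1.tick(retina.output)                            # areas linked by real arrays
print(v1.output)
```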

Re: [agi] Why C++ ?

2007-03-30 Thread Philip Goetz
On 3/23/07, Ben Goertzel [EMAIL PROTECTED] wrote: Additionally, we need real-time, very fast coordinated usage of multiple processors in an SMP environment. Java, for one example, is really slow at context switching between different threads. Java's threads are fairly heavy. You can use

Re: [agi] Project proposal: MindPixel 2

2007-01-26 Thread Philip Goetz
On 1/18/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Totally disagree! I actually examined a few cases of *real-life* commonsense inference steps, and I found that they are based on a *small* number of tiny rules of thought. I don't know why you think massive knowledge items are needed

Re: [agi] SOTA

2007-01-12 Thread Philip Goetz
On 1/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: http://www.thermostatshop.com/ Not sure what you've been Googling on but here they are. Haven't been googling. But the fact is that I've never actually /seen/ one in the wild. My point is that the market demand for such simple and

Re: [agi] SOTA

2007-01-11 Thread Philip Goetz
On 06/01/07, Gary Miller [EMAIL PROTECTED] wrote: I like the idea of the house being the central AI though and communicating to house robots through a wireless encrypted protocol to prevent inadvertent commands from other systems and hacking. This is the way it's going to go in my opinion.

Re: [agi] SOTA

2007-01-06 Thread Philip Goetz
On 1/6/07, Bob Mottram [EMAIL PROTECTED] wrote: Reflectors have been used on AGVs for quite some time. However, even using reflectors the robot has no real idea of what its environment looks like. Most of the time it's flying blind, guessing its way between reflectors, like a moth navigating

Re: [agi] SOTA

2007-01-05 Thread Philip Goetz
On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote: Grinding my own axe, I also think that stereo vision systems will bring significant improvements to robotics over the next few years. Being able to build videogame-like 3D models of the environment in real time is now a feasible proposition
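The geometry behind that claim is the standard stereo relation depth = focal_length * baseline / disparity; a tiny sketch with assumed camera parameters:

```python
# Standard stereo triangulation: z = f * B / d. Focal length, baseline,
# and the disparity values below are assumptions for illustration.
f_px = 700.0        # focal length in pixels (assumed)
baseline_m = 0.12   # camera separation (assumed)

for disparity_px in (70, 35, 7):
    depth_m = f_px * baseline_m / disparity_px
    print(f"disparity {disparity_px:3d} px -> depth {depth_m:5.2f} m")
```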

Re: [agi] A question on the symbol-system hypothesis

2006-12-26 Thread Philip Goetz
On 12/2/06, Matt Mahoney [EMAIL PROTECTED] wrote: I know a little about network intrusion anomaly detection (it was my dissertation topic), and yes it is an important lesson. The reason such anomalies occur is because when attackers craft exploits, they follow enough of the protocol to make

Re: [agi] Goals and subgoals

2006-12-24 Thread Philip Goetz
On 12/22/06, Ben Goertzel [EMAIL PROTECTED] wrote: I don't consider there is any correct language for stuff like this, but I believe my use of supergoal is more standard than yours... It's just that, on this list in particular, when people speak of supergoals, they're usually asking whether

Re: [agi] Goals and subgoals

2006-12-22 Thread Philip Goetz
On 12/7/06, Ben Goertzel [EMAIL PROTECTED] wrote: erased along with it. So, e.g. even though you give up your supergoal of drinking yourself to death, you may involuntarily retain your subgoal of drinking (even though you started doing it only out of a desire to drink yourself to death). I

Re: [agi] RSI - What is it and how fast?

2006-12-15 Thread Philip Goetz
On 12/13/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: Nope. I think, for example, that the process of evolution is universal -- it shows the key feature of exponential learning growth, but with a very slow clock. So there're other models besides a mammalian brain. My mental model is to

Re: [agi] Geoffrey Hinton's ANNs

2006-12-13 Thread Philip Goetz
On 12/8/06, Bob Mottram [EMAIL PROTECTED] wrote: Hinton basically seems to be using the same kind of architecture as Edelman, in that you have both bottom-up and top-down streams of information (or I often just call this feed-forward and feed-back to keep the terminology more consistent with

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-13 Thread Philip Goetz
On 12/5/06, Matt Mahoney [EMAIL PROTECTED] wrote: --- Eric Baum [EMAIL PROTECTED] wrote: Matt: We have slowed evolution through medical advances, birth control and genetic engineering, but I don't think we have stopped it completely yet. I don't know what reason there is to think

Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-13 Thread Philip Goetz
On 12/5/06, BillK [EMAIL PROTECTED] wrote: The good news is that Minsky appears to be making the book available online at present on his web site. *Download quick!* http://web.media.mit.edu/~minsky/ See under publications: The Emotion Machine, chapters 1 to 9.

Re: [agi] RSI - What is it and how fast?

2006-12-13 Thread Philip Goetz
On 12/8/06, J. Storrs Hall [EMAIL PROTECTED] wrote: If I had to guess, I would say the boundary is at about IQ 140, so the top 1% of humanity is universal -- but that's pure speculation; it may well be that no human is universal, because of inductive bias, and it takes a community to search the

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote: Why must you argue with everything I say? Is this not a sensible statement? I don't argue with everything you say. I only argue with things that I believe are wrong. And no, the statements You cannot turn off hunger or pain. You cannot

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz
On 12/4/06, Ben Goertzel [EMAIL PROTECTED] wrote: The statement, "You cannot turn off hunger or pain" is sensible. In fact, it's one of the few statements in the English language that is LITERALLY so. Philosophically, it's more certain than "I think, therefore I am." If you maintain your

Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Philip Goetz
On 12/3/06, Mark Waser [EMAIL PROTECTED] wrote: This sounds very Searlian. The only test you seem to be referring to is the Chinese Room test. You misunderstand. The test is being able to form cognitive structures that can serve as the basis for later more complicated cognitive structures.

Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Philip Goetz
On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Friday 01 December 2006 20:06, Philip Goetz wrote: Thus, I don't think my ability to follow rules written on paper to implement a Turing machine proves that the operations powering my consciousness are Turing-complete. Actually, I

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz
On 12/4/06, Philip Goetz [EMAIL PROTECTED] wrote: If you maintain your assertion, I'll put you in my killfile, because we cannot communicate. Richard Loosemore told me that I'm overreacting. I can tell that I'm overly emotional over this, so it might be true. Sorry for flaming. I am

Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Philip Goetz
On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote: A nice story but it proves absolutely nothing . . . . . It proves to me that there is no point in continuing this debate. Further, and more importantly, the pattern matcher *doesn't* understand its results either and certainly could build upon

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Philip Goetz
On 12/2/06, Richard Loosemore [EMAIL PROTECTED] wrote: I am disputing the very idea that monkeys (or rats or pigeons or humans) have a part of the brain which generates the reward/punishment signal for operant conditioning. Well, there is a part of the brain which generates a

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Philip Goetz
On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote: Philip Goetz snidely responded Some people would call it repeating the same mistakes I already dealt with. Some people would call it continuing to disagree. :) Richard's point was that the poster was simply repeating previous points

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Philip Goetz
On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote: With many SVD systems, however, the representation is more vector-like and *not* conducive to easy translation to human terms. I have two answers to these cases. Answer 1 is that it is still easy for a human to look at the closest matches to
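A minimal sketch of "look at the closest matches" in an SVD/LSA-style system: project a toy term-document matrix and rank terms by cosine similarity. The matrix and vocabulary are made up for illustration:

```python
# LSA-style inspection: reduce a term-by-document matrix with SVD, then list
# the terms nearest a query term in the reduced space. Toy data throughout.
import numpy as np

vocab = ["bird", "fly", "wing", "car", "road"]
X = np.array([[2, 0, 1],       # toy term-by-document counts
              [1, 0, 1],
              [1, 0, 0],
              [0, 2, 0],
              [0, 1, 1]], dtype=float)

U, S, Vt = np.linalg.svd(X, full_matrices=False)
terms = U[:, :2] * S[:2]                      # 2-dimensional term vectors

q = terms[vocab.index("bird")]
sims = terms @ q / (np.linalg.norm(terms, axis=1) * np.linalg.norm(q))
for i in np.argsort(-sims):
    print(f"{vocab[i]:5s} {sims[i]:+.2f}")    # closest matches first
```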

Re: [agi] RSI - What is it and how fast?

2006-12-01 Thread Philip Goetz
On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: I don't think so. The singulatarians tend to have this mental model of a superintelligence that is essentially an analogy of the difference between an animal and a human. My model is different. I think there's a level of universality,

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/28/06, Matt Mahoney [EMAIL PROTECTED] wrote: First order logic (FOL) is good for expressing simple facts like "all birds have wings" or "no bird has hair", but not for statements like "most birds can fly". To do that you have to at least extend it with fuzzy logic (probability and
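A minimal sketch of the kind of extension meant here: attach a degree of truth to each predicate instead of a crisp boolean. The representation is hypothetical, purely to make the point concrete:

```python
# Extending crisp predicates with a degree of truth, so "most birds can fly"
# becomes a graded statement rather than a FOL axiom. Hypothetical scheme.
facts = {
    ("bird", "has_wings"): 1.0,   # "all birds have wings"
    ("bird", "has_hair"):  0.0,   # "no bird has hair"
    ("bird", "can_fly"):   0.9,   # "most birds can fly" -- not expressible in pure FOL
}

def truth(subject: str, predicate: str) -> float:
    """Return a degree of truth in [0, 1] rather than a crisp True/False."""
    return facts.get((subject, predicate), 0.5)   # 0.5 = unknown

print(truth("bird", "can_fly"))   # 0.9
```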

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
Oops - looking back at my earlier post, I said that English sentences translate neatly into predicate logic statements. I should have left out "logic". I like using predicates to organize sentences. I made that post because Josh was pointing out some of the problems with logic, but then making

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/14/06, Mark Waser [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Models that are simple enough to debug are too simple to scale. The contents of a knowledge base for AGI will be beyond our ability to comprehend. Given sufficient time, anything should be able to be understood and

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote: I defy you to show me *any* black-box method that has predictive power outside the bounds of its training set. All that the black-box methods are doing is curve-fitting. If you give them enough variables they can brute force solutions through
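The claim is easy to demonstrate with a toy experiment: fit a curve inside a training range, then evaluate it outside. The target function, noise level, and polynomial degree are arbitrary choices:

```python
# Curve-fitting works inside the training range and can fail badly outside
# it. Fit a degree-9 polynomial to noisy sin(x) on [0, 3], then extrapolate.
import numpy as np

rng = np.random.default_rng(2)
x_train = np.linspace(0, 3, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 30)

p = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)   # "black box" fit

for x in (1.5, 6.0):   # inside vs. far outside the training range
    # Inside, the fit tracks sin(x); outside, it is typically off by orders
    # of magnitude.
    print(f"x = {x}: fit {p(x):12.2f}   true {np.sin(x):5.2f}")
```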

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote: If you look into the literature of the past 20 years, you will easily find several thousand examples. I'm sorry but either you didn't understand my point or you don't know what you are talking about (and the constant terseness of your

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Wednesday 29 November 2006 16:04, Philip Goetz wrote: On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: There will be many occurrences of the smaller subregions, corresponding to all different sizes and positions of Tom's

Re: [agi] Funky Intel hardware, a few years off...

2006-11-29 Thread Philip Goetz
On 10/31/06, Ben Goertzel [EMAIL PROTECTED] wrote: This looks exciting... http://www.pcper.com/article.php?aid=302&type=expert&pid=1 A system Intel is envisioning, with 100 tightly connected cores on a chip, each with 32MB of local SRAM ... If you want to go in that direction, you can start

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Philip Goetz
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote: The goal-stack AI might very well turn out simply not to be a workable design at all! I really do mean that: it won't become intelligent enough to be a threat. Specifically, we may find that the kind of system that drives itself using

Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote: I was saying that *because* (for independent reasons) these people's usage of terms like intelligence is so disconnected from commonsense usage (they idealize so extremely that the sense of the word no longer bears a reasonable connection

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/24/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote: You talked mainly about how sentences require vast amounts of external knowledge to interpret, but it does not imply that those sentences cannot be represented in (predicate)

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/27/06, Ben Goertzel [EMAIL PROTECTED] wrote: An issue with Hopfield content-addressable memories is that their memory capability gets worse and worse as the networks get sparser and sparser. I did some experiments on this in 1997, though I never bothered to publish the results ... some
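A minimal sketch of the kind of experiment described: train a Hopfield net with the Hebbian rule, randomly prune weights, and count how many stored patterns remain stable. Sizes and sparsity levels are assumed, and the random pruning makes the weights slightly asymmetric:

```python
# Hopfield capacity vs. sparsity: store random patterns with the Hebbian
# rule, zero out a fraction of the weights, and test pattern stability.
import numpy as np

rng = np.random.default_rng(3)
n, n_patterns = 100, 10
patterns = rng.choice([-1, 1], size=(n_patterns, n))

W = sum(np.outer(p, p) for p in patterns) / n    # Hebbian weight matrix
np.fill_diagonal(W, 0)

for sparsity in (0.0, 0.5, 0.9):
    Ws = W * (rng.random((n, n)) > sparsity)     # randomly prune connections
    recalled = 0
    for p in patterns:
        s = p.astype(float)
        for _ in range(20):                      # synchronous updates
            s = np.sign(Ws @ s)
            s[s == 0] = 1
        recalled += np.array_equal(s, p)
    print(f"sparsity {sparsity:.1f}: {recalled}/{n_patterns} patterns stable")
```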

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/26/06, Pei Wang [EMAIL PROTECTED] wrote: Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most natural and

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
... as predicates magically provides semantics. On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Tuesday 28 November 2006 14:47, Philip Goetz wrote: The use of predicates for representation, and the use of logic for reasoning, are separate issues. I think it's pretty clear that English

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
Oops, Matt actually is making a different objection than Josh. Now it seems to me that you need to understand sentences before you can translate them into FOL, not the other way around. Before you can translate to FOL you have to parse the sentence, and before you can parse it you have to

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: Sorry -- should have been clearer. Constructive Solid Geometry. Manipulating shapes in high- (possibly infinite-) dimensional spaces. Suppose I want to represent a face as a point in a space. First, represent it as a raster. That is in
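The raster-as-point idea in two lines: flatten each image into one long vector, so every face is a point in R^(H*W) and similarity is just distance. The image data below is random, for illustration only:

```python
# "A face as a point in a space": flatten a raster into a vector, making
# each image one point in a 4096-dimensional space. Random data stands in
# for real face images here.
import numpy as np

rng = np.random.default_rng(4)
face_a = rng.random((64, 64))       # two hypothetical 64x64 grayscale rasters
face_b = rng.random((64, 64))

point_a = face_a.ravel()            # a point in R^4096
point_b = face_b.ravel()

print(np.linalg.norm(point_a - point_b))   # similarity as distance in that space
```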

Re: [agi] Natural versus formal AI interface languages

2006-11-28 Thread Philip Goetz
On 11/9/06, Eric Baum [EMAIL PROTECTED] wrote: It is true that much modern encryption is based on simple algorithms. However, some crypto-experts would advise more primitive approaches. RSA is not known to be hard; even if P!=NP, someone may find a number-theoretic trick tomorrow that factors.

Re: [agi] SOTA

2006-10-20 Thread Philip Goetz
On 10/19/06, Olie Lamb [EMAIL PROTECTED] wrote: For instance, the soccer-bots get better every year, cars can now finish DARPA Grand Challenge-like events in reasonable time... (I personally think that we're fast approaching a critical point where the technology is just good enough to attract

Re: [agi] SOTA

2006-10-20 Thread Philip Goetz
On 10/20/06, Josh Treadwell [EMAIL PROTECTED] wrote: The resembling system is only capable of processing information based on algorithms, and not reworking an algorithm based on the reasoning for executing the function. This appears to be the same argument Spock made in an old Star Trek

Re: [agi] SOTA

2006-10-20 Thread Philip Goetz
On 10/19/06, Peter Voss [EMAIL PROTECTED] wrote: I'm often asked about state-of-the-art in AI, and would like to get some opinions. What do you regard, or what is generally regarded as SOTA in the various AI aspects that may be, or may be seen to be relevant to AGI? - NLP components such as

Re: [agi] method for joining efforts

2006-10-20 Thread Philip Goetz
Commercially, I'm not sure if OS or CS is better. Remember Steve Jobs' Apple lost the PC market to IBM because IBM provided a more open architecture (in addition to the fact that IBM had more resources). We need to be careful not to lose the same way... Remember also that IBM lost its OWN

Re: [agi] AGI open source license

2006-09-05 Thread Philip Goetz
On 9/4/06, Charles D Hixson [EMAIL PROTECTED] wrote: Philip Goetz wrote: It is a good idea, for these reasons: 1. The money would be paid to the people who wrote the software. Under the GPL model you're promoting, the authors get nothing. The GPL does not prohibit you from selling

Re: [agi] AGI open source license

2006-09-04 Thread Philip Goetz
On 8/30/06, Charles D Hixson [EMAIL PROTECTED] wrote: ... some snipping ... - Phil The idea with the GPL is that if you want to also sell the program commercially, you should additionally make it available under an alternate license. Some companies have been successful in this mode.

Re: [agi] AGI open source license

2006-09-04 Thread Philip Goetz
On 9/1/06, Stephen Reed [EMAIL PROTECTED] wrote: Rather than cash payments I have in mind a scheme similar to the pre-World Wide Web bulletin board systems, in which FTP sites had upload and download ratios. If you wished to benefit from the site by downloading, you had to maintain a certain

Re: [agi] AGI open source license

2006-08-30 Thread Philip Goetz
On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote: An assumption that some may challenge is that AGI software should be free in the first place. I think that this approach has proved useful for both software (e.g. MySQL database) and knowledge (Wikipedia). Could additional terms and conditions

MAGIC (was Re: [agi] AGI open source license)

2006-08-30 Thread Philip Goetz
Wilbur Peng and I developed a set of standards for AGI components, called MAGIC, that was intended to form the foundation of an open-source AGI effort. Unfortunately, the company decided not to make MAGIC open-source, rather losing sight of the entire purpose of the project. I can describe MAGIC,

Re: [agi] Lossy ** lossless compressi

2006-08-27 Thread Philip Goetz
On 8/25/06, Matt Mahoney [EMAIL PROTECTED] wrote: As I stated earlier, the fact that there is normal variation in human language models makes it easier for a machine to pass the Turing test. However, a machine with a lossless model will still outperform one with a lossy model because the

Re: [agi] Lossy ** lossless compressio

2006-08-25 Thread Philip Goetz
On 8/20/06, Matt Mahoney [EMAIL PROTECTED] wrote: The argument for lossy vs. lossless compression as a test for AI seems to be motivated by the fact that humans use lossy compression to store memory, and cannot do lossless compression at all. The reason is that lossless compression requires

Re: [agi] Lossy ** lossless compressio

2006-08-25 Thread Philip Goetz
On 8/20/06, Matt Mahoney [EMAIL PROTECTED] wrote: Uncompressed video would be the absolutely worst type of test data. Uncompressed video is about 10^8 to 10^9 bits per second. The human brain has a long term learning rate of around 10 bits per second. So all the rest is noise. How are you
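The arithmetic behind "all the rest is noise," using exactly the figures quoted in the message:

```python
# If long-term learning retains ~10 bits/s out of ~10^8-10^9 bits/s of raw
# video, almost everything is discarded. Figures are as quoted above.
video_bps = 1e8      # low end of the quoted range for uncompressed video
learned_bps = 10     # quoted long-term learning rate

print(f"fraction retained: {learned_bps / video_bps:.0e}")   # 1e-07
```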

Re: [agi] confirmation paradox

2006-08-15 Thread Philip Goetz
A further example is: S1 = The fall of the Roman empire is due to Christianity. S2 = The fall of the Roman empire is due to lead poisoning. I'm not sure whether S1 or S2 is more true. But the question is how can you define the meaning of the NTV associated with S1 or S2? If we can't, why

Re: [agi] confirmation paradox

2006-08-15 Thread Philip Goetz
On 8/15/06, Ben Goertzel [EMAIL PROTECTED] wrote: Phil, I see no conceptual problems with using probability theory to define context-dependent or viewpoint-dependent probabilities... Regarding YKY's example, causation is a subtle concept going beyond probability (but strongly probabilistically

Re: Goertzel/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Philip Goetz
On 8/15/06, Mark Waser [EMAIL PROTECTED] wrote: Ben: Conceptually, a better (though still deeply flawed) contest would be: Compress this file of advanced knowledge, assuming as background knowledge this other file of elementary knowledge, in terms of which the advanced knowledge is defined.

Re: Goertzel/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Philip Goetz
On 8/15/06, Matt Mahoney [EMAIL PROTECTED] wrote: Ben wrote: Conceptually, a better (though still deeply flawed) contest would be: Compress this file of advanced knowledge, assuming as background knowledge this other file of elementary knowledge, in terms of which the advanced knowledge is

Re: Goetz/Goertzel/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Philip Goetz
On 8/15/06, Mark Waser [EMAIL PROTECTED] wrote: Actually, instructing the competitors to compress both the OpenCyc corpus AND then the Wikipedia sample in sequence and measuring the size of both *would* be an interesting and probably good contest. I think it would be more interesting for it to

Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Philip Goetz
I proposed knowledge-based text compression as a dissertation topic, back around 1991, but my advisor turned it down. I never got back to the topic because there wasn't any money in it - text is already so small, relative to audio and video, that it was clear that the money was in audio and

Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Philip Goetz
On 8/15/06, Matt Mahoney [EMAIL PROTECTED] wrote: I realize it is tempting to use lossy text compression as a test for AI because that is what the human brain does when we read text and recall it in paraphrased fashion. We remember the ideas and discard details about the expression of those

Re: Goetz/Goertzel/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize

2006-08-15 Thread Philip Goetz
On 8/15/06, Mark Waser [EMAIL PROTECTED] wrote: I think it would be more interesting for it to use the OpenCyc corpus as its knowledge for compressing the Wikipedia sample. The point is to demonstrate intelligent use of information, not to get a wider variety of data. :-) My assumption is

Re: [agi] Re: Google wins

2006-07-31 Thread Philip Goetz
On 7/31/06, Ben Goertzel [EMAIL PROTECTED] wrote: Google's data will be accessible to any AI anywhere, right? No. Google's data will be /searchable/, only in that any AI anywhere can submit a search expression, and get back a few scraps of the text. But the data is still under copyright,

[agi] Bayes in the brain

2006-06-11 Thread Philip Goetz
An article with an opposing point of view from the one I mentioned yesterday... http://www.bcs.rochester.edu/people/alex/pub/articles/KnillPougetTINS04.pdf

Re: [agi] information in the brain?

2006-06-09 Thread Philip Goetz
On 6/9/06, Eugen Leitl [EMAIL PROTECTED] wrote: Most of information is visual, and retina purportedly compresses 1:126 (obviously, some of it lossy). http://www.4colorvision.com/dynamics/mechanism.htm claims 23000 receptor cells on the foveola, so I would just do a rough calculation of some 50
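A sketch of the rough calculation being proposed; since the message is truncated, the "50" is read here as an assumed ~50 bits/s per receptor cell:

```python
# Rough information-rate estimate for the foveola: receptor count times a
# per-cell rate, then the quoted 1:126 retinal compression. The 50 bits/s
# per cell is an assumption -- the original message is cut off.
receptors = 23_000
bits_per_cell_per_s = 50

raw_bps = receptors * bits_per_cell_per_s
print(f"foveola: ~{raw_bps / 1e6:.2f} Mbit/s raw, "
      f"~{raw_bps / 126 / 1e3:.0f} kbit/s after 1:126 compression")
```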

[agi] Neural representations of negation and time?

2006-06-09 Thread Philip Goetz
Various people have the notion that events, concepts, etc., are represented in the brain as a combination of various sensory percepts, contexts, subconcepts, etc. This leads to a representational scheme in which some associational cortex links together the sub-parts making up a concept or a

[agi] list vs. forum

2006-06-09 Thread Philip Goetz
Why do we have both an email list and a forum? Seems they both serve the same purpose.

[agi] Grossberg: A brain without Bayes

2006-06-08 Thread Philip Goetz
An interesting abstract, from a talk presented at the 10th intl. conf on cognitive neural systems last month: A brain without Bayes: Temporal dynamics of decision-making during form-motion perception by the laminar circuits of visual cortex, by Praveen Pilly and Stephen Grossberg

[agi] information in the brain?

2006-06-08 Thread Philip Goetz
Does anyone know how to compute how much information, in bits, arrives at the frontal lobes from the environment per second in a human? For a specific brain region, you can compute its channel capacity if you know the number of neurons, and the refractory period of the neurons in that region,
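One standard upper-bound version of that calculation: treat each neuron as able to spike at most once per refractory period, carrying at most one bit per slot. The neuron count and refractory period below are assumptions:

```python
# Channel-capacity upper bound for a brain region: each neuron has
# 1/t_refractory spike slots per second, each slot spike/no-spike = 1 bit,
# so capacity <= N / t_refractory bits/s. Figures are assumptions.
neurons = 1_000_000      # neurons projecting into the region (assumed)
refractory_s = 0.002     # 2 ms absolute refractory period (assumed)

capacity_bps = neurons * (1 / refractory_s)
print(f"upper bound: {capacity_bps:.2e} bits/s")   # 5.00e+08 bits/s
```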

Re: [agi] procedural vs declarative knowledge

2006-06-05 Thread Philip Goetz
On 5/30/06, Yan King Yin [EMAIL PROTECTED] wrote: It seems that your approach is to store the function add(x,y) directly *inside* a node. This destroys the nice uniformity of the KR. Secondly, the AGI should be able to process addition just like ANY other concept. add(x,y) is inside a node
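A minimal sketch of the design being objected to: a knowledge node holding an executable procedure directly, next to ordinary declarative nodes. The node structure is hypothetical, just to make the objection concrete:

```python
# A KR node that stores a procedure (here add(x, y)) inside it, alongside
# purely declarative nodes. The procedure works, but it is opaque to the
# same reasoning machinery that handles every other concept.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Node:
    name: str
    links: list = field(default_factory=list)   # declarative structure
    proc: Optional[Callable] = None             # procedural payload

add_node = Node("add", proc=lambda x, y: x + y)       # procedure inside a node
bird_node = Node("bird", links=["flies", "has_wings"])  # ordinary concept node

print(add_node.proc(2, 3))   # 5 -- computed, not reasoned about
```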

Re: [agi] AGIRI Summit

2006-05-31 Thread Philip Goetz
On the subject of declarative memories vs procedural ones, I've come across accounts of patients who lost their declarative memory totally (the common amnesia), but retained procedural memory. For example, the patient was able to drive or dine with forks and knives etc but forgot everything that

Re: [agi] Re: Superrationality

2006-05-26 Thread Philip Goetz
On 5/25/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: Ben Goertzel wrote: I wonder if anyone knows of any mathematical analysis of superrationality. I worked out an analysis based on correlated computational processes - you treat your own decision system as a special case of computation

Re: [agi] Re: Superrationality

2006-05-26 Thread Philip Goetz
Tell me if this is also a superrationality-type issue: I commented to Eliezer that, during the last panel of the conference, I looked around for Eliezer but didn't find him, and wondered if there was a bomb in the room. He replied something to the effect that he has a strong commitment to ethics.