--- Ben Goertzel <[EMAIL PROTECTED]> wrote:

> Well, off-the-cuff, here are some biases that can be stated without
> reference to underlying machinery:
>
> -- Assume perceptual inputs refer to a 3D space
We don't really know whether 3D models of space are innate or learned, and it would be hard to design an experiment to find out (raising an organism in 2D or 4D space). People also differ in their spatial reasoning abilities.

> -- Bias perceptual pattern search toward patterns among nearby percepts

Probably so. Look at the spatial organization of the visual cortex. Also, animals tend to learn associations between events separated by time t at a rate proportional to 1/t.

> -- Assume there will be persistent objects in the 3D space

This is not innate. Babies do not recognize that an object hidden from view still exists.

My point is that inductive biases might be very complex. There is no simple list. Psychologists have been studying this for years.

My point about bees knowing how to build hives is that a great deal of knowledge can be encoded in DNA. Bees cannot learn; humans can. Humans aren't born knowing how to build houses, but they are born knowing how to learn the complex set of skills needed to build houses. This is a harder problem. There is no simple enumeration of what can or can't be learned. For example, we know that humans can learn languages with certain properties but not others, yet there is currently no simple test we can apply to an arbitrary language to determine whether a human could learn it.

> > AGI might still be harder than we think. It has happened before.
>
> Of course it might be -- we can't know for sure till the task of
> building an AGI is successfully done.

Maybe we can. We tried the other approach in the late 1950s, and now we know why it failed. I don't mean to be pessimistic, but we ought at least to estimate the difficulty of the problem. How much information do we need, in bits? The inductive bias (encoded in DNA) is almost certainly less than the learned knowledge (encoded in synapses), but it is still a big number. We need a map to guide us to those problems for which success is likely.
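A back-of-the-envelope sketch of that bits estimate, using commonly cited order-of-magnitude figures for human genome size and synapse count; all of the constants below are illustrative assumptions, not measurements, and the bits-per-synapse figure in particular is a crude placeholder:

```python
# Rough information estimates for innate vs. learned knowledge.
# Every constant here is an order-of-magnitude assumption.

GENOME_BASES = 3e9       # haploid human genome, base pairs (assumption)
BITS_PER_BASE = 2        # 4 nucleotides -> 2 bits each
SYNAPSES = 1e14          # commonly cited synapse count (assumption)
BITS_PER_SYNAPSE = 1     # crude placeholder for storable bits (assumption)

# Upper bound on the inductive bias encoded in DNA
dna_bits = GENOME_BASES * BITS_PER_BASE

# Rough bound on learned knowledge encoded in synapses
synapse_bits = SYNAPSES * BITS_PER_SYNAPSE

print(f"DNA upper bound:      ~{dna_bits:.0e} bits (~{dna_bits / 8 / 1e9:.2f} GB)")
print(f"Synaptic upper bound: ~{synapse_bits:.0e} bits")
print(f"Learned/innate ratio: ~{synapse_bits / dna_bits:.0f}x")
```

Even under these generous simplifications, the innate part alone is on the order of a gigabyte, which supports the point that it is "still a big number" while remaining far smaller than the learned part.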
It might tell us, for example, that the most promising approach is to extract the inductive bias from DNA analysis, from dissections of the brain, or from animal experiments, rather than trying to figure it out from scratch. It might tell us that some problems are easier than others; e.g., it might be easier to build an AGI that manages a corporation than an AGI that can distinguish good music from bad. We are not yet at the point where we can model the brains of insects. We need to consider the possibility that human brains are more complex.

-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303