I've taken a general look at PM's design and will go on record that it is very cool.
As far as the cloud rule-based issue goes, cloud analytics/Hadoop firms seem to be popping up everywhere; I was just reading about Cloudera the other day. I don't have the answer, though.

On 11/8/13, Matt Mahoney <[email protected]> wrote:
> On Fri, Nov 8, 2013 at 7:57 PM, Piaget Modeler
> <[email protected]> wrote:
>>
>> The article is free. I think scribd is trying to make some money for
>> itself. Did not know that. Sign up for a scribd account and you should
>> be able to download it for free, or instead I can e-mail it to you
>> personally if you prefer. Let me know which way you want to go.
>
> Why don't you just put the PDF file on your website?
>
>> The goal is to have the infant develop into a more mature general
>> intelligence.
>
> Are you going to raise it like a parent, or do you have some way to
> automate or speed up the training?
>
>> Some compression may be done on the audio/video streams, but largely
>> those percepts will be represented internally as monads.
>
> The retina compresses a 10^10 bit per second video stream down to
> about 10^7 bits per second transmitted over the optic nerve. By the
> time our visual perception reaches long-term episodic memory, it is
> compressed to about 5 bits per second. We know this is the case
> because of limits on our ability to recall images or to notice
> differences as measured in cognitive tests. But we have no idea how to
> do the later stages of this type of lossy compression.
>
> We do have a pretty good idea of what the retina and lower layers of
> the visual cortex are doing. We can use neural networks like Hawkins'
> HTM to model the pattern recognition capabilities we have observed in
> animal and human experiments. Presumably the higher layers are doing a
> similar thing, but with more complex patterns such as faces, words,
> and other familiar objects.
> But to model this, we need a human-brain-sized neural network
> (1 petaflop) and train it on years' worth of video, like 10^9 frames
> with 10^7 pixels each (10 petabytes). Our most ambitious experiments,
> like Google's cat face recognizer, fall far short of what a toddler
> can see. And that required 3 days of training on 8000 CPU cores on
> 10^-4 as much data.
>
> So how do you propose doing that with monads? Sure, they are elegant,
> but do you expect a 10^6 speedup?
>
> And, of course, such simple, highly repetitive structures as we
> typically use cannot model our genetically programmed fear of heights,
> snakes, and spiders. How do you code that into an untrained vision
> system?
>
>> Humans are very complex; PAM-P2 will be less so.
>
> Are you expecting human-level results? If not, then what results would
> you consider successful?
>
> -- Matt Mahoney, [email protected]
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
> -------------------------------------------
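As a quick sanity check on the figures in Matt's message, the arithmetic can be sketched as below. The one assumption not stated in the original is roughly 1 byte per pixel after modest compression of the raw video, which is what makes 10^9 frames of 10^7 pixels come out to about 10 petabytes:

```python
# Back-of-envelope check of the data-size and compression figures
# quoted above.

frames = 10**9            # ~years of video at tens of frames/second
pixels_per_frame = 10**7
bytes_per_pixel = 1       # assumption: ~1 byte/pixel after compression

total_bytes = frames * pixels_per_frame * bytes_per_pixel
petabytes = total_bytes / 10**15
print(f"training video: {petabytes:.0f} PB")

# Retina figures: 10^10 bit/s raw input vs. 10^7 bit/s on the optic
# nerve is a 1000x reduction; down to ~5 bit/s in episodic memory is
# about a 2x10^9 reduction overall.
print(f"optic nerve reduction: {10**10 // 10**7}x")
print(f"overall reduction: {10**10 / 5:.0e}x")
```

Nothing deep here, just confirming the quoted orders of magnitude are internally consistent.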
