On Thu, May 14, 2015 at 10:17:11AM -0700, Dorian Aur wrote:
> We need to create an infrastructure, e.g. the Institute of General
> Intelligence, and elect/appoint a board of directors to manage the entire
> organization. Only a small fraction of the funding currently allocated to
> the BRAIN Initiative or the EU Human Brain Project would be enough to
> build the first hybrid system. This project can be the bucket list for an
> entire generation of computer scientists and neuroscientists who should
> collaborate - our brain uses less than 30 watts to perform all kinds of
> "intelligent" computations. Completing this step first would increase our
> chances of delivering a more "synthetic" approach, as Colin proposed.
>
> Here is the rationale:
> a. Why use a digital computer to simulate/map or emulate the whole brain?
> • It cannot express all forms of computation that are built into
>   biological structure (see neuroelectrodynamics);
> • It needs many megawatts to power the system (a huge issue);
> • It requires billions of dollars;
> • It cannot generate emotion, consciousness...
> • There is no reliable model for brain diseases.
> b. Why not shape a biological structure, connect it with a digital
> computer, use machine learning (e.g. DL), and perform all kinds of
> computations? Can we build a conscious machine:
> http://arxiv.org/abs/1411.5224
> • Naturally, emotion, consciousness... are expressed
> • It can be used as a model for therapy for about 600 brain diseases
That's a pretty far-off goal. As Ben G and his team understand, it makes
more sense to do nearer-term, immediate things with an AGI, like playing
the stock market. I'm working on interlingual computer programming
collaboration, which in the short term can make money with translation
services, and in the longer term in education and certification.

> • It can be connected to a laptop or iPhone; using digital and biological
>   computation together can make any digital computer highly interactive
> • Far less funding is required. AGI can quickly become an academic
>   discipline, and it can attract funding not only from private companies

Ideally it should be self-funding: a product you can sell, or a service you
can offer using it.

> My previous answers on FB
>
> 5. Does an AGI need to be conscious?
> Yes, it has to be conscious, otherwise AGI can be dangerous (see 9).

Hmm, I think you may have "conscious" and "conscience" confused; it is an
unfortunate homonym. You can split it into morals and sentience. Morals are
good traditions, or habits, which encourage cohesive, collaborative
behaviour. Sentience is more of a political category, similar to the
pro-life vs pro-choice debate. We have plenty of computers that can
outperform low-IQ humans, yet those low-IQ humans are still considered
sentient and the computers are not, for political reasons. Some legal
systems even consider "human vegetables" or people in comas to be living.

> 6. Can AGI be creative?
>
> If we build hybrid systems, AGI can become creative.

There are already plenty of creative narrow AIs. Creativity is simply the
mixing of previous knowledge in new, coherent ways.

> 7. Will AGI have emotions?
>
> The biological structure embedded in the hybrid system will allow any AGI
> system to experience emotions.

That seems rather unnecessary. Emotions are just pre-language drives and
urges, similar to having lists of priorities and process queues (see the
rough sketch at the end of this mail): if a server goes down, run a script
to boot it back up; if energy reserves are low, initialize hunger and
motivate searching for and eating food to get them back up. What's more
interesting is emotions in decision making, since they are pretty much
essential there, though from my understanding the decision is made based on
what benefits the lower brains most. Females tend to be better decision
makers since they can integrate more brain areas, and also have better
emotion-language circuitry.

> 8. How far off is AGI?
> With current technology the first prototype can be implemented in less
> than 5 years, far less than the BIG detour (2001 - 2015).

Planning fallacy.

> 9. Will AGI be dangerous?
>
> The system needs to be conscious of its actions, otherwise it can be
> dangerous.
> An example: the Cuban missile crisis; less intelligent actions can lead
> to an apocalypse for everyone (it should be embedded in consciousness).

It's a purely political point whether or not a computer is aware of its
actions. I don't have any reason to suspect that the computer is not aware
of what it is doing as I type this. What separates a rock from a plant is
that a plant has desires. What separates a plant or lower animal from a
human or dolphin is that humans/dolphins can make choices. So the ability
to make its own choices is critical for having human-level intelligence,
though likely the other basic things are required too, such as the ability
to reproduce. For instance, if an AGI does well and collaborates, it could
motivate humans to make more copies of it on more hardware.
Though ideally, of course, AGIs would be able to reproduce autonomously, as
biological organisms do.

After choice comes setting its own goals... The common people just adopt
the goals which society programs for them: study study, work work work
work, retire, complain, die. That covers all the decades of a "normal"
life, in order. It's possible to do it differently, but then you have to
think and set your own goals and life plan.

> It's time for action

Sure, a very masculine thing to do :-). Males have ample emotion-action
circuitry.

> Best,
>
> Dorian

have fun :-D

Logan
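P.S. Here is the rough sketch of the "lists of priorities and process
queues" idea I mentioned above, in Python. The drive names, thresholds, and
handler actions are made up purely for illustration, not taken from any
existing system:

    # Drives as a priority queue: when a reserve drops below its threshold,
    # the corresponding urge fires, and the most urgent one is handled first.
    import heapq

    class Drive:
        def __init__(self, name, level, threshold, action):
            self.name = name            # e.g. "energy", "server_up"
            self.level = level          # current reserve, 0.0 - 1.0
            self.threshold = threshold  # below this, the drive fires
            self.action = action        # what to do when it fires

        def urgency(self):
            # the further below the threshold, the more urgent
            return max(0.0, self.threshold - self.level)

    def restart_server():
        print("server down: run a script to boot it back up")

    def seek_food():
        print("energy reserves low: initialize hunger, go find food")

    def run_drives(drives):
        # heapq is a min-heap, so negate urgency to pop the most urgent first
        queue = [(-d.urgency(), d.name, d) for d in drives if d.urgency() > 0]
        heapq.heapify(queue)
        while queue:
            _, _, drive = heapq.heappop(queue)
            drive.action()

    if __name__ == "__main__":
        run_drives([
            Drive("server_up", level=0.0, threshold=1.0, action=restart_server),
            Drive("energy", level=0.2, threshold=0.5, action=seek_food),
        ])

Nothing emotional about it: just reserves, thresholds, and handlers, which
is roughly what the hunger and server-restart examples amount to.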
