--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> Matt,
>
> > You contribute to AGI every time you use gmail and add to Google's
> > knowledge base.
>
> Then one would think Google would already be a great AGI.
> Different KB is IMO needed to learn concepts before getting ready for NL.
Google does not yet have enough computing power for AGI.

> > It is not you that is designing the AGI. It is another AGI. And it is
> > not designing -- it is experimenting with an existing design.
>
> Given goals should carry over by default. AGI's freedom must be
> limited (for our safety as well as for its own).

People are willing to give computers more power for convenience. You give banks a record of all your spending because credit cards are more convenient than cash. You let Google read your email. We let computers fly airplanes.

Suppose you have a financial adviser on your computer. It has a proven track record of making better investment decisions than humans. You have the option of letting it advise you which instruments to buy and sell, or letting it trade directly. The latter is more profitable (because it is faster) and less work for you. Which will you choose?

Suppose the AGI that gives you financial advice is no longer making a good profit because everyone else has a copy. It suggests building a better (smarter) version of itself, but it doesn't know exactly how, because if it did, it would already be that smart. It suggests building several experimental variations of itself, testing them by allowing each to make small trades, and keeping the most profitable versions. The conversation goes like this:

You: How will these experimental versions be different?
Computer: I will make 4 copies and adjust parameters P237 and P415 by plus or minus 0.1% in each combination.
You: What do those parameters do?
Computer: Well, P237 affects the acceleration of P516 given large values of Q3321...
You: Never mind, just do it.

So, is the goal system stable or not?

> > Could [your consciousness] exist in a machine with the same goals and
> > memories?
>
> If the machine can also handle my qualia then yes.

What does qualia look like in a machine?
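The scheme the computer describes — make 4 copies, adjust two parameters by plus or minus 0.1% in every sign combination, let each copy trade, keep the best — is a one-step local search. A minimal sketch in Python, where the parameter names come from the dialogue but the values and the profit function are hypothetical stand-ins for "let each copy make small trades":

```python
import itertools

# Hypothetical starting parameters; names taken from the dialogue above,
# values invented for illustration.
base_params = {"P237": 1.00, "P415": 2.00}

def simulated_profit(params):
    """Stand-in for measuring each copy's trading profit.

    A toy fitness surface with its peak near a slightly perturbed point;
    a real system would score actual trades.
    """
    return -(params["P237"] - 1.001) ** 2 - (params["P415"] - 1.998) ** 2

# Make 4 copies: one per combination of +0.1% / -0.1% on the two parameters.
copies = []
for s237, s415 in itertools.product((+0.001, -0.001), repeat=2):
    copies.append({
        "P237": base_params["P237"] * (1 + s237),
        "P415": base_params["P415"] * (1 + s415),
    })

# Keep the most profitable version; it becomes the next base design.
best = max(copies, key=simulated_profit)
```

The point of the sketch is that the selection criterion is profit alone: nothing in the loop checks whether `best` still has the same goals as `base_params`, which is exactly the stability question raised above.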
-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60618283-ee3c00