Matt Mahoney wrote:
>> 'Access to' isn't the same thing as 'augmented with' of course, but I'm
>> not sure exactly what you mean by this (and I'd rather wait for you to
>> explain than guess).
>
> I was referring to one possible implementation of AGI consisting of part
> neural or brainlike implementation and part conventional computer (or
> network) to combine the strengths of both.
I'm sure that a design like this is possible, and there are quite a few people trying to build AGIs like this, either with close integration between the connectionist and code-like parts, or with them as relatively discrete but communicating parts. Yes, it should be more powerful than connectionism on its own; no, it's not necessarily any more Friendly. But if hard structural constraints (what can trigger what, what can modify what) can be reliably enforced via the non-connectionist elements, then it has the potential to be more Friendly than a purely connectionist system could be.

What I'm not sure about is that you gain anything from 'neural' or 'brainlike' elements at all. The brain should not be put on a pedestal. It's just what evolution on earth happened to come up with, blindly following incremental paths and further hobbled by all kinds of cruft and design constraints. There's no a priori reason to believe that the brain is a /good/ way to do anything, given hardware that can execute arbitrary Turing-equivalent code. Of course it's still pragmatic to try copying the brain when we can't think of anything better (i.e. don't have the theoretical basis or tools to do better than attempt crude imitations). As with rational AGI (and FAI) in general, I don't expect people who haven't deeply studied it and tried to build these systems to accept that this is true, just that it might be true: there may be much more efficient algorithms that effectively outperform connectionism in all cases. Getting some confirmation (or otherwise) of that is one of the things I'm working on at present.

> The architecture of this system would be that the neural part has the
> capability to write programs and run them on the conventional part in
> the same way that humans interact with computers.

Neural nets are a really bad fit with code design. Current ANNs aren't generally capable of from-requirements design anyway, as opposed to pattern recognition and completion. Writing code involves juggling lots of logical constraints and boolean conditions, so it's actually one of the few real-world tasks that is a natural fit with predicate logic. This is why humans currently use high-level languages and error-checking compilers. You could of course use a connectionist system as the control mechanism to direct inference in a logic system, in a roughly analogous manner (there's a toy sketch of roughly what I mean below).

> This seems to me to be the most logical way to build an AGI, and
> probably the most dangerous

I'd agree that it looks good when you first start attacking the problem. Classic ANNs have some demonstrated competencies, classic symbolic AI has some different demonstrated competencies, as do humans and existing non-AI software. I was all for hybridising various forms of connectionism, fuzzy symbolic logic, genetic algorithms and more at one point. It was only later that I began to realise that most if not all of those mechanisms were neither optimal, adequate, nor even all that useful.

Most dangerous, perhaps, in that highly hybridised systems that overcome the representational communication barrier between their subcomponents are probably unusually prone to early takeoff. It's easy to proceed without really understanding what you're doing if you take the 'kitchen sink' approach of tossing in everything that looks useful (letting the AI sort out how to actually use it). Not all integrative projects are like that, but quite a few are, and yes, they are dangerous.
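To make the 'connectionist control of inference' point a bit more concrete, here's a minimal Python sketch of the shape I have in mind. Everything in it is made up for illustration: toy_scorer is just a stand-in for whatever learned/connectionist component you prefer, and the whole thing is a toy forward-chainer, not a proposal for an actual architecture.

  from typing import Callable, FrozenSet, List, Set, Tuple

  Rule = Tuple[FrozenSet[str], str]   # (premises, conclusion)

  def toy_scorer(rule: Rule, facts: Set[str]) -> float:
      # Stand-in for a learned heuristic: prefer rules with fewer premises.
      # A real system would use a trained model over features of (rule, facts).
      premises, _conclusion = rule
      return 1.0 / (1 + len(premises))

  def forward_chain(facts: Set[str],
                    rules: List[Rule],
                    goal: str,
                    score: Callable[[Rule, Set[str]], float] = toy_scorer,
                    max_steps: int = 100) -> bool:
      # Fire applicable rules, best-scored first, until the goal is derived
      # or nothing new can be derived.
      facts = set(facts)
      for _ in range(max_steps):
          if goal in facts:
              return True
          applicable = [r for r in rules
                        if r[0] <= facts and r[1] not in facts]
          if not applicable:
              return False
          best = max(applicable, key=lambda r: score(r, facts))
          facts.add(best[1])   # every step is sound: conclusion follows from facts
      return goal in facts

  rules = [(frozenset({"a", "b"}), "c"),
           (frozenset({"c"}), "d")]
  print(forward_chain({"a", "b"}, rules, "d"))   # True

The point of the structure is that the logic engine guarantees every step is sound, while the learned component only influences which sound step gets taken next; it can bias the search, but it can't corrupt the conclusions.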
> I believe that less interaction means less monitoring and control, and
> therefore greater possibility that something will go wrong.

Plus, humans in the decision loop inherently slow things down greatly compared to an autonomous intelligence running at electronic speeds.

> As long as human brains remain an essential component of a superhuman
> intelligence, it seems less likely that this combined intelligence will
> destroy itself.

Probably true, but 'destroy itself' is a minor and recoverable failure scenario unless the intelligence takes a good chunk of the scenery with it. It's the 'start restructuring everything in reach according to a non-Friendly goal system' outcome that's the real problem.

> If AGI is external or independent of human existence, then there is a
> great risk. But if you follow the work of people trying to develop AGI,
> it seems that is where we are headed, if they are successful.

It's inevitable. Someone is going to build one eventually. The only useful argument is 'we should develop intelligence enhancement first, so that we have a better chance of getting AGI right'. You can go and research that if you want, but the IA tech is going to be subject to politics and destructive goals in a way that a post-takeoff AGI won't be, and in the meantime other people are going to continue researching AGI as fast as they can. I personally think my time is best spent trying to develop FAI technology rather than intelligence augmentation, and clearly a fair few people agree.

> Consider this possibility. We build an AGI that is part neural, part
> conventional computer, modeled after a system of humans with
> programming skills and a network of computers.

Note that replicating humans accurately is really, really hard. Unless you're using uploads, you have no way of knowing that these 'neural' parts will be even comprehensible, never mind humanlike.

> Even if you could prove friendliness (which you can't),

Provable Friendliness in such a system isn't necessarily impossible; it'd just be ridiculously hard.

> then you still have the software engineering problem. Program
> specifications are written in natural language, which is ambiguous,
> imprecise and incomplete. People make assumptions. People make
> mistakes. Neural models of people will make mistakes. Each time the
> AGI programs a more intelligent AGI, it will make programming errors.

Absolutely. You've listed many of the reasons why I'd never build either a hybrid AGI or a from-scratch-by-humans AGI like that. How a provable, incrementally constructed FAI design can avoid these problems is a different (though very interesting) topic.

> Proving program correctness is equivalent to the halting problem,

Only for /arbitrary programs/. This brain bug really has to be squashed. We are not in the business of verifying whether arbitrary programs are in fact FAIs; that would be literally impossible. We are interested in /constructing/ programs that are provably Friendly. Every time we find something that our proving techniques can't handle, we throw it out and find something else that does the same job and that they can handle. I suspect this is a major reason why Yudkowsky thinks FAI is all abstract maths with a tiny bit of engineering at the end: he's spending most of his time looking for useful stuff he can actually prove. I spend a lot more time writing code, probably because I'm using more powerful (narrow-)AI-assisted proving instead of doing it by hand.

> so the problem will not go away no matter how smart the AGI is.
The problem may go away if the AGI restructures itself to dump the unprovable connectionist stuff and switch to verified self-redesign. If it is possible at all, this is highly favourable under most goal systems, because most goal systems imply a desire to preserve invariants reliably through self-modification steps (there's a toy sketch of the general shape of this in the P.S. below).

> Using heuristic methods won't help, because after the first cycle of
> the AGI programming itself, the level of sophistication of the software
> will be beyond our capability to understand it (or else we would have
> written it that way in the first place). You will have no choice but to
> trust the AGI to detect its own errors.

If you mean 'external heuristic methods' then yes; presumably any heuristics internal to the AGI are being upgraded and added to as the AGI learns. It's conceivable that the AGI might be able to stay within the bounds of what external non-AGI methods can verify, and it's also conceivable that the AGI could export all of its learned self-modification heuristics in a comprehensible-to-humans form, but neither of those things seems very likely to work to me.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com
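P.S. Since 'construct only what you can verify' and 'preserve invariants through self-modification' are doing a lot of work above, here's a deliberately trivial Python sketch of the shape of the idea. Every name in it is hypothetical and the 'invariant check' is just a test harness standing in for a real proof obligation; the only point is the structure: candidate self-modifications are drawn from a restricted fragment the checker knows how to handle, and anything outside that fragment is rejected outright rather than 'verified'.

  import ast

  # A tiny 'verifiable fragment': straight-line arithmetic functions only.
  ALLOWED_NODES = (ast.Module, ast.FunctionDef, ast.arguments, ast.arg,
                   ast.Return, ast.BinOp, ast.Add, ast.Mult, ast.Name,
                   ast.Load, ast.Constant, ast.Expr)

  def in_verifiable_fragment(source: str) -> bool:
      # Accept only code built from the allowed node types: no loops, no
      # calls, no recursion, so termination and behaviour are trivially
      # checkable. Anything outside the fragment is rejected, not verified.
      try:
          tree = ast.parse(source)
      except SyntaxError:
          return False
      return all(isinstance(node, ALLOWED_NODES) for node in ast.walk(tree))

  def preserves_invariant(fn, cases) -> bool:
      # Stand-in for a real proof obligation: here we merely test that the
      # candidate never returns a negative value on the supplied cases.
      return all(fn(x) >= 0 for x in cases)

  def admit_self_modification(source: str, cases) -> bool:
      # Gate: a candidate component is installed only if it lies inside the
      # checkable fragment AND the invariant check goes through.
      if not in_verifiable_fragment(source):
          return False
      namespace = {}
      # Toy only: we exec the candidate after the fragment check has
      # already excluded loops, calls and other unanalysable constructs.
      exec(compile(source, "<candidate>", "exec"), namespace)
      return preserves_invariant(namespace["candidate"], cases)

  print(admit_self_modification("def candidate(x):\n    return x * x + 1",
                                cases=range(10)))   # True: admitted
  print(admit_self_modification("def candidate(x):\n"
                                "    while True:\n        pass",
                                cases=range(10)))   # False: outside fragment

Obviously a test harness proves nothing; the interesting (and hard) version replaces preserves_invariant with an actual proof obligation discharged by the proving machinery, which is exactly the part that's ridiculously hard.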
