On Mon, Nov 24, 2014 at 5:56 PM, John Clark <johnkcl...@gmail.com> wrote:
> On Mon, Nov 24, 2014 Telmo Menezes <te...@telmomenezes.com> wrote:
>
>> All the AI we have so far gives us a little from a lot. The real goal
>> of AI is to get a lot from a little.
>
> A human translator can't get good at translating language X to Y unless he
> hears a lot of both languages X and Y, and the same is true of computers.

Right, but what is meant here is the effort that goes into building the
translator versus what you get out of it. This dictum comes from the school
of thought that claims that human-level AI will be grown/evolved instead of
directly programmed. I suspect you might give this idea some credence,
given that you say we might create human-level AI before understanding how
it works. Google's translator is not grown/evolved, as far as I can tell,
so it might be a dead end in terms of the effort that will actually lead us
to the next quantitative jump in AI.

>> With what I consider real AI, an artificial translator could also be
>> taught how to drive a car.
>
> Computers can do both, and subroutines exist, so what's the problem?

The problem is that human-level AI might require a level of complexity in
terms of subroutine calls, shared data and so on that transcends the
ability of any human programmer. Modern software development subscribes to
the "divide and conquer" school of engineering. This makes a lot of sense
when it comes to building large banking systems or even search engines, but
it might be a dead end when it comes to building human-level AI, because
there is no guarantee that every class of problem can be modularised into
chunks small enough for a puny human programmer to reason about. In a
sense, I suspect we are stuck in a local maximum of software-development
common sense, and a lot of heresy will have to be attempted before anything
of consequence is achieved.

>> The extreme compartmentalisation of capabilities is the smoking gun that
>> the "intelligence" part of AI is not increasing.
> A computer that beat the 2 best human players of Jeopardy on planet Earth
> blew that argument into (sorry but I just have to say it) bits.

>>> And human beings move from being mediocre translators to being very
>>> good translators by observing how great translators do it.
>>
>> And they can also do this for a number of different skills with the
>> same software.
>
> I see no evidence that humans use the same mental software to translate
> languages, solve differential equations, walk and chew gum at the same
> time, and write about philosophy on the internet; I think humans use
> different subroutines for different tasks just as computers do.

I would just repeat what I wrote above, but still: what computers can do is
a superset of what humans can program computers to do, and what humans can
program computers to do is largely determined by tools. I am questioning
whether our current set of tools is adequate for the problem of creating
human-level AI, and whether our most encouraging achievements are not in
fact dead ends.

>>> Translation certainly won't be the last profession where machines
>>> become better at their job than any human; and I predict that the next
>>> time it happens somebody will try to find an excuse for it just like you
>>> did and say "Yes a machine is a better poet or surgeon or joke writer or
>>> physicist than I am but it doesn't really count because (insert lame
>>> excuse here)".
>>
>> I am sure of that too, but I reserve my decision on which side of the
>> argument I'm on until I see these "surgeons", "joke writers" or
>> "physicists" that you talk about.
>
> That just means you are a reasonable man. The people who exasperate me are
> those who say that even though X does very intelligent things, that
> doesn't mean that X is intelligent.
> My point is that I don't believe in magic, so I think that all the
> brilliant things humans have done over the last few thousand years
> happened because of the way the atoms in the 3 pounds of grey goo inside
> their bone box were organized, and so there is no reason that other
> things, like computers, couldn't be as intelligent or more so if they
> were organized in the right way.

Ok, we have no disagreement over this. My problem is more with people who
keep trying to move the goalposts, usually for marketing purposes. I don't
mind the bragging, but it reinforces the idea that the goal can be achieved
through iterative improvement of current systems, something that I am
skeptical of.

Telmo.

> John K Clark

-- 
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.