Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: Consider the following subset of possible requirements: the program is correct if and only if it halts. It's a perfectly valid requirement, and I can write all sorts of software that satisfies it. I can't take a piece of software

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote: When your computer can write and debug software faster and more accurately than you can, then you should worry. A tool that could generate computer code from formal specifications would be a wonderful thing, but not an autonomous

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 11:22 AM, Bob Mottram [EMAIL PROTECTED] wrote: On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: Consider the following subset of possible requirements: the program is correct if and only if it halts. It's a perfectly valid requirement, and I can write all sorts

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 2:08 PM, Bob Mottram [EMAIL PROTECTED] wrote: On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: You can try checking out for example this paper (link from LtU discussion), which presents a rather powerful language for describing terminating programs:
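
The paper link itself is cut off above. For context, the general idea behind such languages is that programs are written so termination is evident by construction, e.g. every recursive call is made on a strictly smaller argument. A minimal sketch of that style, in Python (which of course does not enforce it; the examples are illustrative and not taken from the paper):

# Illustrative sketch: termination follows from a decreasing measure,
# not from solving the halting problem for arbitrary code.

def sum_digits(n: int) -> int:
    """Sum of decimal digits; n // 10 < n for every n > 0, so the recursion terminates."""
    if n <= 0:
        return 0
    return (n % 10) + sum_digits(n // 10)

def ackermann(m: int, n: int) -> int:
    """Terminates because the pair (m, n) strictly decreases lexicographically."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(sum_digits(12345))   # 15
print(ackermann(2, 3))     # 9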

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: I can even take an arbitrary external program (say, a Turing machine that I can't check in the general case), place it on a dedicated tape in a UTM, and add a termination control, so that if it doesn't terminate in 10^6 steps, it will be
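
A minimal sketch of the step-bounded execution described above, assuming a toy single-tape Turing-machine encoding (the encoding and names are illustrative, not from the thread):

def run_bounded(delta, tape, state="q0", max_steps=10**6):
    """Simulate a TM given as delta[(state, symbol)] = (new_state, write, move).
    Returns ('halted', ...) if the machine reaches a missing transition,
    or ('cut off', ...) once the step budget is exhausted."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        key = (state, cells.get(head, "_"))
        if key not in delta:                 # no applicable rule: the machine halts
            return "halted", cells
        state, write, move = delta[key]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "cut off", cells                  # budget exhausted: forced termination

# A machine that loops forever is stopped after the step budget...
looper = {("q0", "_"): ("q0", "_", "R")}
print(run_bounded(looper, "", max_steps=1000)[0])   # -> cut off

# ...while a machine that writes one symbol and stops reports a genuine halt.
flipper = {("q0", "0"): ("q1", "1", "R")}
print(run_bounded(flipper, "0")[0])                 # -> halted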

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 28, 2008 4:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Consider the following subset of possible requirements: the program is correct if and only if it halts. It's a perfectly valid requirement, and I can write all sorts

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote: It is undecidable whether a program satisfies the requirements of a formal specification, which is the same as saying that it is undecidable whether two programs are equivalent.

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Kaj Sotala
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote: Theoretically yes, but behind my comment was a deeper analysis (which I have posted before, I think) according to which it will actually be very difficult for a negative-outcome singularity to occur. I was really trying to make the point

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote: It is easy to construct programs that you can prove halt or don't halt. There is no procedure to verify that a program is equivalent to a formal specification (another program). Suppose there was. Then I can take any program P
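
The reduction can be written out explicitly. The sketch below assumes a hypothetical oracle equivalent(a, b) for extensional program equivalence (no such total procedure can exist, which is exactly the point): if we had it, we could decide halting.

def make_wrapper(p, x):
    """Build a program q that first runs p on x and then acts as the identity.
    If p halts on x, q computes the identity function; if p diverges on x,
    q computes nothing at all."""
    def q(y):
        p(x)          # may or may not terminate
        return y
    return q

def identity(y):
    return y

def equivalent(a, b):
    """Hypothetical oracle for 'a and b compute the same function'.
    By this reduction (and Rice's theorem) no total implementation exists."""
    raise NotImplementedError("no general equivalence decider exists")

def halts(p, x):
    """If equivalent() existed, this would decide the halting problem:
    make_wrapper(p, x) is equivalent to identity exactly when p halts on x."""
    return equivalent(make_wrapper(p, x), identity)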

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote: It is undecidable whether a program satisfies the requirements of a formal specification, which is the same as saying that it is undecidable whether two programs are equivalent. The halting problem reduces to it. Yes it is, if

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: Exactly. That's why it can't hack provably correct programs. Which is useless because you can't write provably correct programs that aren't extremely simple. *All* nontrivial properties of programs are undecidable.

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Lukasz Stafiniak
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: Exactly. That's why it can't hack provably correct programs. Which is useless because you can't write provably correct programs that aren't extremely simple. *All* nontrivial

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Vladimir Nesov
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be magically solved by AGI. The problem will actually get worse, because complex systems are harder to get right.

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be magically solved by AGI. The problem will actually get worse, because

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Vladimir Nesov
On Jan 28, 2008 1:15 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread William Pearson
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be magically

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Ben Goertzel
"Google already knows more than any human" -- this is only true, of course, for specific interpretations of the word "know"... and NOT for the standard ones... "and can retrieve the information faster, but it can't launch a singularity." Because, among other reasons, it is not an intelligence, but

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-26 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: No computer is going to start writing and debugging software faster and more accurately than we can UNLESS we design it to do so, and during the design process we will have ample opportunity to ensure that the machine will

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-26 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Maybe you can program it with a moral code, so it won't write malicious code. But the two sides of the security problem require almost identical skills. Suppose you ask the AGI to examine some operating system or

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: This whole scenario is filled with unjustified, unexamined assumptions. For example, you suddenly say "I foresee a problem when the collective computing power of the network exceeds the collective computing power of the humans

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: This whole scenario is filled with unjustified, unexamined assumptions. For example, you suddenly say "I foresee a problem when the collective computing power of the network exceeds the collective computing power of the humans that administer

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: You must demonstrate some reason why the collective net of dumb computers will be intelligent: it is not enough to simply assert that they will, or might, become intelligent. The intelligence comes from an infrastructure

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: You suggest that a collection of *sub-intelligent* (this is crucial) computer programs can add up to full intelligence just in virtue of their existence. This is not the same as a collection of *already-intelligent* humans appearing more

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: You suggest that a collection of *sub-intelligent* (this is crucial) computer programs can add up to full intelligence just in virtue of their existence. This is not the same as a collection of *already-intelligent* humans

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be a situation in

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Randall Randall
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions.

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Randall Randall wrote: On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Because recursive self improvement is a competitive evolutionary process even if all agents have a common ancestor. As explained in a parallel post: this is a non sequitur. OK, consider a network of agents, such as my

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Because recursive self improvement is a competitive evolutionary process even if all agents have a common ancestor. As explained in a parallel post: this is a non sequitur. OK, consider a network of

Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Richard Loosemore
Charles D Hixson wrote: Richard Loosemore wrote: Matt Mahoney wrote: ... Matt, ... As for your larger point, I continue to vehemently disagree with your assertion that a singularity will end the human race. As far as I can see, the most likely outcome of a singularity would be exactly

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be a situation in the world in which