On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
Consider the following subset of possible requirements: the program is
correct if and only if it halts.
It's a perfectly valid requirement, and I can write all sorts of
software that satisfies it. I can't take a piece of software
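(A minimal illustration, mine rather than Vladimir's: one easy way to
satisfy the "correct iff it halts" requirement is to use only
structurally bounded loops, so termination is provable by inspection.
A hypothetical Python sketch:)

    # Hypothetical example: this provably halts, because its only
    # loop runs exactly len(data) times.
    def checksum(data: bytes) -> int:
        total = 0
        for b in data:
            total = (total + b) % 256
        return total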
On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
When your computer can write and debug
software faster and more accurately than you can, then you should worry.
A tool that could generate computer code from formal specifications
would be a wonderful thing, but not an autonomous
On Jan 28, 2008 11:22 AM, Bob Mottram [EMAIL PROTECTED] wrote:
On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
Consider the following subset of possible requirements: the program is
correct if and only if it halts.
It's a perfectly valid requirement, and I can write all sorts
On Jan 28, 2008 2:08 PM, Bob Mottram [EMAIL PROTECTED] wrote:
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
You could check out, for example, this paper (linked from the LtU
discussion), which presents a rather powerful language for describing
terminating programs:
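(The paper itself is not linked in this excerpt. As a hedged sketch of
the general discipline such languages enforce: recursion is permitted
only on structurally smaller arguments, so every call chain must bottom
out. Illustrated in Python, though real total languages check this
mechanically:)

    # Hypothetical sketch: recursion on a strictly smaller list, so
    # this terminates for every finite input.
    def fold(xs, acc, f):
        if not xs:
            return acc
        return fold(xs[1:], f(acc, xs[0]), f)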
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
I can even take an arbitrary external program (say, a Turing machine
that I can't check in the general case), place it on a dedicated tape
in a UTM, and add a termination control, so that if it doesn't
terminate within 10^6 steps, it will be
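(A minimal sketch of this construction, with names and machine encoding
of my own choosing, not from the thread -- a simulator that runs an
arbitrary machine but forces a halt once the step budget is spent:)

    STEP_BUDGET = 10**6   # the budget mentioned above

    def run_bounded(delta, start, accept, tape, budget=STEP_BUDGET):
        """Simulate a one-tape Turing machine, but stop after `budget`
        steps.  delta maps (state, symbol) -> (state, symbol, 'L'/'R').
        The wrapped computation therefore always terminates."""
        state, head = start, 0
        cells = dict(enumerate(tape))        # sparse tape, blank = '_'
        for _ in range(budget):
            if state == accept:
                return 'halted', cells
            sym = cells.get(head, '_')
            if (state, sym) not in delta:    # no applicable rule: halt
                return 'halted', cells
            state, cells[head], move = delta[(state, sym)]
            head += 1 if move == 'R' else -1
        return 'cut off', cells              # forced termination

    # Example: a machine that loops forever is simply cut off.
    loop = {('q0', '_'): ('q0', '_', 'R')}
    print(run_bounded(loop, 'q0', 'ACCEPT', '')[0])   # 'cut off'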
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 28, 2008 4:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Consider the following subset of possible requirements: the program is
correct if and only if it halts.
It's a perfectly valid requirement, and I can write all sorts
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
It is undecidable whether a program satisfies the requirements of a
formal specification, which is the same as saying that it is
undecidable whether two programs are equivalent.
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Theoretically yes, but behind my comment was a deeper analysis (which I
have posted before, I think) according to which it will actually be very
difficult for a negative-outcome singularity to occur.
I was really trying to make the point
On Jan 28, 2008 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
It is easy to construct programs that you can prove halt or don't halt.
There is no procedure to verify that a program is equivalent to a formal
specification (another program). Suppose there was. Then I can take any
program P
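(The excerpt cuts off here; what follows is my reconstruction of the
standard reduction being gestured at, not Matt's own words. Given a
hypothetical equivalence decider, one could decide halting -- so no
such decider exists. `equivalent` below does not and cannot exist; it
appears only to exhibit the contradiction:)

    def make_wrapper(P, x):
        # Q ignores its argument, runs P on x, then returns 0.
        # Q computes the constant-0 function iff P halts on x.
        def Q(y):
            P(x)
            return 0
        return Q

    def halts(P, x):
        # If `equivalent` existed, this would solve the halting
        # problem -- the promised contradiction.
        return equivalent(make_wrapper(P, x), lambda y: 0)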
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
It is undecidable whether a program satisfies the requirements of a formal
specification, which is the same as saying that it is undecidable whether two
programs are equivalent. The halting problem reduces to it.
Yes it is, if
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
Exactly. That's why it can't hack provably correct programs.
Which is useless because you can't write provably correct programs that aren't
extremely simple. *All* nontrivial properties of programs are undecidable.
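(For reference, this is Rice's theorem, which covers *semantic*
properties -- what a program computes -- rather than syntactic ones.
A standard statement, in LaTeX:

    \[
      \emptyset \neq \mathcal{P} \subsetneq \mathcal{C}
      \;\Longrightarrow\;
      \{\, e \mid \varphi_e \in \mathcal{P} \,\}\ \text{is undecidable},
    \]

where \mathcal{C} is the class of partial computable functions and
\varphi_e is the function computed by program e.)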
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
Exactly. That's why it can't hack provably correct programs.
Which is useless because you can't write provably correct programs that aren't
extremely simple. *All* nontrivial
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be magically solved by AGI. The problem will
actually get worse, because complex systems are harder to get right.
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be magically solved by AGI. The
problem will actually get worse, because
On Jan 28, 2008 1:15 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be magically
Google already knows more than any human,
This is only true, of course, for specific interpretations of the word
"know" ... and NOT for the standard ones...
and can retrieve the information faster,
but it can't launch a singularity.
Because, among other reasons, it is not an intelligence, but
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
No computer is going to start writing and debugging software faster and
more accurately than we can UNLESS we design it to do so, and during the
design process we will have ample opportunity to ensure that the machine
will
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Maybe you can program it with a moral code, so it won't write
malicious code. But the two sides of the security problem require
almost identical skills. Suppose you ask the AGI to examine some
operating system or
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
This whole scenario is filled with unjustified, unexamined assumptions.
For example, you suddenly say "I foresee a problem when the collective
computing power of the network exceeds the collective computing power
of the humans
--- Richard Loosemore [EMAIL PROTECTED] wrote:
This whole scenario is filled with unjustified, unexamined assumptions.
For example, you suddenly say "I foresee a problem when the collective
computing power of the network exceeds the collective computing power
of the humans that administer
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
You must
demonstrate some reason why the collective net of dumb computers will be
intelligent: it is not enough to simply assert that they will, or
might, become intelligent.
The intelligence comes from an infrastructure
--- Richard Loosemore [EMAIL PROTECTED] wrote:
You suggest that a collection of *sub-intelligent* (this is crucial)
computer programs can add up to full intelligence just in virtue of their
existence.
This is not the same as a collection of *already-intelligent* humans
appearing more
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
You suggest that a collection of *sub-intelligent* (this is crucial)
computer programs can add up to full intelligence just in virtue of their
existence.
This is not the same as a collection of *already-intelligent* humans
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable assumptions. One example is the idea that there
will be a situation in
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which
are Nightmare Scenarios) is that the vast majority of them
involve completely untenable assumptions.
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable assumptions. One
Randall Randall wrote:
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which
are Nightmare Scenarios) is that the vast majority of them involve
completely
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Because recursive self-improvement is a competitive evolutionary
process even if all agents have a common ancestor.
As explained in a parallel post: this is a non sequitur.
OK, consider a network of agents, such as my
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Because recursive self-improvement is a competitive evolutionary
process even if all agents have a common ancestor.
As explained in a parallel post: this is a non sequitur.
OK, consider a network of
Charles D Hixson wrote:
Richard Loosemore wrote:
Matt Mahoney wrote:
...
Matt,
...
As for your larger point, I continue to vehemently disagree with your
assertion that a singularity will end the human race.
As far as I can see, the most likely outcome of a singularity would be
exactly
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable assumptions. One example is the idea that there
will be a situation in the world in which