Re: [off topic] Some new articles I wrote about science

2007-04-26 Thread Micha Feigin
On Wed, 25 Apr 2007 16:27:23 +0300
"Uri Even-Chen" <[EMAIL PROTECTED]> wrote:

> By the way, it's very easy to prove that for any specific decision
> problem, *there is* an algorithm who returns a correct answer in O(1).
>  Consider these two algorithms: "always return 0" and "always return
> 1".  At least one of them returns a correct answer for *any input*.

For each _specific_ input. This version won't work for a class of problems and
you can't say that you can build a table of answers since you still have the
problem of choice, which taken straight forward is exponential (since the
number of options grows exponentially with n).

> But if a general problem is considered to be hard, it must contain a
> subset of "really hard problems", which cannot be calculated in O(1)
> by *any* algorithm.  But at least one of my algorithms returns a
> correct answer for each of those "really hard problems".  Then a
> subset of "really hard problems" can never be proved to exist.  And if
> a set contains only "easy" problems, then how hard can it be?
> 

Like I said, there are two O(1) algorithms, at least one of which is correct,
but the choice between them can be hard, and you are interested in the choice,
i.e in the correct solution, not whether it can be given easily once you know
it for the _specific_ problem.

> This can even be extended to the halting problem.  For any specific
> algorithm it is possible to return the correct answer in O(1), if a
> correct answer exists.  If it is proved not to be computable in O(1),
> then a definite answer doesn't exist (any definite answer will lead to
> a contradiction).
> 

Like I said, you try to prove your point by answering a completely question
than the one you posed.

For a given problem there is a algorithm that gives the correct answer in O(1),
you just don't know which one to choose. For a given class of problems (like
whether an input integer is divisible by 4) the solution may or may not be O(1)
since the one algorithm doesn't work any more, you need to chose one or the
other and the question of choice is the issue and depends of the specific input.

> Uri.
> 
> =
> To unsubscribe, send mail to [EMAIL PROTECTED] with
> the word "unsubscribe" in the message body, e.g., run the command
> echo unsubscribe | mail [EMAIL PROTECTED]
> 

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-26 Thread Micha Feigin
On Wed, 25 Apr 2007 16:09:44 +0300
"Uri Even-Chen" <[EMAIL PROTECTED]> wrote:

> On 4/25/07, Shachar Shemesh <[EMAIL PROTECTED]> wrote:
> > "Completeness" and "Consistency" relate to the relationship between the
> > provability of an expression (syntax) and it's core truthfulness
> > (semantics, or meaning). Since I was not talking about those, these
> > hardly seem relevant.
> >
> > A theory cannot be either, because a theory is something that needs
> > proof. In other words, using any moderately reasonable tools of proof, a
> > theory can be correct and provable, correct and unprovable or incorrect
> > (we usually do not let go of consistency because that leads to absurds).
> > You will notice, however, that the theory is neither complete NOR
> > consistent. These are measures not meant for theorems, but for logics.
> 
> I agree.
> 
> > Even for logics, the statement above is incorrect. Zero order logic is
> > both consistent AND complete.
> 
> I'm not sure what zero order logic is.  How do you say "this sentence
> is not true" in zero order logic?
> 
> > Trivial example: Is there a number of the form "2n+1", where n is a
> > whole and positive number, that divides by 4?
> > You answer seems to be: Sure there is (or, at least, you cannot prove
> > there isn't). There are an infinite number of such numbers.
> > My answer: No, there isn't. There is, indeed, an infinite quantity of
> > such numbers, but ALL of them divide by 4 with a remainder of either "1"
> > or "3".
> 
> Good example.  You assume this is true for all numbers.  Take any
> positive number, multiply it be 2, add one, devide by 4, and you get
> either 1 or 3.  Although I agree with you that it's true for any
> number we can represent by a real computer, I don't think it's
> infinitely true.  I don't think integer numbers exist to infinity.  We

You think wrong, it's easy do define them to infinity

> can define numbers so big, that 2n and 2n+1 is almost the same.  In

You've got limits completely messed up. (2n+1)/2n does go to one in the limit
but 2n+1 - 2n will always be 1 no matter how large the number n is

> any representation, whether in bits or in turing machines, if we
> devide both numbers in 4 we will not necessarily get two different

Now you are talking about rounding errors and the problems of representation
of floating point numbers on the computer not the results of exact arithmetic.
BTW, as long as you are working with exact integers the results will always be
1 or three no matter how many bits you use (for bit represented numbers you are
only interested in the bottom two in the case and you can have a zillion more
that won't matter)

The only thing that these mails show is some lack of understanding or knowledge
of logic and what a proof actually is. You have a tendency to use points
completely irrelevant to the problem to show that it is unprovable/unsolvable.
Telling that the solution to the question to whether 10 is prime is dependent
on what 10 is, is no proof to the question of how hard it is to compute it or
whether it is computable.

> results.  I can't define such a specific number, since you will be
> able to contradict me.  It's an unknown unknown.  Look what I wrote
> about the largest known prime number.
> 
> http://www.speedy.net/uri/blog/?p=25
> 
> > > I'm referring to the general case, whether any specific problem is not
> > > in O(1).  No specific problem can be proved not to be in O(1) for all
> > > algorithms.
> > Ok. Let's take a simple one:
> > Problem: You have a list of numbers in an array. You want to either
> > create a new array, or in the existing array, list the numbers in
> > reverse order to the input one.
> > Proof: No number in the array, with the possible exception of the middle
> > element in the case of an odd sized array, is located at the right
> > place. Any algorithm, by the very constraints posed by the definition of
> > the problem, must be moved. Ergo: The minimal complexity for an
> > algorithm that performs this operation cannot be lower than O(n). QED
> 
> It's not a decision function.  Decision functions return either 0 or
> 1.  I'm referring to the question whether there is any decision
> function which can be proved not to be in O(the size of the input).
> For example, if the input is a number represented in n bits, and the
> output is a sequence of its prime factors represented in m bits, then
> we are talking about m decision functions, each of them cannot be
> proved not to be calculated in O(n) steps.  The total result would be
> calculated in O(m*n) steps, which is polynomial.  But if each bit is
> calculated by a different computer, in parallel, then they can all be
> calculated simultaneously in O(n).  If technology allows simultaneous
> calculations, such as neural networks, then maybe even O(1) can be
> achieved.
> 
> Your example is simple - it can be calculated in O(n), and I just said
> that there is no proof that it cannot be calculated in O(n^2) [each
> bit in O(n)

Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Shachar Shemesh
guy keren wrote:
> please stop feeding the troll.
>   
Sorry about that. I was wondering where it will go. I think we have a
definite answer now. I'll stop right now.
> --guy
>   
Shachar

-- 
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html


=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread guy keren

please stop feeding the troll.

--guy

On Wed, 25 Apr 2007, Shachar Shemesh wrote:

> Date: Wed, 25 Apr 2007 16:52:11 +0300
> From: Shachar Shemesh <[EMAIL PROTECTED]>
> To: Uri Even-Chen <[EMAIL PROTECTED]>
> Cc: linux-il <[EMAIL PROTECTED]>
> Subject: Re: [off topic] Some new articles I wrote about science
>
> Uri Even-Chen wrote:
> > On 4/25/07, Shachar Shemesh <[EMAIL PROTECTED]> wrote:
> >> "Completeness" and "Consistency" relate to the relationship between the
> >> provability of an expression (syntax) and it's core truthfulness
> >> (semantics, or meaning). Since I was not talking about those, these
> >> hardly seem relevant.
> >>
> >> A theory cannot be either, because a theory is something that needs
> >> proof. In other words, using any moderately reasonable tools of proof, a
> >> theory can be correct and provable, correct and unprovable or incorrect
> >> (we usually do not let go of consistency because that leads to absurds).
> >> You will notice, however, that the theory is neither complete NOR
> >> consistent. These are measures not meant for theorems, but for logics.
> >
> > I agree.
> >
> >> Even for logics, the statement above is incorrect. Zero order logic is
> >> both consistent AND complete.
> >
> > I'm not sure what zero order logic is.How do you say "this sentence
> > is not true" in zero order logic?
> Zeroorder logic (propositional logic) has no relations. "this sentence
> is false" is represented as "!A", but as it has no relations, there is
> nothing that claims anything else about A beyond being false, which
> means it sees nothing special about you claiming that A is this sentence.
>
> In other words, the logic is not granular enough to contain the paradox.
> > Good example.You assume this is true for all numbers.
> No, I do not. I can prove it's true for all number under the conditions
> specified.
> > Take any
> > positive number,
> Take any positive whole number. Read the premise correctly.
> > multiply it be 2, add one, devide by 4, and you get
> > either 1 or 3.
> Yes, you do.
> > Although I agree with you that it's true for any
> > number we can represent by a real computer, I don't *think* it's
> > infinitely true.
> See, the person doing assumptions is you.
> > I don't think integer numbers exist to infinity.
> It's your right, of course, but unless you have something substantial to
> back this up with, then I'm afraid any further discussion is based on
> differing opinions on how mathematics work, and are therefor meaningless.
>
> Out of curiosity, if natural numbers don't continue to infinity, there
> must be a maximal natural number, right? Assuming we call it "m", what
> is the result of "m+1"?
> > We
> > can define numbers so big, that 2n and 2n+1 is almost the same.
> Almost, yes.
> > In
> > any representation, whether in bits or in turing machines, if we
> > devide both numbers in 4 we will not necessarily get two different
> > results.
> See, not "any representation". The fact that you, or your computer,
> cannot solve a given problem does not impossible to solve make it. In
> mathematics, the numbers exist whether you can represent them in a
> finite space or not.
>
> If you have an infinite number of natural numbers, it is obvious you
> will need an unbounded number of bits to represent an unknown natural
> number. That does not make that number not exist.
>
> As a side note, a Turing machine has a semi-infinite storage, and would
> therefor have no problem to represent any natural number precisely.
> > I can't define such a specific number, since you will be
> > able to contradict me.
> That's where you prove yourself wrong. If a specific number you name
> turnsout not to be the largest natural number, we have proven nothing.
> If, however, we are in agreement that ANY specific number you will name
> will not be maximal, or, in other words, that it is impossible for you
> to name the maximal number, then THERE IS NO MAXIMAL NUMBER.
> > It's an unknown unknown.  Look what I wrote
> > about the largest known prime number.
> >
> > http://www.speedy.net/uri/blog/?p=25
> I'm currently at a client's that employs content filtering. Your site is
> labeled as "propoganda" by fortinet. Being as it is that
> mirror.hamakor.org.il is labeled as "freeware download site", I wouldn't
> necessarily take their categorization too personally. Still, I

Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Uri Even-Chen

Out of curiosity, if natural numbers don't continue to infinity, there
must be a maximal natural number, right? Assuming we call it "m", what
is the result of "m+1"?


Let's define a few numbers:

google= 10^100
googleplex= 10^google
googleplexplex= 10^googleplex
googleplexplexplex= 10^googleplexplex

Now, how much is googleplexplexplex+1?  I think it's about the same as
googleplexplexplex.  I can't make a difference between them.  Let's
call it googleplexplexplex, too.

Now, you will probably yell I'm inconsistent.  How can a number plus
one be equal to itself?  Well, they are not equal, they are about the
same.  Nothing is equal or not equal in my logic.  Nothing is
definite.  Are googleplexplexplex and googleplexplexplex equal?
Sometimes they're equal, sometimes not.

Now, you can ask me whether googleplexplexplex can be divided by 2,
whether it's a prime number, and so many questions.  The real answer
is - I don't know.  But I can define big numbers and ask you questions
which you will not know, either.  Your guess is as good as mine.

By the way, there is no maximal natural number.  I can define bigger
numbers.  You get the point.



>   We
> can define numbers so big, that 2n and 2n+1 is almost the same.
Almost, yes.
>   In
> any representation, whether in bits or in turing machines, if we
> devide both numbers in 4 we will not necessarily get two different
> results.
See, not "any representation". The fact that you, or your computer,
cannot solve a given problem does not impossible to solve make it. In
mathematics, the numbers exist whether you can represent them in a
finite space or not.

If you have an infinite number of natural numbers, it is obvious you
will need an unbounded number of bits to represent an unknown natural
number. That does not make that number not exist.

As a side note, a Turing machine has a semi-infinite storage, and would
therefor have no problem to represent any natural number precisely.


I know, but all of those models are inconsistent, even in theory.
Each of them can be defined to contradict itself.  The existence of an
unbounded sequence of integers, countably infinite, all well defined,
without contradictions, is an illusion.


>   I can't define such a specific number, since you will be
> able to contradict me.
That's where you prove yourself wrong. If a specific number you name
turns out not to be the largest natural number, we have proven nothing.
If, however, we are in agreement that ANY specific number you will name
will not be maximal, or, in other words, that it is impossible for you
to name the maximal number, then THERE IS NO MAXIMAL NUMBER.


OK, I agree with you, there is no maximal number.  But there is not an
infinite number of integers as well.  An integer number plus one will
not always be an integer.  They are integers by definition, but this
definition contradicts itself.  And therefore, I don't accept it.

For example, take googleplexplexplex (the first one I defined).  And
define these numbers:

googleplexplexplexplex= 10^googleplexplexplex
googleplexplexplexplexprime= the smallest prime number who is larger
than googleplexplexplexplex
googleplexplexplexplexprimeplex= 10^googleplexplexplexplexprime
googleplexplexplexplexprimeplexnumberofprimes= the number of prime
numbers who are smaller than googleplexplexplexplexprimeplex

Now, consider googleplexplexplexplexprimeplexnumberofprimes as the
numeric representation of an algorithm who solves any specific
decision problem for any specific input in O(1).  All you have to do
is call it, and you will get a result in no time.

Now, can you prove whether the result you will get from this algorithm
is correct or not?

Not only you can't prove it, but even in theory I don't think there is
an answer to such a question.  It contains contradictions (for
example, it's unknown whether it halts on any input or not) and
therefore the answer is not defined even in theory.

By the way, when I said O(1) I meant there is a constant number c for
which this algorithm will return an answer in less than c steps.  I
don't know the constant.  You will have to prove my assumption for
*any* constant.

But I can make it even more complicated.  For any input with less than
googleplex bits, the result will be calculated the old way, by brute
forcing - by checking all the possible results.  Only for more than
googleplex bits, googleplexplexplexplexprimeplexnumberofprimes will be
used.  Can you prove it will be incorrect even once?

2^googleplex is still a constant.  If any problem can be solved in
less than 2^googleplex steps, it's still O(1).

There is no limit on how things can get complicated.  I agree with
that.  I'm just trying to show that the ordinary logic is
inconsistent.  If any question can be defined, and there is an answer,
proving that any computer (whether deterministic or not) can't give
the answer in a short time is always inconsistent, and this can never
be proved.

Uri.

=

Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Shachar Shemesh
Uri Even-Chen wrote:
> On 4/25/07, Shachar Shemesh <[EMAIL PROTECTED]> wrote:
>> "Completeness" and "Consistency" relate to the relationship between the
>> provability of an expression (syntax) and it's core truthfulness
>> (semantics, or meaning). Since I was not talking about those, these
>> hardly seem relevant.
>>
>> A theory cannot be either, because a theory is something that needs
>> proof. In other words, using any moderately reasonable tools of proof, a
>> theory can be correct and provable, correct and unprovable or incorrect
>> (we usually do not let go of consistency because that leads to absurds).
>> You will notice, however, that the theory is neither complete NOR
>> consistent. These are measures not meant for theorems, but for logics.
>
> I agree.
>
>> Even for logics, the statement above is incorrect. Zero order logic is
>> both consistent AND complete.
>
> I'm not sure what zero order logic is.  How do you say "this sentence
> is not true" in zero order logic?
Zero order logic (propositional logic) has no relations. "this sentence
is false" is represented as "!A", but as it has no relations, there is
nothing that claims anything else about A beyond being false, which
means it sees nothing special about you claiming that A is this sentence.

In other words, the logic is not granular enough to contain the paradox.
> Good example.  You assume this is true for all numbers.
No, I do not. I can prove it's true for all number under the conditions
specified.
>   Take any
> positive number,
Take any positive whole number. Read the premise correctly.
> multiply it be 2, add one, devide by 4, and you get
> either 1 or 3.
Yes, you do.
>   Although I agree with you that it's true for any
> number we can represent by a real computer, I don't *think* it's
> infinitely true.
See, the person doing assumptions is you.
>   I don't think integer numbers exist to infinity.
It's your right, of course, but unless you have something substantial to
back this up with, then I'm afraid any further discussion is based on
differing opinions on how mathematics work, and are therefor meaningless.

Out of curiosity, if natural numbers don't continue to infinity, there
must be a maximal natural number, right? Assuming we call it "m", what
is the result of "m+1"?
>   We
> can define numbers so big, that 2n and 2n+1 is almost the same.
Almost, yes.
>   In
> any representation, whether in bits or in turing machines, if we
> devide both numbers in 4 we will not necessarily get two different
> results.
See, not "any representation". The fact that you, or your computer,
cannot solve a given problem does not impossible to solve make it. In
mathematics, the numbers exist whether you can represent them in a
finite space or not.

If you have an infinite number of natural numbers, it is obvious you
will need an unbounded number of bits to represent an unknown natural
number. That does not make that number not exist.

As a side note, a Turing machine has a semi-infinite storage, and would
therefor have no problem to represent any natural number precisely.
>   I can't define such a specific number, since you will be
> able to contradict me.
That's where you prove yourself wrong. If a specific number you name
turns out not to be the largest natural number, we have proven nothing.
If, however, we are in agreement that ANY specific number you will name
will not be maximal, or, in other words, that it is impossible for you
to name the maximal number, then THERE IS NO MAXIMAL NUMBER.
>   It's an unknown unknown.  Look what I wrote
> about the largest known prime number.
>
> http://www.speedy.net/uri/blog/?p=25
I'm currently at a client's that employs content filtering. Your site is
labeled as "propoganda" by fortinet. Being as it is that
mirror.hamakor.org.il is labeled as "freeware download site", I wouldn't
necessarily take their categorization too personally. Still, I cannot
check your logic.
> It's not a decision function.  Decision functions return either 0 or
> 1.  I'm referring to the question whether there is any decision
> function which can be proved not to be in O(the size of the input).
I'm not sure, but as, like I said above, we do not speak the same
language, it seems impossible to debate this in a meaningful way. Since
your language also don't sit well with that of the rest of the
mathematicians in the world, and seems not to be self consistent, then
I'm not sure I will try hard enough.

Shachar

-- 
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html


=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Uri Even-Chen

By the way, it's very easy to prove that for any specific decision
problem, *there is* an algorithm who returns a correct answer in O(1).
Consider these two algorithms: "always return 0" and "always return
1".  At least one of them returns a correct answer for *any input*.
But if a general problem is considered to be hard, it must contain a
subset of "really hard problems", which cannot be calculated in O(1)
by *any* algorithm.  But at least one of my algorithms returns a
correct answer for each of those "really hard problems".  Then a
subset of "really hard problems" can never be proved to exist.  And if
a set contains only "easy" problems, then how hard can it be?

This can even be extended to the halting problem.  For any specific
algorithm it is possible to return the correct answer in O(1), if a
correct answer exists.  If it is proved not to be computable in O(1),
then a definite answer doesn't exist (any definite answer will lead to
a contradiction).

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Uri Even-Chen

On 4/25/07, Shachar Shemesh <[EMAIL PROTECTED]> wrote:

"Completeness" and "Consistency" relate to the relationship between the
provability of an expression (syntax) and it's core truthfulness
(semantics, or meaning). Since I was not talking about those, these
hardly seem relevant.

A theory cannot be either, because a theory is something that needs
proof. In other words, using any moderately reasonable tools of proof, a
theory can be correct and provable, correct and unprovable or incorrect
(we usually do not let go of consistency because that leads to absurds).
You will notice, however, that the theory is neither complete NOR
consistent. These are measures not meant for theorems, but for logics.


I agree.


Even for logics, the statement above is incorrect. Zero order logic is
both consistent AND complete.


I'm not sure what zero order logic is.  How do you say "this sentence
is not true" in zero order logic?


Trivial example: Is there a number of the form "2n+1", where n is a
whole and positive number, that divides by 4?
You answer seems to be: Sure there is (or, at least, you cannot prove
there isn't). There are an infinite number of such numbers.
My answer: No, there isn't. There is, indeed, an infinite quantity of
such numbers, but ALL of them divide by 4 with a remainder of either "1"
or "3".


Good example.  You assume this is true for all numbers.  Take any
positive number, multiply it be 2, add one, devide by 4, and you get
either 1 or 3.  Although I agree with you that it's true for any
number we can represent by a real computer, I don't think it's
infinitely true.  I don't think integer numbers exist to infinity.  We
can define numbers so big, that 2n and 2n+1 is almost the same.  In
any representation, whether in bits or in turing machines, if we
devide both numbers in 4 we will not necessarily get two different
results.  I can't define such a specific number, since you will be
able to contradict me.  It's an unknown unknown.  Look what I wrote
about the largest known prime number.

http://www.speedy.net/uri/blog/?p=25


> I'm referring to the general case, whether any specific problem is not
> in O(1).  No specific problem can be proved not to be in O(1) for all
> algorithms.
Ok. Let's take a simple one:
Problem: You have a list of numbers in an array. You want to either
create a new array, or in the existing array, list the numbers in
reverse order to the input one.
Proof: No number in the array, with the possible exception of the middle
element in the case of an odd sized array, is located at the right
place. Any algorithm, by the very constraints posed by the definition of
the problem, must be moved. Ergo: The minimal complexity for an
algorithm that performs this operation cannot be lower than O(n). QED


It's not a decision function.  Decision functions return either 0 or
1.  I'm referring to the question whether there is any decision
function which can be proved not to be in O(the size of the input).
For example, if the input is a number represented in n bits, and the
output is a sequence of its prime factors represented in m bits, then
we are talking about m decision functions, each of them cannot be
proved not to be calculated in O(n) steps.  The total result would be
calculated in O(m*n) steps, which is polynomial.  But if each bit is
calculated by a different computer, in parallel, then they can all be
calculated simultaneously in O(n).  If technology allows simultaneous
calculations, such as neural networks, then maybe even O(1) can be
achieved.

Your example is simple - it can be calculated in O(n), and I just said
that there is no proof that it cannot be calculated in O(n^2) [each
bit in O(n), and for any specific input in O(1)].  Indeed, your
function is a good example of calculating each bit in O(1) (on
average).

O(1) refers to any specific input.  If you provide me with any
specific problem, and I have to return a yes/no answer (one bit), then
you can't prove that I can't return a correct answer for this specific
input in O(1).

It can be proved by induction on the size of the input.  With no input
it's obvious, and if you add one bit to the input, this problem is
considered to be "hard" (not in O(1)) if and only if it contains at
least one specific input for which it is proved to be hard.

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Uri Even-Chen

On 4/25/07, Nadav Har'El <[EMAIL PROTECTED]> wrote:

Uri, this is becoming (or was always) extremely off-topic.


Well, it is off topic.


Georg Cantor's
beautiful proof that there are more real numbers than natural numbers
(involving the diagonal of the real number's list) is one of the most
striking - and self-evident - proofs that I've ever seen (my father showed
it to me when I was a kid). Wikipedia (which I'm glad you're using as a
source - I thought you thought it was the devil :-)) also has an article
about this: http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument


I read this article before writing you.  Look what it says about the
constructivist approach.  Whether or not there are "more" real numbers
than rational numbers, depends on how you define "more".  Every
language contains ambiguities.  Proving that there are things which
can't be expressed by any specific language is in itself inconsistent.
For example, is "the smallest natural number that can't be expressed
in less than 20 words" a number?  Is there no such number?

Anyway, proving that a specific problem is "hard" for any algorithm
means something like "we can express more problems than we can express
solutions in the language we are using".  I'm arguing that if a
function is proved to be not computable, then it's not well defined.
The definition is not completely deterministic.

I changed my mind about Wikipedia.  It's a very good source for
information on many subjects, and I use it a lot (especially the
English version of it).  But there are some cases where it is
controversial, and I hardly ever write there at all.  If I write
anything, in most cases it will be deleted.  So it is both good and
bad, both one and zero, both god and the devil at the same time.  I'm
allowing myself to be inconsistent.

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Shachar Shemesh
Uri Even-Chen wrote:
> On 4/24/07, Shachar Shemesh <[EMAIL PROTECTED]> wrote:
>> Uri Even-Chen wrote:
>> > Hi people,
>> >
>> > Recently I checked some problems in mathematics, computer science and
>> > physics.  For a long time I had the intuition that the P vs NP Problem
>> > is an undecidable problem.  I think the only conclusion we have is
>> > that there is no solution, I don't think P and NP are mathematically
>> > well defined.
>> How so? For any given algorithm, the questions "is the algorithm
>> polinomially bounded" and "is the algorithm non-deterministicly
>> polinomially bounded" can be, fairly easilly, answered. That satisfy the
>> group theory requirement for a "well behaved definition" for both P
>> and NP.
>>
>> Where is that flaw in the definition?
>
> No theory can be both complete and consistent.
"Completeness" and "Consistency" relate to the relationship between the
provability of an expression (syntax) and it's core truthfulness
(semantics, or meaning). Since I was not talking about those, these
hardly seem relevant.

A theory cannot be either, because a theory is something that needs
proof. In other words, using any moderately reasonable tools of proof, a
theory can be correct and provable, correct and unprovable or incorrect
(we usually do not let go of consistency because that leads to absurds).
You will notice, however, that the theory is neither complete NOR
consistent. These are measures not meant for theorems, but for logics.

Even for logics, the statement above is incorrect. Zero order logic is
both consistent AND complete.
>   Every question we ask
> affects the answer.  For example, the question "is the number 10
> prime?" depends on whether 10 is a binary number or a decimal number
> (or any base-n number).
This example is way irrelevant. All you really did was trick your way
around the definition of what the problem is.
>   I came to a conclusion that ANY well-defined
> function can be computed in polynomial time (in theory).
But your proof of this conclusion is incorrectly formed, and thus cannot
be validated. I asked you to elaborate your terminology.
>   There are
> infinitely many algorithms.  It is not possible to prove that any
> specific problem is not polynomial (unless it's not computable and
> therefore not well defined).
I claimed I could prove no such thing. First, I never talked about
problems, only about algorithms. Once we agree that an ALGORITHM can be
easily categorized on whether or not it belongs to P or to NP, we can
use this ability to properly define the P=?NP question.
>   Any such proof will contain some
> uncertainty (or at least, cannot be proved not to contain uncertainty)
> and therefore, it can be contradicted.
No. Mathematics deal with groups of infinite size on a daily basis, and
this has not posed a problem for quite some time. The mere fact that a
group has an infinite size does NOT mean it has everything.

Trivial example: Is there a number of the form "2n+1", where n is a
whole and positive number, that divides by 4?
You answer seems to be: Sure there is (or, at least, you cannot prove
there isn't). There are an infinite number of such numbers.
My answer: No, there isn't. There is, indeed, an infinite quantity of
such numbers, but ALL of them divide by 4 with a remainder of either "1"
or "3".

The fact that there is an infinite number of algorithms does not, in
itself, prevent a problem from having a lower bound on the number of
operations per element.
>> > You can see my proof here:
>> > http://www.speedy.net/uri/blog/?p=19
>> Your terminology seems off. You appear to be using "O(n)" in a way that
>> is inconsistent with the way it is defined in computer science, yet you
>> provide no definition of your own for what it means. This is, of course,
>> unless I totally mis-read your proof.
>
> I'm referring to the general case, whether any specific problem is not
> in O(1).  No specific problem can be proved not to be in O(1) for all
> algorithms.
Ok. Let's take a simple one:
Problem: You have a list of numbers in an array. You want to either
create a new array, or in the existing array, list the numbers in
reverse order to the input one.
Proof: No number in the array, with the possible exception of the middle
element in the case of an odd sized array, is located at the right
place. Any algorithm, by the very constraints posed by the definition of
the problem, must be moved. Ergo: The minimal complexity for an
algorithm that performs this operation cannot be lower than O(n). QED

As a note, since we actually have an algorithm of complexity O(n) that
solves the problem, we know that O(n) is not a lower bound, but the
actual minimal complexity. You will notice, however, that the actual
algorithm was never considered as part of the proof, which is why the
proof applies to ANY algorithm.

Shachar

-- 
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html


==

Re: [off topic] Some new articles I wrote about science

2007-04-25 Thread Nadav Har'El
On Wed, Apr 25, 2007, Uri Even-Chen wrote about "Re: [off topic] Some new 
articles I wrote about science":
> sure how well they can be defined.  I came to the conclusion that
> there is no proof that there are more real numbers than rational
> numbers, or more generally speaking, that an infinity more than Aleph
> zero can be defined.

Uri, this is becoming (or was always) extremely off-topic. Georg Cantor's
beautiful proof that there are more real numbers than natural numbers
(involving the diagonal of the real number's list) is one of the most
striking - and self-evident - proofs that I've ever seen (my father showed
it to me when I was a kid). Wikipedia (which I'm glad you're using as a
source - I thought you thought it was the devil :-)) also has an article
about this: http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument

Mathematicians, physicists and computer-scientists have gone over all the
subjects you're now "revising" (or revisiting) for decades, and do not agree
with your conclusions; They found other problems in set-theory, complexity
theory and quantum mechanics, but NOT the ones that you think you found.
Could it be that you simply need to read more, and write less on these
topics? It's very easy to find faults in things you don't fully understand.
Just food for thought.


-- 
Nadav Har'El| Wednesday, Apr 25 2007, 7 Iyyar 5767
[EMAIL PROTECTED] |-
Phone +972-523-790466, ICQ 13349191 |Promises are like babies: fun to make,
http://nadav.harel.org.il   |but hell to deliver.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Uri Even-Chen

On 4/24/07, Orr Dunkelman <[EMAIL PROTECTED]> wrote:

Actually, courtesy of Galois we know functions which are well defined, that
can be easily verified and not computed efficiently - the roots of general
polynomials over R of degree > 4. Specifically, to incorporate this problem
to the Turing model, you must assume that the machine can work with real
numbers (over bits), which is not the case. But when you define a
"Real-number turing machine" you get that for this machine P is not equal to
NP.


I haven't thouht much about deterministic infinite-numbers turing
machines.  I don't know how consistent such a theory can be.  I assume
any theory or field of numbers can prove itself to be inconsistent, if
it can express something like "this sentence is true if and only if
this sentence is false".  But if such a turing machine exists even in
theory (and I don't think it does), it appears to be a very strong
computation system to me.  It can solve all the problems in the
universe in one single step, compute an infinite number of decision
functions as once.

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Uri Even-Chen

I take what I said back. If there's a non-zero probability that a photon is
at position X1 at time T1, and a non-zero probability that the photon is
at position X2 at a later time T2, then it might be said that there's a
probility that this photon travelled at speed (X2-X1)/(T2-T1), which might
very well be higher than c (the speed of light). But I'm not sure if that
really matters - over longerer distances, these random fluctuations average
out.


Which means that the speed of light is constant on average, not all
the time.  I also assumed that light itself has no memory, no past and
future, there is no space and no time.  If the speed of light is not
constant at the quantum level, then Einstein's deterministic
assumptions don't apply to it too.  God does play dice, nothing is
really deterministic, and even complete randomness can not be defined.
A photon can be at many places at once, become many photons and many
become one.  Everything is possible when there are no rules, the speed
of thought is unlimited, and irrational things appear to exist too.
The only rule is that there are no rules, and this rule can also be
broken sometimes.  The speed of light is the limit of our rational
knowledge, we can't put in words what is faster than light.  The
entire universe might be the same as one single photon, infinite
universes may exist in the same space and time.  I don't think it will
ever be possible to put it in any mathematical formula, since formulas
are only estimates to reality and not reality itself.  I put it in
words as "one is equal to zero", 1=0, they are both equal and not
equal at the same time.  If the speed of light is not equal to itself,
anything can happen.  Life doesn't seem to have any rules.

E=mc^2 created nuclear bombs, something completely new, completely
destructive.  I don't know if 1=0 has any practical meaning, but if it
does - I hope it's not a destructive one.  If it has any meaning, I
hope it means more friendship and love.

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Uri Even-Chen

On 4/24/07, Nadav Har'El <[EMAIL PROTECTED]> wrote:

On Tue, Apr 24, 2007, Uri Even-Chen wrote about "Re: [off topic] Some new articles I 
wrote about science":
> No theory can be both complete and consistent.  Every question we ask
> affects the answer.  For example, the question "is the number 10
> prime?" depends on whether 10 is a binary number or a decimal number
> (or any base-n number).  I came to a conclusion that ANY well-defined
> function can be computed in polynomial time (in theory).  There are
> infinitely many algorithms.  It is not possible to prove that any
> specific problem is not polynomial (unless it's not computable and
> therefore not well defined).  Any such proof will contain some
> uncertainty (or at least, cannot be proved not to contain uncertainty)
> and therefore, it can be contradicted.

When Alan Turing started thinking about the complexity of algorithms about
70 years ago, he noticed the same problem with I think you noticed. Like
you can't say anything about "10" before you define what the string "10" means
(in your example, it could be in base 2 or base 10), similarly you can't talk
about "algorithms" without defining precisely what an algorithm means.

A cynic would have said that you can't define an algorithm precisely, because
it can be run on a "computer" of any type (at Turing's time, the computer
was usually imagined...), written in various "language"s, and so on. But
Turing wasn't a cynic, he was a genious. He formalized two types of very
specific "computers" with their own very specific "languages". These are
the (deterministic) "Turing Machine" and the "Non-deterministic Turing
Machine". Turing, as well as his contemporary Alonzo Church, believed that
any physical computer is a *subset* of the Turing Machine (in other words,
it can be emulated by a Turing Machine). This belief, the so-called
"Church-Turing Thesis" cannot be proven - you can consider this an axiom.

(consult your favorite CS book, or even Wikipedia, to refresh you knowledge
on these issues if you need to).

Now that you understand this axiom, all the theorems of complexity become
clearer. "P" means polynomial-time algorithms on a (deterministic) Turing
Machine, and "NP" means polynomial-time algorithms on a Non-deterministic
Turing Machine. These definitions are very precise, and its perfectly possible
to prove that certain algorithms are in P, in NP, not in P or not in NP
(just nobody found an algorithm which is in NP but not in P - which is why
the question of P=NP is still open).


If P and NP are defined on Turing Machines, which I agree is a good
model, then they should be represented in the language of Turing
Machines, or at least in the language the Turing Machines is defined
it.  Every language contains paradoxes and contradictions, and the
definitions of P and NP can be made to contradict themselves too.

But even if we accept that a definition such as P does exist, on
theoretical Turing Machines, it is not possible to prove that any
specific problem is computable but is not in P.


You said that "I came to a conclusion that ANY well-defined function can be
computed in polynomial time (in theory).", but this is NOT true on a
deterministic Turing Machine. For various algorithms, it is very easy to
prove that they are *NOT* in P, i.e., there cannot be any polynomial-time
algorithm. Such are algorithms which require exponential computation time
(I'm sure you can think of some).

> I'm referring to the general case, whether any specific problem is not
> in O(1).  No specific problem can be proved not to be in O(1) for all
> algorithms.

Why not? Please let me know what I'm missing in your theory. Let's say I
take the easily defined function f(n) = 111...111 (a sequence n 1's).
Creating the output of f(n) requires n steps on a Turing machine.
So the time this function takes is NOT o(1) like you say.


A decision function returns only one bit - either 0 or 1.  If we take
the size of the input into account, no decision function can be proved
not to be in O(the size of the input).  I mean - there is no decision
function who accepts an input of size n, return output in size 1, and
is proved to take more than a constant number of steps c multiplied by
n (c*n) for ANY constant.

But the constant can be very big.  Even 2^(2^1000) is still a
constant.  If the size of the constant is unlimited, such a thing can
never be proved.

A prove for a specific constant might exist.  I don't know (I haven't
checked that).  But for ANY constant a proof doesn't exist.

By the way, I also wrote something about cardinal numbers such as
Aleph zero and Aleph one.  I suspect definitions of such cardinal
numbers might be inconsistent with reality

Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Uri Even-Chen

On 4/24/07, Orr Dunkelman <[EMAIL PROTECTED]> wrote:

This reminds me an interesting question cryptographers deal with nowadays:

what is a secure cryptographic hash function.

In short (assuming you all know what a hash function is), a cryptographic
hash function is such that it is hard to find collisions ( i.e., x and x'
s.t. h(x) = h(x')), second pre-image (given x finding x' s.t. h(x)=h(x'))
and pre-images (given y, find x s.t. h(x)=y).

The reason we have problems is that collisions must exist, and if a and b is
a collision than print(a,b) is a polynomial time algorithm which outputs the
collision.

Of course, this is dubious, because for any given hash function we might
know the specific algorithm. For more information about that, lock at
http://eprint.iacr.org/2006/281

In any case, there are infinite (Aleph zero) number of polynomial
algorithms, but there is (Aleph one) languages.
Actually, the only reason this is not a proof that P \neq NP is because the
size of NP is Aleph zero.
So indeed there are sufficiently many algorithms, but the question is
whether the number of languages the algorithms identify is infinite (please
note that two algorithms may identify the same language).

As for O(1) for all algorithms - there are many such languages.

For example, determining whether an input string is of the form a^n||b^n
requires reading the string (so the running time of any algorithm that
solves it is at least O(n)). There are even problems were you can prove that
you need extra memory of logarithmic size (see the connectivity problem).

And finally, the question whether P=NP does not necessarily require the
theory complexity to be complete. And given the definitions its quite
consistent (as long as polynomials are consistent).

Orr.


I distinguish between problems which are known to be hard and problems
which can be mathematically proved to be hard for any algorithm (or
even, for example, for any algorithm which can be represented by a
sequence of 50 million bits).  I'm saying that there is no
mathematical proof which is not inconsistent, but I'm not saying
problems can never be hard.  There are hard problems, according to our
experience.

Cryptographic functions are good example.  Although we can generate
functions which are really hard to compute (the reverse function), a
reverse function (either one-to-one or one-to-many) exists.  And if we
define a hash function which returns a hash string of 10,000 bits (for
example), which represent 2^10,000 different possible values - the
number 2^50,000,000 is much bigger.  And it's very probable that an
algorithm of size 50,000,000 bits (or less) who can compute the
reverse function in a short time does exist.  The more bits we
generate for each hash function, the harder it is to find a reverse
function.  But it is an assumption.  It can never be proved.  It
doesn't mean cryptography doesn't exist, it just means that it can't
be proved not to be insecure, unless we use something like one-time
key, which is random, and this has been proved to be completely
secure.

In the army they use such one-time keys dotted on paper, each of them
has exactly two copies, and is used only once.  After it's used, the
paper is burned.  But it's wasting a lot of paper.  This is completely
secure if the key is random, the only way to decrypt such a message is
to find out the key (or the message itself).  But if the same key is
used more than once than it can become insecure.

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Orr Dunkelman

On 4/24/07, Nadav Har'El <[EMAIL PROTECTED]> wrote:


You said that "I came to a conclusion that ANY well-defined function can
be
computed in polynomial time (in theory).", but this is NOT true on a
deterministic Turing Machine. For various algorithms, it is very easy to
prove that they are *NOT* in P, i.e., there cannot be any polynomial-time
algorithm. Such are algorithms which require exponential computation time
(I'm sure you can think of some).




Actually, courtesy of Galois we know functions which are well defined, that
can be easily verified and not computed efficiently - the roots of general
polynomials over R of degree > 4. Specifically, to incorporate this problem
to the Turing model, you must assume that the machine can work with real
numbers (over bits), which is not the case. But when you define a
"Real-number turing machine" you get that for this machine P is not equal to
NP.

However, as Nadav said, the question P = NP? is defined over a turing
machine with bits as the building blocks (the values on the infinite
string).

Orr.

--
Orr Dunkelman,
[EMAIL PROTECTED]

"Any human thing supposed to be complete, must for that reason infallibly
be faulty" -- Herman Melville, Moby Dick.

GPG fingerprint: C2D5 C6D6 9A24 9A95 C5B3  2023 6CAB 4A7C B73F D0AA
(This key will never sign Emails, only other PGP keys. The key corresponds
to [EMAIL PROTECTED])


Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Nadav Har'El
On Tue, Apr 24, 2007, Nadav Har'El wrote about "Re: [off topic] Some new 
articles I wrote about science":
> I also don't understand your "conclusion" (not seemed to be based on the
> previous arguments) that the speed of light is not well-defined at the small
> scale of quantum mechanics. Heisenberg's uncertainty princple indeed says
> that you cannot know a photon's position and momentum at absolute precision
> at the same time. But even if you don't know these, why does it mean that
> you can't know its *speed*? Mind you, that unlike matter particles, a photon's
> momentum is a function of its wavelength, not its speed.

I take what I said back. If there's a non-zero probability that a photon is
at position X1 at time T1, and a non-zero probability that the photon is
at position X2 at a later time T2, then it might be said that there's a
probility that this photon travelled at speed (X2-X1)/(T2-T1), which might
very well be higher than c (the speed of light). But I'm not sure if that
really matters - over longerer distances, these random fluctuations average
out.

-- 
Nadav Har'El|   Tuesday, Apr 24 2007, 7 Iyyar 5767
[EMAIL PROTECTED] |-
Phone +972-523-790466, ICQ 13349191 |I am thinking about a new signature. Stay
http://nadav.harel.org.il   |tuned.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Nadav Har'El
On Tue, Apr 24, 2007, Uri Even-Chen wrote about "Re: [off topic] Some new 
articles I wrote about science":
> No theory can be both complete and consistent.  Every question we ask
> affects the answer.  For example, the question "is the number 10
> prime?" depends on whether 10 is a binary number or a decimal number
> (or any base-n number).  I came to a conclusion that ANY well-defined
> function can be computed in polynomial time (in theory).  There are
> infinitely many algorithms.  It is not possible to prove that any
> specific problem is not polynomial (unless it's not computable and
> therefore not well defined).  Any such proof will contain some
> uncertainty (or at least, cannot be proved not to contain uncertainty)
> and therefore, it can be contradicted.

When Alan Turing started thinking about the complexity of algorithms about
70 years ago, he noticed the same problem with I think you noticed. Like
you can't say anything about "10" before you define what the string "10" means
(in your example, it could be in base 2 or base 10), similarly you can't talk
about "algorithms" without defining precisely what an algorithm means.

A cynic would have said that you can't define an algorithm precisely, because
it can be run on a "computer" of any type (at Turing's time, the computer
was usually imagined...), written in various "language"s, and so on. But
Turing wasn't a cynic, he was a genious. He formalized two types of very
specific "computers" with their own very specific "languages". These are
the (deterministic) "Turing Machine" and the "Non-deterministic Turing
Machine". Turing, as well as his contemporary Alonzo Church, believed that
any physical computer is a *subset* of the Turing Machine (in other words,
it can be emulated by a Turing Machine). This belief, the so-called
"Church-Turing Thesis" cannot be proven - you can consider this an axiom.

(consult your favorite CS book, or even Wikipedia, to refresh you knowledge
on these issues if you need to).

Now that you understand this axiom, all the theorems of complexity become
clearer. "P" means polynomial-time algorithms on a (deterministic) Turing
Machine, and "NP" means polynomial-time algorithms on a Non-deterministic
Turing Machine. These definitions are very precise, and its perfectly possible
to prove that certain algorithms are in P, in NP, not in P or not in NP
(just nobody found an algorithm which is in NP but not in P - which is why
the question of P=NP is still open).

You said that "I came to a conclusion that ANY well-defined function can be
computed in polynomial time (in theory).", but this is NOT true on a
deterministic Turing Machine. For various algorithms, it is very easy to
prove that they are *NOT* in P, i.e., there cannot be any polynomial-time
algorithm. Such are algorithms which require exponential computation time
(I'm sure you can think of some).

> I'm referring to the general case, whether any specific problem is not
> in O(1).  No specific problem can be proved not to be in O(1) for all
> algorithms.

Why not? Please let me know what I'm missing in your theory. Let's say I
take the easily defined function f(n) = 111...111 (a sequence n 1's).
Creating the output of f(n) requires n steps on a Turing machine.
So the time this function takes is NOT o(1) like you say.

> I also wrote more articles, for example a proof that the speed of
> light is not constant (or more generally speaking, that deterministic
> constants don't exist).  It means that there is no theoretical limit

I am at a loss trying to understand your reason in that article. As far
as I remember, the double-slit experiment is never done with a single
photon, but rather with a constant source of photons, whose wave function
looks like a sine wave. If these sine waves pass different distances before
meeting each other, they will meet each other at a different phase, causing
the interference pattern to come out different.

I also don't understand your "conclusion" (not seemed to be based on the
previous arguments) that the speed of light is not well-defined at the small
scale of quantum mechanics. Heisenberg's uncertainty princple indeed says
that you cannot know a photon's position and momentum at absolute precision
at the same time. But even if you don't know these, why does it mean that
you can't know its *speed*? Mind you, that unlike matter particles, a photon's
momentum is a function of its wavelength, not its speed.

In any case, there are many problems reconciling Quantum Mechanics, Special
Relativity and General Relativity, at many levels. Steven Hawking is famous
for proving that a black hole - an object so massive that something would
need to travel faste

Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Orr Dunkelman

This reminds me an interesting question cryptographers deal with nowadays:

what is a secure cryptographic hash function.

In short (assuming you all know what a hash function is), a cryptographic
hash function is such that it is hard to find collisions (i.e., x and x' s.t.
h(x) = h(x')), second pre-image (given x finding x' s.t. h(x)=h(x')) and
pre-images (given y, find x s.t. h(x)=y).

The reason we have problems is that collisions must exist, and if a and b is
a collision than print(a,b) is a polynomial time algorithm which outputs the
collision.

Of course, this is dubious, because for any given hash function we might
know the specific algorithm. For more information about that, lock at
http://eprint.iacr.org/2006/281

In any case, there are infinite (Aleph zero) number of polynomial
algorithms, but there is (Aleph one) languages.
Actually, the only reason this is not a proof that P \neq NP is because the
size of NP is Aleph zero.
So indeed there are sufficiently many algorithms, but the question is
whether the number of languages the algorithms identify is infinite (please
note that two algorithms may identify the same language).

As for O(1) for all algorithms - there are many such languages.

For example, determining whether an input string is of the form a^n||b^n
requires reading the string (so the running time of any algorithm that
solves it is at least O(n)). There are even problems were you can prove that
you need extra memory of logarithmic size (see the connectivity problem).

And finally, the question whether P=NP does not necessarily require the
theory complexity to be complete. And given the definitions its quite
consistent (as long as polynomials are consistent).

Orr.
On 4/24/07, Uri Even-Chen <[EMAIL PROTECTED]> wrote:


On 4/24/07, Shachar Shemesh <[EMAIL PROTECTED]> wrote:
> Uri Even-Chen wrote:
> > Hi people,
> >
> > Recently I checked some problems in mathematics, computer science and
> > physics.  For a long time I had the intuition that the P vs NP Problem
> > is an undecidable problem.  I think the only conclusion we have is
> > that there is no solution, I don't think P and NP are mathematically
> > well defined.
> How so? For any given algorithm, the questions "is the algorithm
> polinomially bounded" and "is the algorithm non-deterministicly
> polinomially bounded" can be, fairly easilly, answered. That satisfy the
> group theory requirement for a "well behaved definition" for both P and
NP.
>
> Where is that flaw in the definition?

No theory can be both complete and consistent.  Every question we ask
affects the answer.  For example, the question "is the number 10
prime?" depends on whether 10 is a binary number or a decimal number
(or any base-n number).  I came to a conclusion that ANY well-defined
function can be computed in polynomial time (in theory).  There are
infinitely many algorithms.  It is not possible to prove that any
specific problem is not polynomial (unless it's not computable and
therefore not well defined).  Any such proof will contain some
uncertainty (or at least, cannot be proved not to contain uncertainty)
and therefore, it can be contradicted.

> > You can see my proof here:
> > http://www.speedy.net/uri/blog/?p=19
> Your terminology seems off. You appear to be using "O(n)" in a way that
> is inconsistent with the way it is defined in computer science, yet you
> provide no definition of your own for what it means. This is, of course,
> unless I totally mis-read your proof.

I'm referring to the general case, whether any specific problem is not
in O(1).  No specific problem can be proved not to be in O(1) for all
algorithms.

I also wrote more articles, for example a proof that the speed of
light is not constant (or more generally speaking, that deterministic
constants don't exist).  It means that there is no theoretical limit
to the speed any particle can travel in spacetime.  But I don't know
whether or not it applies to humans.  The term "human" itself is
intuitive and I don't think it can be defined physically in any
deterministic way which is not inconsistent.

http://www.speedy.net/uri/blog/?cat=3

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]





--
Orr Dunkelman,
[EMAIL PROTECTED]

"Any human thing supposed to be complete, must for that reason infallibly
be faulty" -- Herman Melville, Moby Dick.

GPG fingerprint: C2D5 C6D6 9A24 9A95 C5B3  2023 6CAB 4A7C B73F D0AA
(This key will never sign Emails, only other PGP keys. The key corresponds
to [EMAIL PROTECTED])


Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Uri Even-Chen

On 4/24/07, Shachar Shemesh <[EMAIL PROTECTED]> wrote:

Uri Even-Chen wrote:
> Hi people,
>
> Recently I checked some problems in mathematics, computer science and
> physics.  For a long time I had the intuition that the P vs NP Problem
> is an undecidable problem.  I think the only conclusion we have is
> that there is no solution, I don't think P and NP are mathematically
> well defined.
How so? For any given algorithm, the questions "is the algorithm
polinomially bounded" and "is the algorithm non-deterministicly
polinomially bounded" can be, fairly easilly, answered. That satisfy the
group theory requirement for a "well behaved definition" for both P and NP.

Where is that flaw in the definition?


No theory can be both complete and consistent.  Every question we ask
affects the answer.  For example, the question "is the number 10
prime?" depends on whether 10 is a binary number or a decimal number
(or any base-n number).  I came to a conclusion that ANY well-defined
function can be computed in polynomial time (in theory).  There are
infinitely many algorithms.  It is not possible to prove that any
specific problem is not polynomial (unless it's not computable and
therefore not well defined).  Any such proof will contain some
uncertainty (or at least, cannot be proved not to contain uncertainty)
and therefore, it can be contradicted.


> You can see my proof here:
> http://www.speedy.net/uri/blog/?p=19
Your terminology seems off. You appear to be using "O(n)" in a way that
is inconsistent with the way it is defined in computer science, yet you
provide no definition of your own for what it means. This is, of course,
unless I totally mis-read your proof.


I'm referring to the general case, whether any specific problem is not
in O(1).  No specific problem can be proved not to be in O(1) for all
algorithms.

I also wrote more articles, for example a proof that the speed of
light is not constant (or more generally speaking, that deterministic
constants don't exist).  It means that there is no theoretical limit
to the speed any particle can travel in spacetime.  But I don't know
whether or not it applies to humans.  The term "human" itself is
intuitive and I don't think it can be defined physically in any
deterministic way which is not inconsistent.

http://www.speedy.net/uri/blog/?cat=3

Uri.

=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]



Re: [off topic] Some new articles I wrote about science

2007-04-24 Thread Shachar Shemesh
Uri Even-Chen wrote:
> Hi people,
>
> Recently I checked some problems in mathematics, computer science and
> physics.  For a long time I had the intuition that the P vs NP Problem
> is an undecidable problem.  I think the only conclusion we have is
> that there is no solution, I don't think P and NP are mathematically
> well defined.
How so? For any given algorithm, the questions "is the algorithm
polinomially bounded" and "is the algorithm non-deterministicly
polinomially bounded" can be, fairly easilly, answered. That satisfy the
group theory requirement for a "well behaved definition" for both P and NP.

Where is that flaw in the definition?
> You can see my proof here:
> http://www.speedy.net/uri/blog/?p=19
Your terminology seems off. You appear to be using "O(n)" in a way that
is inconsistent with the way it is defined in computer science, yet you
provide no definition of your own for what it means. This is, of course,
unless I totally mis-read your proof.

Shachar

-- 
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html


=
To unsubscribe, send mail to [EMAIL PROTECTED] with
the word "unsubscribe" in the message body, e.g., run the command
echo unsubscribe | mail [EMAIL PROTECTED]