Re: assert semantic change proposal

2014-08-12 Thread Kagamin via Digitalmars-d
On Tuesday, 12 August 2014 at 05:47:22 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 12 August 2014 at 04:50:15 UTC, Kagamin wrote:
Logic is ordered, and we have a notion of order because we 
know time, which is the only obviously ordered thing in 
nature. So in a sense any logic has time in its foundation and 
math can do the reverse: represent time in a declarative manner.


No, there is no order to boolean expressions. Deduction can be
performed bottom up.


A true reversal would be when preconditions are derived from 
postconditions.


Recall that all pure functions over finite types can be 
implemented as tables. A table lookup is O(1). No time.


That's different logic, and algorithmic complexity is time.


Re: assert semantic change proposal

2014-08-12 Thread via Digitalmars-d

On Tuesday, 12 August 2014 at 08:07:01 UTC, Kagamin wrote:
A true reversal would be when preconditions are derived from 
postconditions.


That's how you conduct a proof…

Recall that all pure functions over finite types can be 
implemented as tables. A table lookup is O(1). No time.


That's different logic, and algorithmic complexity is time.


Not at all. You can create a boolean expression for every bit of 
output and evaluate it only using NAND gates (or NOR). Not 
practical, but doable…
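
A minimal sketch in D (illustrative, not from the post): XOR of two bits built from NAND alone, the standard four-gate construction.

// Any boolean output bit can be computed with NAND gates alone.
bool nand(bool a, bool b) { return !(a && b); }

// XOR expressed purely in terms of NAND (four gates).
bool xorViaNand(bool a, bool b)
{
    bool n1 = nand(a, b);
    bool n2 = nand(a, n1);
    bool n3 = nand(b, n1);
    return nand(n2, n3);    // equals (a != b) for all four input combinations
}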


Re: assert semantic change proposal

2014-08-12 Thread Kagamin via Digitalmars-d
On Tuesday, 12 August 2014 at 09:17:54 UTC, Ola Fosheim Grøstad 
wrote:

On Tuesday, 12 August 2014 at 08:07:01 UTC, Kagamin wrote:
A true reversal would be when preconditions are derived from 
postconditions.


That's how you conduct a proof…


A proof usually flows from causes to consequences, the reverse 
(induction) is ambiguous. Ordering is more fundamental: you can 
define or redefine it, but it will remain, one can't think 
outside of this formalism.


Not at all. You can create a boolean expression for every bit 
of output and evaluate it only using NAND gates (or NOR). Not 
practical, but doable…


Mathematics is pretty aware of differences between algorithms, 
and proving their equivalence is non-trivial.


Re: assert semantic change proposal

2014-08-12 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 12 August 2014 at 14:43:23 UTC, Kagamin wrote:
A proof usually flows from causes to consequences, the reverse 
(induction) is ambiguous.


If a deterministic imperative program is ambiguous then there is 
something wrong. So no, ambiguity is not the problem. The size of 
the search space is. The preconditions are the ambiguity which 
the theorem prover tries to find...


Ordering is more fundamental: you can define or redefine it, 
but it will remain, one can't think outside of this formalism.


Explain.

Not at all. You can create a boolean expression for every bit 
of output and evaluate it only using NAND gates (or NOR). Not 
practical, but doable…


Mathematics is pretty aware of differences between algorithms, 
and proving their equivalence is non-trivial.


Define non-trivial. We were talking about the nature of finite 
computations. They are all trivial. Algorithms are just 
compression of space.


Re: assert semantic change proposal

2014-08-11 Thread Kagamin via Digitalmars-d
On Sunday, 10 August 2014 at 02:40:31 UTC, Ola Fosheim Grøstad 
wrote:
Of course it does, that is why Hoare Logic and SSA exist. 
Deduction lacks a notion of time.


Logic is ordered, and we have a notion of order because we know 
time, which is the only obviously ordered thing in nature. So in 
a sense any logic has time in its foundation and math can do the 
reverse: represent time in a declarative manner.


Re: assert semantic change proposal

2014-08-11 Thread via Digitalmars-d

On Tuesday, 12 August 2014 at 04:50:15 UTC, Kagamin wrote:
Logic is ordered, and we have a notion of order because we know 
time, which is the only obviously ordered thing in nature. So 
in a sense any logic has time in its foundation and math can do 
the reverse: represent time in a declarative manner.


No, there is no order to boolean expressions. Deduction can be
performed bottom up. Imperative programs rely on top down
execution due to side effects.

Recall that all pure functions over finite types can be 
implemented as tables. A table lookup is O(1). No time.
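
A minimal sketch in D of that idea (illustrative, not from the post): a pure function over a finite input type, tabulated once so that every call is a single O(1) array lookup.

// A pure function over a finite input type (char has 256 values).
bool isVowel(char c) pure
{
    return c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u';
}

// The same function precomputed as a table at compile time (CTFE);
// calling the tabulated version is one constant-time lookup.
immutable bool[256] vowelTable = () {
    bool[256] t;
    foreach (i; 0 .. 256)
        t[i] = isVowel(cast(char) i);
    return t;
}();

bool isVowelTabulated(char c) { return vowelTable[c]; }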


Re: assert semantic change proposal

2014-08-10 Thread Timon Gehr via Digitalmars-d
On 08/10/2014 04:40 AM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Saturday, 9 August 2014 at 20:22:36 UTC, Timon Gehr wrote:

Formally, that's what it assumes to be the case. If you can prove
'false', this means that the code is unreachable.


No, if you prove false that means either that it is nonterminating


I.e. everything afterwards is unreachable.


⦃ Q ∧ ¬Q ⦄
⦃ false ⦄ // this means 'unreachable'
assert(false); // goes through


implication is not equivalence


Do you agree we indeed have equivalence for those programs with an empty 
precondition?



or that it cannot be proved by the assumptions, hence the assumptions
need strengthening.
...


'false' is the strongest statement. Why would you need to strengthen 
assumptions?



(Yes, of course it can also be read as 'inconsistent', if the code is
actually reachable, but proving code 'unreachable'


No, you can have this:

main(){
Precondition(false)
Stuff()
Postcondition(anything)
}

This states that Stuff has not yet been verified for any input. It does
not state that Stuff will never execute.
...


If the precondition is actually assumed to hold for all executions of 
the program, a precondition of 'false' means that the program never 
executes. Maybe you don't assume this, but internally to the logic, this 
is what happens: You verify the postcondition under the assumption of 
the precondition. I.e. in this case you verify the postcondition of the 
program under the assumption that it never executes.


Of course you may still execute such a program, but the ⦃ ⦄ assertions 
become irrelevant to this execution.


Note that an optimizer needs to be able to use deduced facts for program 
transformations, or it is useless. To an optimizer, a contradiction 
means 'unreachable'. There are no non-trivial preconditions (in your 
sense, i.e. the program still may have interesting behaviour if they are 
violated) on the program used to deduce those facts. Do you agree that 
_if_ 'assume' is to be seen as an optimizer hint, then 'assume(false)' 
means 'assume unreachable'?



Your definition is lacking. It mixes up two entirely different
perspectives,
the deductive mode and the imperative mode of reasoning.


I think you should elaborate on this a little if you want to make a
point. E.g. I don't think that such an opposition of 'reasoning modes'
even exists.


Of course it does, that is why Hoare Logic and SSA exist.


They exist for the virtue of being convenient formalisms for some 
applications, but they can be embedded into e.g. set theory just fine.



Deduction lacks a notion of time.
...


What is 'a notion of time' and how does HL provide it?


⦃ P ∧ Q ⦄  // ← precondition
assert(Q); // ← statement
⦃ P ⦄  // ← postcondition


Makes no sense.


It is a definition. If you actually want to understand David's point you 
should digest it.


Does

⦃ false ⦄
abort;
⦃ P ⦄

make sense to you?

The assert(Q); statement is equivalent to

if(¬Q) abort;



Assert is not an imperative statement,


Yes, it may be seen to be.


it is an annotation. It is meta.



If it is part of the program it is not 'meta'.

Here, there is a distinction between:

assert(P) // statement

and

⦃ P ⦄// meta

even though both are commonly called 'assertions'.

assert(P); is a statement and hence it needs to be given a semantics, 
e.g. an axiomatic semantics or by rewriting, as above.
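
A minimal sketch of that rewriting semantics in D (illustrative; the helper name is made up, abort is the C runtime call):

import core.stdc.stdlib : abort;

// assert(Q) read as a plain statement: if the condition fails, halt.
void assertAsStatement(bool q)
{
    if (!q)
        abort();   // i.e.  assert(Q)  ≡  if (¬Q) abort;
}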



(BTW: This is not about being right or wrong. It is about understanding 
each other's statements in the right context.)


Re: assert semantic change proposal

2014-08-10 Thread via Digitalmars-d

On Sunday, 10 August 2014 at 13:47:45 UTC, Timon Gehr wrote:
No, if you prove false that means either that it is 
nonterminating


I.e. everything afterwards is unreachable.


Depends on your take on it. If you support non-deterministic 
computations (e.g. Ada's select) then non-termination is 
treated like bottom. So you can have


infinite_recursion() or terminates() = terminates()

So unreachable in terms of machine-language, sure, but not in 
terms of execution flow.


Do you agree we indeed have equivalence for those programs with 
an empty precondition?


You mean false or true?

or that it cannot be proved by the assumptions, hence the 
assumptions

need strengthening.
...


'false' is the strongest statement. Why would you need to 
strengthen assumptions?


If you succeed in proving the postcondition to be false then the 
postcondition does not hold and you have to strengthen the 
precondition or else the triplet is invalid/incorrect. If the 
precondition is defined false then the postcondition is not 
required to hold. It is not covered by the 
proof/verification/specification and the triplet is 
valid/provably correct.


So if the precondition is defined false then the program is a 
priori proved correct to the specification, i.e. the 
specification does not require any guarantees for the result.


If the precondition is actually assumed to hold for all 
executions of the program, a precondition of 'false' means that 
the program never executes.


That is the responsibility of the caller. It is the caller's 
responsibility to uphold the precondition in order to get 
guaranteed results.


But you could have multiple pairs of 
preconditions/postconditions. E.g. you could for a generic 
function have different pairs for int/float/double or have a 
separate pair for termination, etc. Or have one set that 
specifies what is covered by unit-tests, what is vetted by 
review, what is totally unknown… You could have different types 
of guarantees for a library and users could take advantage of it 
based on the requirements for the application (high security vs 
computer game).


Maybe you don't assume this, but internally to the logic, this 
is what happens: You verify the postcondition under the 
assumption of the precondition. I.e. in this case you verify 
the postcondition of the program under the assumption that it 
never executes.


It is not always possible to prove termination, so that would be 
unfortunate. It would be more useful to be able to restrict the 
precondition to what you cover by unit-tests. Or to have multiple 
sets of pre/postconditions (proven-by-machine vs 
vetted-by-review).


Then if you can prove that the caller fulfills the precondition 
you are able to optimize based on the pre/post condition and 
assume the post-condition if it is machine-verified.


That would give new optimization possibilities.

That does not mean that you should never be able to use a 
function outside the verified range (by machine proof), but then 
you need to assert the results at runtime if you want verified 
correctness.


Of course you may still execute such a program, but the ⦃ ⦄ 
assertions become irrelevant to this execution.


Not really, they could still trap.

Note that an optimizer needs to be able to use deduced facts 
for program transformations, or it is useless. To an optimizer, 
a contradiction means 'unreachable'.


No, to an optimizer a contradiction means all bets are off for 
the postcondition to hold:


program(x) != executable(x)

That is why __assume(P) must always hold on any reachable path 
(e.g. any path that the optimizer considers to be a target for 
optimization).


But __assume is not a regular precondition, it has a counterpart 
in the optimization invariant (postcondition):


program(x) == executable(x), x not creating contradictions by 
__assume(…)


The optimizer is presuming that anything goes as long as this 
optimization invariant can be upheld. This is not a postcondition 
specified by the programmer at the meta-level. It is implicit.


And the optimizer does not check the implicit precondition, at 
all…


to deduce those facts. Do you agree that _if_ 'assume' is to be 
seen as an optimizer hint, then 'assume(false)' means 'assume 
unreachable'?


No. I think the optimizer should only use assumptions when they 
are guaranteed to hold by the caller or the language axioms. Or 
in the rare case by other machine verifiable axioms 
(specifications for hardware registers etc).



Of course it does, that is why Hoare Logic and SSA exist.


They exist for the virtue of being convenient formalisms for 
some applications, but they can be embedded into e.g. set 
theory just fine.


Not sure what you mean by this and how this relates to imperative 
programming.



Deduction lacks a notion of time.
...


What is 'a notion of time' and how does HL provide it?


It doesn't. Which is why it trips when you don't prove 
termination. But HL does not do program transformations.



Re: assert semantic change proposal

2014-08-09 Thread via Digitalmars-d

On Thursday, 7 August 2014 at 23:13:15 UTC, David Bregman wrote:

and use unreachable() if you know 100% it holds.


This is just another name for the same thing, it isn't any more 
or less dangerous.


Of course it is. unreachable() could lead to this:

f(){ @unreachable}
g(){...}

f:
g:
   machinecode


assume(false) just states that the post condition has not been 
proved/tested/verified yet. That does not signify that no code 
will reach it. It just means that the returned value is 
uncertain. It does not mean that the code will never be executed.


It is also dangerous to mix them up since CTFE can lead to 
assume(false). If you want safety and consistency you need to 
have a designated notation for @unreachable.



Encouraging assume(false) sounds like an aberration to me.


If you accept my definition of assume there is no problem with 
it, it just means unreachable. I showed how this follows 
naturally from my definition.


Your definition is lacking. It mixes up two entirely different 
perspectives, the deductive mode and the imperative mode of 
reasoning.


You need to provide proofs if you attempt using assume(X) and 
assert(false).


There is no mechanism for providing proofs or checking proofs 
in D.


Actually, unit tests do. D just lacks the semantics to state that 
the input accepted by the precondition has been fully covered in 
unit tests.


Well, any correct mathematical formalism of the code must 
preserve its semantics, so it doesn't really matter which one 
we talk about.


Preserve the semantics of the language, yes. For pure functions 
there is no difference. And that is generally what is covered by 
plain HL. The code isn't modified, but you only prove 
calculations, not output.


Different lines (or basic blocks, I guess is the right term 
here) can have different reachability, so you could just give 
each one its own P.


basic blocks is an artifact of a specific intermediate 
representation. It can cease to exist. For instance MIPS and ARM 
have quite different conditionals than x86. With a MIPS 
instruction set a comparison is an expression returning 1 or 0 
and you can get away from conditionals by multiplying successive 
calculations with it to avoid branch penalties. This is also a 
common trick for speeding up SIMD…
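
A minimal sketch of that trick in D (illustrative, not from the post): the 0/1 comparison result is used as a multiplier, so no conditional branch is needed.

// Branchless select: on MIPS, 'slt' materialises (x < threshold) as 0 or 1.
int selectBranchless(int a, int b, int x, int threshold)
{
    int cond = (x < threshold) ? 1 : 0; // 0 or 1, lowered without a taken branch
    return cond * a + (1 - cond) * b;   // yields a when cond==1, b otherwise
}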


Hoare Logic deals with triplets: precondition, statement, 
postcondition.  (assume,code,assert)


You start with assume(false), then widen it over the code until 
the assert is covered for the input you care about.


P1 and P2 are propositions, so what are the contents of those? 
The ones you use for your P1 => X and P2 => Y?


This is just an implementation detail of the compiler. Probably 
each basic block has its own P, which is conservatively 
approximated with some data flow analysis. (Disclaimer: I am 
not a compiler expert)


Oh, there is of course no problem with inserting a @unreachable 
barrier-like construct when you have proven assume(false), but it 
is a special case. It is not an annotation for deduction.


Anyways, I don't have much more time to keep justifying my 
definition of assume. If you still don't like mine, maybe you 
can give your definition of assume, and demonstrate how that 
definition is more useful in the context of D.


assume(x) is defined by the preconditions in Hoare Logic 
triplets. This is what you should stick to for in(){} 
preconditions.


A backend_assume(x) is very much implementation defined and is a 
precondition where the postcondition is:


program(input) == executable(input) for all x except those 
excluded by backend_assume.


However an assume tied to defining a function's degree of 
verification is not a backend_assume.


That is where D risks going seriously wrong: conflating 
postconditions (assert) with preconditions (assume) and 
conflating local guarantees (by unit tests or proofs) with global 
optimization and backend mechanisms (backend_assume).





Re: assert semantic change proposal

2014-08-09 Thread Timon Gehr via Digitalmars-d
On 08/09/2014 09:26 PM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Thursday, 7 August 2014 at 23:13:15 UTC, David Bregman wrote:

and use unreachable() if you know 100% it holds.


This is just another name for the same thing, it isn't any more or
less dangerous.


Of course it is. unreachable() could lead to this:

f(){ @unreachable}
g(){...}

f:
g:
machinecode


assume(false) just states that the post condition has not been
proved/tested/verified yet. That does not signify that no code will
reach it.


Formally, that's what it assumes to be the case. If you can prove 
'false', this means that the code is unreachable.


if(Q){
⦃ Q ⦄
if(¬Q){
⦃ Q ∧ ¬Q ⦄
⦃ false ⦄ // this means 'unreachable'
assert(false); // goes through
}
}

After assuming 'false' you can prove false.
⦃ false ⦄ still means 'unreachable' if it is 'assume'd.
(Yes, of course it can also be read as 'inconsistent', if the code is 
actually reachable, but proving code 'unreachable' which can actually be 
reached also just means that the reasoning and the assumptions were 
'inconsistent'. You can also read 'inconsistent' as: 'this program will 
never actually be executed', etc. David's interpretation of the 
formalism is adequate.)





Encouraging assume(false) sounds like an aberration to me.


If you accept my definition of assume there is no problem with it,
it just means unreachable. I showed how this follows naturally from my 
definition.


Your definition is lacking. It mixes up two entirely different perspectives,
the deductive mode and the imperative mode of reasoning.


I think you should elaborate on this a little if you want to make a 
point. E.g. I don't think that such an opposition of 'reasoning modes' 
even exists.





Hoare Logic deals with triplets: precondition, statement, postcondition.  
(assume,code,assert)


'assume' and 'assert' are statements:

⦃ P ∧ Q ⦄  // ← precondition
assert(Q); // ← statement
⦃ P ⦄  // ← postcondition

'assume' is just a dual concept:

⦃ P ⦄  // ← precondition
assume(Q);  // ← statement
⦃ P ∧ Q ⦄  // ← postcondition

There is nothing 'lacking' here, right?
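
A minimal sketch of the dual statement in D (illustrative; 'assume' here is a hypothetical helper, not a D primitive):

// Statement form of 'assume': the optimizer may take q as true past this point.
void assume(bool q)
{
    if (!q)
        assert(0);   // declared unreachable; traps if the assumption is wrong
}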



Re: assert semantic change proposal

2014-08-09 Thread Walter Bright via Digitalmars-d

On 8/7/2014 10:44 AM, Daniel Murphy wrote:

I meant asserts in pre-conditions when used with inheritance.  It's a pass if
any of the preconditions pass, so the compiler runs them in turn and catches all
but the last.


That's right, they are OR'd together.
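
A small D example of what the OR means for inherited in-contracts (illustrative names):

class Base
{
    void draw(int x)
    in { assert(x > 0); }        // base precondition
    body { /* ... */ }
}

class Derived : Base
{
    override void draw(int x)
    in { assert(x < 100); }      // the call passes if either contract passes
    body { /* ... */ }
}

// new Derived().draw(500) is accepted: the base contract (x > 0) holds,
// so the failing derived contract (x < 100) is ignored; they are OR'd.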


Re: assert semantic change proposal

2014-08-09 Thread Walter Bright via Digitalmars-d

On 8/7/2014 2:29 AM, Daniel Murphy wrote:

And we've also got asserts in pre-conditions, which are recoverable by 
definition.


Pre-condition asserts are not recoverable.



Re: assert semantic change proposal

2014-08-09 Thread H. S. Teoh via Digitalmars-d
On Sat, Aug 09, 2014 at 05:54:21PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/7/2014 10:44 AM, Daniel Murphy wrote:
 I meant asserts in pre-conditions when used with inheritance.  It's a
 pass if any of the preconditions pass, so the compiler runs them in
 turn and catches all but the last.
 
 That's right, they are OR'd together.

Which is fine if they are evaluated as a single expression. The problem
right now is that the OR is simulated by catching AssertErrors, which by
definition only happens when the program in an invalid state, so the
fact that this works at all is only a lucky coincidence. :-(  If
you're not so lucky you end up catching a genuine program bug but
ignoring it because the other condition in the simulated OR turned out
to pass. This unfortunately puts DbC in D on rather shaky ground.
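
A simplified sketch of what the generated check effectively does today (illustrative, names made up), showing why a genuine bug inside contract code gets swallowed:

void baseInContract()    { assert(false, "base precondition failed"); }
void derivedInContract() { /* passes */ }

void checkInContracts()
{
    try
    {
        baseInContract();        // may throw AssertError for *any* reason,
    }                            // including a genuine bug in the contract code
    catch (Throwable)
    {
        derivedInContract();     // "OR": the call is accepted if this one passes
    }
}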


T

-- 
Кто везде - тот нигде. (He who is everywhere is nowhere.)


Re: assert semantic change proposal

2014-08-09 Thread via Digitalmars-d

On Saturday, 9 August 2014 at 20:22:36 UTC, Timon Gehr wrote:
Formally, that's what it assumes to be the case. If you can 
prove 'false', this means that the code is unreachable.


No, if you prove false that means either that it is 
nonterminating or that it cannot be proved by the 
assumptions, hence the assumptions need strengthening.



⦃ Q ∧ ¬Q ⦄
⦃ false ⦄ // this means 'unreachable'
assert(false); // goes through


implication is not equivalence


After assuming 'false' you can prove false.
⦃ false ⦄ still means 'unreachable' if it is 'assume'd.


False only means that the postcondition does not hold. If the 
precondition is false then the triplet is valid i.e. correct.


(Yes, of course it can also be read as 'inconsistent', if the 
code is actually reachable, but proving code 'unreachable'


No, you can have this:

main(){
Precondition(false)
Stuff()
Postcondition(anything)
}

This states that Stuff has not yet been verified for any input. 
It does not state that Stuff will never execute.


Your definition is lacking. It mixes up two entirely 
different perspectives,

the deductive mode and the imperative mode of reasoning.


I think you should elaborate on this a little if you want to 
make a point. E.g. I don't think that such an opposition of 
'reasoning modes' even exists.


Of course it does, that is why Hoare Logic and SSA exist. 
Deduction lacks a notion of time.



⦃ P ∧ Q ⦄  // ← precondition
assert(Q); // ← statement
⦃ P ⦄  // ← postcondition


Makes no sense. Assert is not an imperative statement, it is an 
annotation. It is meta.




Re: assert semantic change proposal

2014-08-09 Thread Walter Bright via Digitalmars-d

On 8/9/2014 5:52 PM, Walter Bright wrote:

On 8/7/2014 2:29 AM, Daniel Murphy wrote:

And we've also got asserts in pre-conditions, which are recoverable by
definition.


Pre-condition asserts are not recoverable.



Eh, ignore that.


Re: assert semantic change proposal

2014-08-08 Thread Daniel Murphy via Digitalmars-d
H. S. Teoh via Digitalmars-d  wrote in message 
news:mailman.689.1407436311.16021.digitalmar...@puremagic.com...



P.S. The current implementation also does not distinguish between a
broken contract vs. a bug or problem encountered by the contract code.
For example:


Last time I checked, it also treated OOM and access violations (on Windows) as 
contract failures.  Any Throwable. 



Re: assert semantic change proposal

2014-08-08 Thread Daniel Murphy via Digitalmars-d

Sean Kelly  wrote in message news:trnhymzwfkflgotxy...@forum.dlang.org...

Oh man, I forgot about this.  I wonder how this works from a codegen 
perspective.  Is precondition inheritance properly implemented yet?


'Properly' is a bit strong.  It is somewhat implemented.  I think there are 
still corner cases, when some functions have no preconditions, that people are 
arguing over, but it does work for simple inheritance examples. 



Re: assert semantic change proposal

2014-08-08 Thread Daniel Murphy via Digitalmars-d
H. S. Teoh via Digitalmars-d  wrote in message 
news:mailman.684.1407434193.16021.digitalmar...@puremagic.com...



Hmph. Now it's making me wonder if preconditions should be treated as a
*single* assert outside the body of 'in{}', rather than allowing
individual asserts inside. Perhaps the body of 'in{}' should return a
boolean where false indicates a failed precondition,


I think that ship has sailed. 



Re: assert semantic change proposal

2014-08-08 Thread H. S. Teoh via Digitalmars-d
On Fri, Aug 08, 2014 at 07:15:22PM +1000, Daniel Murphy via Digitalmars-d wrote:
 H. S. Teoh via Digitalmars-d  wrote in message
 news:mailman.684.1407434193.16021.digitalmar...@puremagic.com...
 
 Hmph. Now it's making me wonder if preconditions should be treated as
 a *single* assert outside the body of 'in{}', rather than allowing
 individual asserts inside. Perhaps the body of 'in{}' should return a
 boolean where false indicates a failed precondition,
 
 I think that ship has sailed.

I know. It was more of a wish than anything.


T

-- 
Don't modify spaghetti code unless you can eat the consequences.


Re: assert semantic change proposal

2014-08-08 Thread Vlad Levenfeld via Digitalmars-d
On Friday, 8 August 2014 at 14:27:56 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Fri, Aug 08, 2014 at 07:15:22PM +1000, Daniel Murphy via 
Digitalmars-d wrote:

H. S. Teoh via Digitalmars-d  wrote in message
news:mailman.684.1407434193.16021.digitalmar...@puremagic.com...

Hmph. Now it's making me wonder if preconditions should be 
treated as
a *single* assert outside the body of 'in{}', rather than 
allowing
individual asserts inside. Perhaps the body of 'in{}' should 
return a

boolean where false indicates a failed precondition,

I think that ship has sailed.


I know. It was more of a wish than anything.


T


Since the ship HAS sailed... why must contracts be elided with 
--release? Seems to me if asserts go away, then eliding in and 
out is redundant. It'd be nice if I could put all my recoverable 
pre and post conditions inside those blocks as well. Just for 
organization's sake.


Re: assert semantic change proposal

2014-08-08 Thread Jonathan M Davis via Digitalmars-d

On Friday, 8 August 2014 at 18:43:22 UTC, Vlad Levenfeld wrote:
Since the ship HAS sailed... why must contracts be elided with 
--release? Seems to me if asserts go away, then eliding in and 
out is redundant. It'd be nice if I could put all my 
recoverable pre and post conditions inside those blocks as 
well. Just for organization's sake.


An in or out block could have other code which is doing work in 
preparation for asserting rather than just assertions, so leaving 
them in wouldn't make sense.
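
For example (an illustrative sketch, not from the post), an out block may loop and compute before asserting rather than contain bare assertions only:

int[] sortedCopy(const(int)[] a)
out (result) {
    // work done in preparation for asserting, not just a list of assertions:
    foreach (i; 1 .. result.length)
        assert(result[i - 1] <= result[i]);
}
body {
    import std.algorithm : sort;
    auto r = a.dup;
    sort(r);
    return r;
}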


- Jonathan M Davis


Re: assert semantic change proposal

2014-08-08 Thread Sebastiaan Koppe via Digitalmars-d

On Wednesday, 6 August 2014 at 00:47:28 UTC, Walter Bright wrote:

On 8/3/2014 7:26 PM, Tove wrote:

It is possible, just not as a default enabled warning.

Some compilers offers optimization diagnostics which can be 
enabled by a switch,
I'm quite fond of those as it's a much faster way to go 
through a list of
compiler highlighted failed/successful optimizations rather 
than being forced to
check the asm output after every new compiler version or minor 
code refactoring.


In my experience, it actually works fine in huge projects: 
even if there are false positives, you can analyse what changes 
from the previous version, as well as ignore modules which you 
know are not performance critical.


If you build dmd in debug mode, and then run it with -O --c, it 
will give you a list of all the data flow transformations it 
does.


But the list is a blizzard on non-trivial programs.
What about making a diff file? Or are the transformations so deep 
that it would be impossible to link them to lines in the source 
code?


Re: assert semantic change proposal

2014-08-07 Thread David Bregman via Digitalmars-d
On Thursday, 7 August 2014 at 03:54:12 UTC, Ola Fosheim Grøstad 
wrote:

«The __assume(0) statement is a special case.»

So, it does not make perfect sense. If it did, it would not be 
a special case?


It doesn't have to be a special case if you define it in the 
right way - in terms of control flow. Then the interpretation of 
assume(false) as unreachable follows quite naturally:


instead of defining assume(x) to mean that x is an axiom, define 
assume(x) to mean that P => x is an axiom, where P is the 
proposition that control flow reaches the assume statement.


so assume(false) actually means P => false, or simply !P

and !P means !(control flow reaches the assume), as desired.


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d
On Thursday, 7 August 2014 at 05:41:22 UTC, Matthias Bentrup 
wrote:

If assume is independent of control flow, then clearly this
cannot be related to assert.


Well, both assume and assert are independent of control flow 
since you don't have execution or control flow in mathematics.


The dependency is tied to the renaming of variables into 
constants which allows Hoare-Logic to work on variables, not 
only constants. However, for proper constants, you don't need 
that limit for neither assert nor assume?


if(B){
   assume(B && PI==3.14)
} else {
   assume(!B && PI==3.14) // must hold here too since it is a 
constant

}

apply hoare logic rule for if:

if this holds:

{B ∧ P} S {Q}   ,   {¬B ∧ P } T {Q}

then this holds:

{P} if B then S else T endif {Q}

which gives:

assume(PI==3.14)
if(B){
   assume(PI==3.14)
} else {
   assume(PI==3.14)
}
assume(PI==3.14)


(Usually you only have assume at the top in the preconditions, 
since you otherwise don't get a proof :-).


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Thursday, 7 August 2014 at 06:04:38 UTC, David Bregman wrote:
On Thursday, 7 August 2014 at 03:54:12 UTC, Ola Fosheim Grøstad 
wrote:

«The __assume(0) statement is a special case.»

So, it does not make perfect sense. If it did, it would not be 
a special case?


It doesn't have to be a special case if you define it in the 
right way - in terms of control flow. Then the interpretation 
of assume(false) as unreachable follows quite naturally:


instead of defining assume(x) to mean that x is an axiom, 
define assume(x) to mean that P => x is an axiom, where P is the 
proposition that control flow reaches the assume statement.


so assume(false) actually means P => false, or simply !P

and !P means !(control flow reaches the assume), as desired.


Let's try the opposite way instead:

assume(Q)
if(B1){
   if(B2){
   }
}

implies:

assume(Q)
if(B1){
   assume(B1 && Q)
   if(B2){
  assume(B1 && B2 && Q)
   }
}

So your P in the inner if statement is B1 && B2.

However assume(P && false) is still a fallacy…

Or?


Re: assert semantic change proposal

2014-08-07 Thread Johannes Pfau via Digitalmars-d
Am Wed, 06 Aug 2014 14:57:50 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 On 8/6/2014 10:18 AM, Sean Kelly wrote:
  So from a functional perspective, assuming this is a multithreaded
  app, what should the procedure be when an AssertError is thrown in
  some thread?
 
 Print message, stop the process and all its threads.
 
 Rationale: the program is in an unknown, invalid state, which will
 include any thread (or process) that has access to memory shared with
 that failed thread.
 
 Attempting to continue operating is irresponsible if the program is
 doing or can do anything important.

Again: unittests?


Re: assert semantic change proposal

2014-08-07 Thread Johannes Pfau via Digitalmars-d
Am Wed, 06 Aug 2014 20:19:33 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 On 8/6/2014 5:22 PM, Sean Kelly wrote:
  On Wednesday, 6 August 2014 at 22:01:47 UTC, Walter Bright wrote:
  On 8/6/2014 12:34 PM, Sean Kelly wrote:
  There is no facility for forcing a clean termination of another
  thread.
 
  Understood - so the only option is to force an unclean termination.
 
  So in all cases, it seems like what we should be doing is call
  exit(1) or the equivalent and forego any attempt at cleanup.
  Otherwise an assertion failure in a spawned thread would be handled
  even more strongly than in the main thread.
 
  Alternately, if we build on std.concurrency we could have the next
  call to receive throw a FatalError or something to force it to
  terminate.  But there's no telling how long this would be.  I
  assume you prefer the forceful route.
 
 Yes, I prefer that.
 
 

I can see some benefits here, but how does this interact with asserts in
unittests? You might special-case these, but it's also common practice
to use these asserts not only in the main unittest function but also in
helper functions.


Re: assert semantic change proposal

2014-08-07 Thread Matthias Bentrup via Digitalmars-d
On Thursday, 7 August 2014 at 06:32:18 UTC, Ola Fosheim Grøstad 
wrote:
On Thursday, 7 August 2014 at 05:41:22 UTC, Matthias Bentrup 
wrote:

If assume is independent of control flow, then clearly this
cannot be related to assert.


Well, both assume and assert are independent of control flow 
since you don't have execution or control flow in mathematics.


The dependency is tied to the renaming of variables into 
constants which allows Hoare-Logic to work on variables, not 
only constants. However, for proper constants, you don't need 
that limit for neither assert nor assume?


if(B){
   assume(B && PI==3.14)
} else {
   assume(!B && PI==3.14) // must hold here too since it is a 
constant

}

apply hoare logic rule for if:

if this holds:

{B ∧ P} S {Q}   ,   {¬B ∧ P } T {Q}

then this holds:

{P} if B then S else T endif {Q}

which gives:

assume(PI==3.14)
if(B){
   assume(PI==3.14)
} else {
   assume(PI==3.14)
}
assume(PI==3.14)


(Usually you only have assume at the top in the preconditions, 
since you otherwise don't get a proof :-).


So the D function (note that a is constant)

int silly() {
  enum int a = 1;
  if( a == 2 ) {
assert( a == 2 );
  }
  return a;
}

has undefined semantics (at least in debug mode), because it 
contains a false assertion ?


That is not my understanding of assert.


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d
On Thursday, 7 August 2014 at 08:12:23 UTC, Matthias Bentrup 
wrote:

So the D function (note that a is constant)

int silly() {
  enum int a = 1;
  if( a == 2 ) {
assert( a == 2 );
  }
  return a;
}

has undefined semantics (at least in debug mode), because it 
contains a false assertion ?


It isn't reachable so it is not part of the program? If you 
support generic programming you probably should limit proofs to 
reachable portions of library code.


However the specification or the implementation of it appears to 
be flawed, so perhaps it should be considered incorrect if this 
was in application code.


You usually try to prove post conditions in terms of pre 
conditions. The invariants, variants and asserts are just tools 
to get there. So this is not a typical issue.


Re: assert semantic change proposal

2014-08-07 Thread Matthias Bentrup via Digitalmars-d
On Thursday, 7 August 2014 at 08:35:21 UTC, Ola Fosheim Grøstad 
wrote:
On Thursday, 7 August 2014 at 08:12:23 UTC, Matthias Bentrup 
wrote:

So the D function (note that a is constant)

int silly() {
 enum int a = 1;
 if( a == 2 ) {
   assert( a == 2 );
 }
 return a;
}

has undefined semantics (at least in debug mode), because it 
contains a false assertion ?


It isn't reachable so it is not part of the program? If you 
support generic programming you probably should limit proofs to 
reachable portions of library code.


However the specification or the implementation of it appears 
to be flawed, so perhaps it should be considered incorrect if 
this was in application code.


You usually try to prove post conditions in terms of pre 
conditions. The invariants, variants and asserts are just tools 
to get there. So this is not a typical issue.


So reachability has an influence, but control flow hasn't ?


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 21:57:50 UTC, Walter Bright wrote:

On 8/6/2014 10:18 AM, Sean Kelly wrote:
So from a functional perspective, assuming this is a 
multithreaded app, what
should the procedure be when an AssertError is thrown in some 
thread?


Print message, stop the process and all its threads.

Rationale: the program is in an unknown, invalid state, which 
will include any thread (or process) that has access to memory 
shared with that failed thread.


Strictly speaking we would need to kill only those threads. But I 
guess it wouldn't be worth the effort, because the remaining 
threads are most likely not expecting them to disappear and 
couldn't recover from this.


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d
On Thursday, 7 August 2014 at 08:52:43 UTC, Matthias Bentrup 
wrote:


So reachability has an influence, but control flow hasn't ?


You have to define what you want to prove.

1. The post condition to hold?

2. A sound specification to be fully implemented?

3. That executed code is tested when it executes, but ignores that 
important code should have been executed instead.


(1) and (2) are what program verification aims for.
(1) is what design by contract tries to establish.
(3) is weak and is what C asserts do.


Re: assert semantic change proposal

2014-08-07 Thread Daniel Murphy via Digitalmars-d

Walter Bright  wrote in message news:lrvglk$25dt$1...@digitalmars.com...

Asserts in unittests are not the same as asserts elsewhere. I wasn't a big 
fan of this, but I was probably the only one :-)


This has the fun side-effect that asserts in unittest helper functions are 
fatal, while ones in the unittest body are not!


And we've also got asserts in pre-conditions, which are recoverable by 
definition. 



Re: assert semantic change proposal

2014-08-07 Thread Walter Bright via Digitalmars-d

On 8/7/2014 2:10 AM, Marc Schütz schue...@gmx.net wrote:

On Wednesday, 6 August 2014 at 21:57:50 UTC, Walter Bright wrote:

On 8/6/2014 10:18 AM, Sean Kelly wrote:

So from a functional perspective, assuming this is a multithreaded app, what
should the procedure be when an AssertError is thrown in some thread?


Print message, stop the process and all its threads.

Rationale: the program is in an unknown, invalid state, which will include any
thread (or process) that has access to memory shared with that failed thread.


Strictly speaking we would need to kill only those threads.


No, all the threads. Threads share memory, so corruption can spread from one to 
the next.




Re: assert semantic change proposal

2014-08-07 Thread Walter Bright via Digitalmars-d

On 8/7/2014 12:54 AM, Johannes Pfau wrote:

Again: unittests?


Asserts in unittests are not the same as asserts elsewhere. I wasn't a big fan 
of this, but I was probably the only one :-)


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Thursday, 7 August 2014 at 09:26:01 UTC, Walter Bright wrote:

On 8/7/2014 2:10 AM, Marc Schütz schue...@gmx.net wrote:
On Wednesday, 6 August 2014 at 21:57:50 UTC, Walter Bright 
wrote:

On 8/6/2014 10:18 AM, Sean Kelly wrote:
So from a functional perspective, assuming this is a 
multithreaded app, what
should the procedure be when an AssertError is thrown in 
some thread?


Print message, stop the process and all its threads.

Rationale: the program is in an unknown, invalid state, which 
will include any
thread (or process) that has access to memory shared with 
that failed thread.


Strictly speaking we would need to kill only those threads.


No, all the threads. Threads share memory, so corruption can 
spread from one to the next.


Yes, that's what I was saying: those threads ... that ha[ve] 
access to memory shared with that failed thread.


But even without real sharing of memory, a failed assertion in 
one thread might result from invalid data being sent to that 
thread from another one (via spawn or send). Better to just kill 
all threads.


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 22:31:00 UTC, Walter Bright wrote:

On 8/6/2014 5:14 AM, Marc Schütz schue...@gmx.net wrote:
We're not living in an ideal world, unfortunately. It is bad 
enough that
programs are wrong as they are written, we don't need the 
compiler to transform
these programs to do something that is still wrong, but also 
completely
different. This would make your goal of fixing the program 
very hard to achieve.
In an extreme case, a small error in several million lines of 
code could
manifest at a completely different place, because you cannot 
rely on any

determinism once undefined behaviour is involved.


You are technically correct, and I used to worry about that. 
But after using assert()s for 30 years, I can only think of 
this happening once. assert()s tend to trip very shortly after 
the actual error occurred. Of course, there's a bit of an art 
to appropriate placement of those assert()s.


But for those 30 years you only used asserts with the semantics 
they have in C, not with the semantics you want for D. I don't 
see how you can come to the conclusion that the same is true for 
the new style assertions.


The problems with finding errors that I talk about are not 
because the cause of the error is far from the assert. They stem 
from the fact that - with the proposed semantics - the asserts 
themselves can influence code in different parts of the program, 
far away from both the cause of the error and the failed assert.


Re: assert semantic change proposal

2014-08-07 Thread David Bregman via Digitalmars-d
On Thursday, 7 August 2014 at 07:53:44 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 7 August 2014 at 06:04:38 UTC, David Bregman wrote:
On Thursday, 7 August 2014 at 03:54:12 UTC, Ola Fosheim 
Grøstad wrote:

«The __assume(0) statement is a special case.»

So, it does not make perfect sense. If it did, it would not 
be a special case?


It doesn't have to be a special case if you define it in the 
right way - in terms of control flow. Then the interpretation 
of assume(false) as unreachable follows quite naturally:


instead of defining assume(x) to mean that x is an axiom, 
define assume(x) to mean that P => x is an axiom, where P is the 
proposition that control flow reaches the assume statement.


so assume(false) actually means P => false, or simply !P

and !P means !(control flow reaches the assume), as desired.


Let's try the opposite way instead:

assume(Q)
if(B1){
   if(B2){
   }
}

implies:

assume(Q)
if(B1){
   assume(B1 && Q)
   if(B2){
  assume(B1 && B2 && Q)
   }
}

So your P in the inner if statement is B1 && B2.

However assume(P && false) is still a fallacy…

Or?


If you use the definition of assume that I gave, assume(P && 
false) declares the axiom


P => (P && false)

which is again equivalent to !P.
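
Spelled out step by step, in the same notation used above:

P => (P ∧ false)
≡  ¬P ∨ (P ∧ false)
≡  ¬P ∨ false
≡  ¬P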


Re: assert semantic change proposal

2014-08-07 Thread H. S. Teoh via Digitalmars-d
On Thu, Aug 07, 2014 at 07:29:43PM +1000, Daniel Murphy via Digitalmars-d wrote:
 Walter Bright  wrote in message news:lrvglk$25dt$1...@digitalmars.com...
 
 Asserts in unittests are not the same as asserts elsewhere. I wasn't
 a big fan of this, but I was probably the only one :-)
 
 This has the fun side-effect that asserts in unittest helper functions
 are fatal, while ones in the unittest body are not!
 
 And we've also got asserts in pre-conditions, which are recoverable by
 definition.

Huh, what? I thought asserts in pre-conditions are non-recoverable,
because they imply that user code has broken the contract governing the
use of that function.


T

-- 
Answer: Because it breaks the logical sequence of discussion.
Question: Why is top posting bad?


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Thursday, 7 August 2014 at 14:09:26 UTC, David Bregman wrote:
If you use the definition of assume that I gave, assume(P && 
false) declares the axiom


P => (P && false)

which is again equivalent to !P.


Well, P => (P && X) is equivalent to P => X.

But you are allowed to have false in the preconditions, since 
we only are interested in


preconditions => postconditions

assume(input!=0) is ok… it does not say unreachable.
It says, postconditions are not required to hold for input==0…


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d
On Thursday, 7 August 2014 at 15:35:30 UTC, Ola Fosheim Grøstad 
wrote:

assume(input!=0) is ok… it does not say unreachable.
It says, postconditions are not required to hold for input==0…


And… assume(false) is a precondition that says that you don't 
have to care about the postconditions at all… Basically, assume 
anything…


Re: assert semantic change proposal

2014-08-07 Thread David Bregman via Digitalmars-d
On Thursday, 7 August 2014 at 15:35:30 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 7 August 2014 at 14:09:26 UTC, David Bregman wrote:
If you use the definition of assume that I gave, assume(P && 
false) declares the axiom


P => (P && false)

which is again equivalent to !P.


Well, P => (P && X) is equivalent to P => X.


Right. So if X is false, the axiom we declare is !P, not false 
or a fallacy.


But you are allowed to have false in the preconditions, since 
we only are interested in


preconditions => postconditions


That's ok. There is no contradiction if P is false.

False => X is true for any X, so the axiom we declare is just a 
tautology.



So is the control flow definition / unreachability argument clear 
now?




 assume(input!=0) is ok… it does not say unreachable.
 It says, postconditions are not required to hold for input==0…


And… assume(false) is a precondition that says that you don't 
have to care about the postconditions at all… Basically, 
assume anything…


I don't get this part, maybe you can reword it if I haven't 
already answered the question.


Re: assert semantic change proposal

2014-08-07 Thread Sean Kelly via Digitalmars-d

On Thursday, 7 August 2014 at 07:57:14 UTC, Johannes Pfau wrote:


Attempting to continue operating is irresponsible if the 
program is doing or can do anything important.


Again: unittests?


unittests shouldn't use assert.  Or alternately, we can have 
onAssertError (the assertion failure callback) do something 
different if unit testing is taking place than when the program 
is actually running.


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Thursday, 7 August 2014 at 16:36:09 UTC, David Bregman wrote:
Right. So if X is false, the axiom we declare is !P, not 
false or a fallacy.


But Pn is always true where you assume…?

False => X is true for any X, so the axiom we declare is just a 
tautology.


I don't follow. If any assumption is false then anything goes…

So is the control flow definition / unreachability argument 
clear now?


Nope.


 assume(input!=0) is ok… it does not say unreachable.
 It says, postconditions are not required to hold for 
 input==0…


And… assume(false) is a precondition that says that you don't 
have to care about the postconditions at all… Basically, 
assume anything…


I don't get this part, maybe you can reword it if I haven't 
already answered the question.


assume(true)
r = calc()
assert(specification_test(r)) //required to hold for any situation

assume(false)
r=calc()
//r can be anything


// input!=0
assume(input!=0) // true
r=calc(input)
assert(specification(input)==r) // required to hold for input!=0


//input=0
assume(input!=0)  // false
r=calc(input)
assert(specification(input)==r) // may or may not hold



Re: assert semantic change proposal

2014-08-07 Thread David Gileadi via Digitalmars-d

On 8/7/14, 9:51 AM, Sean Kelly wrote:

On Thursday, 7 August 2014 at 07:57:14 UTC, Johannes Pfau wrote:


Attempting to continue operating is irresponsible if the program is
doing or can do anything important.


Again: unittests?


unittests shouldn't use assert.  Or alternately, we can have
onAssertError (the assertion failure callback) do something different if
unit testing is taking place than when the program is actually running.


Considering that TDPL, Ali's Book[1] and dlang.org[2] all use assert 
when introducing unittest, I think the latter solution is probably the 
right one.


[1] http://ddili.org/ders/d.en/unit_testing.html
[2] http://dlang.org/unittest.html


Re: assert semantic change proposal

2014-08-07 Thread Sean Kelly via Digitalmars-d
On Thursday, 7 August 2014 at 15:21:14 UTC, H. S. Teoh via 
Digitalmars-d wrote:


Huh, what? I thought asserts in pre-conditions are 
non-recoverable,
because they imply that user code has broken the contract 
governing the use of that function.


I suspect this new treatment of assert will affect the way that 
people check for errors.  For example, in the server apps I 
write, I don't always want a hard stop of the entire process if I 
detect a logic error because it provides a potential attack 
vector for a user to bring down an entire service.  As soon as 
someone determines that a particular request kills all 
transactions on a process...


This isn't just paranoia in a new direction either.  The servers 
I work on are literally under constant attack by hackers.  
Largely botnets trying to crack passwords, but not exclusively 
that.  And so I might want to at least drain out all pending 
requests before halting the process, or I might want to just kill 
the transaction that detected the logic error and leave 
everything running, assuming I can be sure that the potential 
collateral damage is local to the transaction (which is typically 
the case).


Longer term, this new treatment of assert will probably encourage 
better program design which guarantees memory isolation by 
multiprocessing.  I favor this direction, but messaging has a way 
to go first.  And by extension, I do think that memory sharing in 
a language that allows direct memory access is the real risk.  
That a logic error might cause a thread / process to generate an 
invalid message is not an issue any more than it would be an 
issue to call a function with bad parameters.  In both cases, the 
implication is that the caller is in an invalid state, not the 
callee.  So long as there is no vector through which one context 
can alter the data accessed by another context without proper 
validation, the corruption cannot spread.


Re: assert semantic change proposal

2014-08-07 Thread David Bregman via Digitalmars-d
On Thursday, 7 August 2014 at 16:56:01 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 7 August 2014 at 16:36:09 UTC, David Bregman wrote:
Right. So if X is false, the axiom we declare is !P, not 
false or a fallacy.


But Pn is always true where you assume…?


No. Remember P is control flow can reach this assume. If the 
assume is unreachable, then P is false. The compiler doesn't 
always know that P is false though, so we can use assume(false) 
to supply that as an axiom.


Here's an example:

x = rand() & 3; // x is 0, 1, 2 or 3.
switch(x) {
  case 0: foo(); break;
  case 1: bar(); break;
  case 2: baz(); break;
  case 3: qux(); break;
  default: assume(false); // hopefully redundant with a decent compiler.
}

now the compiler can optimize this down to

{foo, bar, baz, qux}[x]();

or even better, a computed jump.
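
A minimal self-contained D sketch of that lowered form (illustrative; foo..qux stand in for the case bodies above):

void foo() {}
void bar() {}
void baz() {}
void qux() {}

void dispatch(uint x)
{
    // the lowered form: an indexed call through a table instead of a switch
    static immutable void function()[4] table = [&foo, &bar, &baz, &qux];
    table[x & 3]();   // masking keeps the index provably in range
}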



False => X is true for any X, so the axiom we declare is just 
a tautology.


I don't follow. If any assumption is false then anything goes…



Using my definition of assume, the axiom declared is P => x. If P 
is false, then the axiom declared is false => x, which is true 
for any x.


http://en.wikipedia.org/wiki/Truth_table#Logical_implication


So is the control flow definition / unreachability argument 
clear now?


Nope.


Ok, is it clear now? If not, which parts are not clear?



Re: assert semantic change proposal

2014-08-07 Thread Daniel Murphy via Digitalmars-d
H. S. Teoh via Digitalmars-d  wrote in message 
news:mailman.674.1407424873.16021.digitalmar...@puremagic.com...



 And we've also got asserts in pre-conditions, which are recoverable by
 definition.

Huh, what? I thought asserts in pre-conditions are non-recoverable,
because they imply that user code has broken the contract governing the
use of that function.


I meant asserts in pre-conditions when used with inheritance.  It's a pass 
if any of the preconditions pass, so the compiler runs them in turn and 
catches all but the last. 



Re: assert semantic change proposal

2014-08-07 Thread H. S. Teoh via Digitalmars-d
On Fri, Aug 08, 2014 at 03:44:06AM +1000, Daniel Murphy via Digitalmars-d wrote:
 H. S. Teoh via Digitalmars-d  wrote in message
 news:mailman.674.1407424873.16021.digitalmar...@puremagic.com...
 
  And we've also got asserts in pre-conditions, which are recoverable
  by definition.
 
 Huh, what? I thought asserts in pre-conditions are non-recoverable,
 because they imply that user code has broken the contract governing
 the use of that function.
 
 I meant asserts in pre-conditions when used with inheritance.  It's a
 pass if any of the preconditions pass, so the compiler runs them in
 turn and catches all but the last.

Oh, I see it now.

Hmph. Now it's making me wonder if preconditions should be treated as a
*single* assert outside the body of 'in{}', rather than allowing
individual asserts inside. Perhaps the body of 'in{}' should return a
boolean where false indicates a failed precondition, so that instead of
writing:

auto func(...)
in {
assert(cond1);
assert(cond2);
assert(cond3);
}
body { ... }

you'd write instead:

auto func(...)
in { cond1 && cond2 && cond3 }
body { ... }

and the compiler translates this to:

bool __func_in(...) { return cond1 && cond2 && cond3; }
auto func(...) {
assert(__func_in(...));
// body goes here
}

Well, technically it should be the caller that invokes __func_in(), but
that's a different issue.


T

-- 
We are in class, we are supposed to be learning, we have a teacher...
Is it too much that I expect him to teach me??? -- RL


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Thursday, 7 August 2014 at 17:38:22 UTC, David Bregman wrote:
Using my definition of assume, the axiom declared is P => x. If P 
is false, then the axiom declared is false => x, which is true 
for any x.


I don't see how this will work out. If it is truly unreachable 
then you don't need any assume, because then you can assert 
anything. Therefore you shouldn't.


valid:

assume(true)
while(true){}
assert(angels_love_pixiedust) //proven true now…

But there are alternatives to HL that I don't know much about. 
You might want to look at Matching Logic Reachability. Looks 
complicated:


http://fsl.cs.illinois.edu/pubs/rosu-stefanescu-2012-fm.pdf



Ok, is it clear now? If not, which parts are not clear?


It doesn't make any sense to me since I don't see how you would 
discriminate between your Ps.


{P1}
while(true) skip; endwhile
{P2}

How do you define P1 and P2 to be different?


Re: assert semantic change proposal

2014-08-07 Thread David Bregman via Digitalmars-d
On Thursday, 7 August 2014 at 18:02:08 UTC, Ola Fosheim Grøstad 
wrote:
I don't see how this will work out. If it is truly unreachable 
then you don't need any assume


I just gave you a concrete example of where assume(false) might 
be useful for optimizing a switch statement. Just because it is 
truly unreachable doesn't mean the compiler is aware of that.




It doesn't make any sense to me since I don't see how you would 
discriminate between your Ps.


{P1}
while(true) skip; endwhile
{P2}

How do you define P1 and P2 to be different?


I'm not sure how you'd define them to be the same. They're 
different lines of code, for one thing.




Re: assert semantic change proposal

2014-08-07 Thread H. S. Teoh via Digitalmars-d
On Thu, Aug 07, 2014 at 10:54:53AM -0700, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Aug 08, 2014 at 03:44:06AM +1000, Daniel Murphy via Digitalmars-d 
 wrote:
  H. S. Teoh via Digitalmars-d  wrote in message
  news:mailman.674.1407424873.16021.digitalmar...@puremagic.com...
  
   And we've also got asserts in pre-conditions, which are recoverable
   by definition.
  
  Huh, what? I thought asserts in pre-conditions are non-recoverable,
  because they imply that user code has broken the contract governing
  the use of that function.
  
  I meant asserts in pre-conditions when used with inheritance.  It's a
  pass if any of the preconditions pass, so the compiler runs them in
  turn and catches all but the last.
 
 Oh, I see it now.
[...]

P.S. The current implementation also does not distinguish between a
broken contract vs. a bug or problem encountered by the contract code.
For example:

auto func(T...)(T args)
in {
assert(checkConsistency(args));
}
body {
...
}

bool checkConsistency(T...)(T args) {
int i;
for (i=0; i < args.length; i++) {
...
i = 1000; // suppose an inadvertent typo causes this
}
assert(i == args.length); // so this will fail
...
return result;
}

Suppose the i=1000 line is a mistake by the programmer, so it's a
genuine bug in checkConsistency. This would trip the assert in
checkConsistency, which would throw an AssertError. But if it was called
from the in-contract of func(), and func() is in a derived class for
which the base class version of func() has an in-contract that passes,
then we're basically swallowing the AssertError triggered by the bug
inside checkConsistency(), and thereby causing the program to continue
running in an invalid state!

Basically, the problem is that the in-contract can't tell the difference
between a precondition failure (triggered by the outer assert) and a
genuine program bug (triggered by the inner assert).
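
To make the failure mode concrete, the inheritance lowering behaves 
roughly like this (helper names and contract bodies are invented; this 
is a sketch of the semantics, not the code dmd actually generates):

import core.exception : AssertError;

bool __base_in(int x)  { assert(x > 0);   return true; }  // base class in-contract
bool __own_in(int x)   { assert(x < 100); return true; }  // derived class in-contract

void __derived_in(int x)
{
    try
    {
        __base_in(x);
        return;             // any contract in the chain passing is enough
    }
    catch (AssertError e)
    {
        // swallowed -- whether it came from a violated precondition or
        // from a bug inside the contract code itself, as in
        // checkConsistency() above
    }
    __own_in(x);            // only the last contract's failure propagates
}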

This makes DbC in D really shaky.  :-(


T

-- 
Freedom: (n.) Man's self-given right to be enslaved by his own depravity.


Re: assert semantic change proposal

2014-08-07 Thread Sean Kelly via Digitalmars-d

On Thursday, 7 August 2014 at 17:44:02 UTC, Daniel Murphy wrote:
H. S. Teoh via Digitalmars-d  wrote in message 
news:mailman.674.1407424873.16021.digitalmar...@puremagic.com...


 And we've also got asserts in pre-conditions, which are 
 recoverable by

 definition.

Huh, what? I thought asserts in pre-conditions are 
non-recoverable,
because they imply that user code has broken the contract 
governing the

use of that function.


I meant asserts in pre-conditions when used with inheritance.  
It's a pass if any of the preconditions pass, so the compiler 
runs them in turn and catches all but the last.


Oh man, I forgot about this.  I wonder how this works from a 
codegen perspective.  Is precondition inheritance properly 
implemented yet?


Re: assert semantic change proposal

2014-08-07 Thread via Digitalmars-d

On Thursday, 7 August 2014 at 18:20:20 UTC, David Bregman wrote:
On Thursday, 7 August 2014 at 18:02:08 UTC, Ola Fosheim Grøstad 
wrote:
I just gave you a concrete example of where assume(false) might 
be useful for optimizing a switch statement.


But to me it sounds dangerous if it actually is reachable or if 
the compiler somehow thinks it is. So you should use halt() if it 
should be unreachable, and use unreachable() if you know 100% it 
holds. Encouraging assume(false) sounds like an aberration to me.


You need to provide proofs if you attempt using assume(X) and 
assert(false).


This is valid, but it is derived, not postulated:

assume(P)
while(false){

   assume(false)  // anything goes is acceptable over S here
   S… // since it never happens…
   assert(P) // valid since the assumption is false

}
assert(P) // valid since the content is just a noop


When you postulate without proving you open a can of worms:

assume(P)
while(B){

   assume(false)  // woops! Should have been assume(B∧P)
   S…  // ohoh, we just stated "anything goes" in here…

   assert(P)  // invalid

}
assert(P)  // invalid since the loop invariant doesn't hold.




How do you define P1 and P2 to be different?


I'm not sure how you'd define them to be the same. They're 
different lines of code, for one thing.


I don't believe there is such a thing as lines of code. We are 
doing math. Everything just is. We don't have code, we have 
mathematical dependencies/relations. P1 and P2 are propositions, 
so what are the contents of those? The ones you use for your P1=>X 
and P2=>Y? You have to define newassume(P,X) === assume(P=>X). But 
what is P?


Anyway, you don't need those… You can use a regular 
assume(false), but not for indicating unreachable. You indicate 
that the postcondition does not have to hold. Otherwise you will 
run into trouble.


But if you think about while() as a short-hand that is expanded 
into pure math, then it makes sense. If you have while(true){} 
then the loop cannot generate new mathematical constants… Thus it 
cannot be a source for new unique propositions that let you 
discriminate between before and after entering the loop? Or?


Re: assert semantic change proposal

2014-08-07 Thread David Bregman via Digitalmars-d
On Thursday, 7 August 2014 at 19:41:44 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 7 August 2014 at 18:20:20 UTC, David Bregman wrote:
On Thursday, 7 August 2014 at 18:02:08 UTC, Ola Fosheim 
Grøstad wrote:
I just gave you a concrete example of where assume(false) 
might be useful for optimizing a switch statement.


But to me it sounds dangerous if it actually is reachable or if 
the compiler somehow think it is.


It is dangerous, and shouldn't be used lightly. That's what 
started this whole thread.



So you should use halt() if it should be unreachable,


Not if your goal is to completely optimize away any unreachable 
code.



and use unreachable() if you know 100% it holds.


This is just another name for the same thing, it isn't any more 
or less dangerous.



Encouraging assume(false) sounds like an aberration to me.


If you accept my definition of assume there is no problem with 
it, it just means unreachable. I showed how this follows 
naturally from my definition.




You need to provide proofs if you attempt using assume(X) and 
assert(false).


There is no mechanism for providing proofs or checking proofs in 
D.



How do you define P1 and P2 to be different?


I'm not sure how you'd define them to be the same. They're 
different lines of code, for one thing.


I don't believe there is such a thing as lines of code. We 
are doing math. Everything just is. We don't have code, we 
have mathematical dependencies/relations.


Well, any correct mathematical formalism of the code must 
preserve its semantics, so it doesn't really matter which one we 
talk about.


Different lines (or basic blocks, I guess is the right term here) 
can have different reachability, so you could just give each one 
its own P.


P1 and P2 are propositions, so what are the contents of those? 
The ones you use for your P1=>X and P2=>Y?


This is just an implementation detail of the compiler. Probably 
each basic block has its own P, which is conservatively 
approximated with some data flow analysis. (Disclaimer: I am not 
a compiler expert)


Anyway, you don't need those… You can use a regular 
assume(false), but not for indicating unreachable. You indicate 
that the postcondition does not have to hold. Otherwise you 
will run into trouble.


But if you think about while() as a short-hand that is expanded 
into pure math, then it makes sense. If you have while(true){} 
then the loop cannot generate new mathematical constants… Thus 
it cannot be a source for new unique propositions that let you 
discriminate between before and after entering the loop? Or?


Sorry, I don't understand much of this.

Anyways, I don't have much more time to keep justifying my 
definition of assume. If you still don't like mine, maybe you can 
give your definition of assume, and demonstrate how that 
definition is more useful in the context of D.


Re: assert semantic change proposal

2014-08-06 Thread Tofu Ninja via Digitalmars-d

On Wednesday, 6 August 2014 at 00:52:32 UTC, Walter Bright wrote:

On 8/3/2014 4:51 PM, Mike Farnsworth wrote:
This all seems to have a very simple solution, to use 
something like: expect()


I see code coming that looks like:

   expect(x > 2);  // must be true
   assert(x > 2);  // check that it is true

All I can think of is, shoot me now :-)


How about something like
@expected assert(x > 2); or @assumed assert(x > 2);

It wouldn't introduce a new keyword, but still introduces the
expected/assumed semantics. You should keep in mind that you
might have to make a compromise, regardless of your feelings on
the subject.


Also, I am going to try to say this in as respectful a way as I
can...

Please stop responding in such a dismissive way, I think it is
already pretty obvious that some are getting frustrated by these
threads. Responding in a dismissive way makes it seem like you
don't take the arguments seriously.


Re: assert semantic change proposal

2014-08-06 Thread eles via Digitalmars-d

On Wednesday, 6 August 2014 at 04:05:41 UTC, David Bregman wrote:
On Wednesday, 6 August 2014 at 01:11:55 UTC, Jeremy Powers via 
Digitalmars-d wrote:


How can there be any question? This is a change in the 
compiler, a change in the docs, change in what your program 
does, change of the very bytes in the executable. If my program


I feel that, at this stage, this is only about how a compiler flag, 
specifically -release, works. For other configurations, there is 
no problem: even if the optimizer optimizes based on asserts, 
the asserts themselves are part of the code: the code is there and 
the assertion will fail before execution enters the optimized 
path. This is just like any other optimization, nothing special 
about it.


The problem with -release might be formulated as: it optimizes 
based on code that is no longer present (the asserts are wiped 
out). It keeps the optimization, but it dismisses the guardrails.


Re: assert semantic change proposal

2014-08-06 Thread Jeremy Powers via Digitalmars-d


 The main argument seems to revolve around whether this is actually a
 change
 or not.  In my (and others') view, the treatment of assert as 'assume' is
 not a change at all.  It was in there all along, we just needed the wizard
 to tell us.


 How can there be any question? This is a change in the compiler, a change
 in the docs, change in what your program does, change of the very bytes in
 the executable. If my program worked before and doesn't now, how is that
 not a change? This must be considered a change by any reasonable definition
 of the word change.

 I don't think I can take seriously this idea that someone's unstated,
 unmanifested intentions define change more so than things that are .. you
 know.. actual real changes.


Yes, sorry, there will be actual consequences if the optimizations are
implemented.  What I meant with the somewhat facetious statement was that
there is no semantic change - broken code will still be broken, it will
just be broken in a different way.

If you subscribe to the idea that a failed assertion indicates all
subsequent code is invalid, then subsequent code is broken (and undefined,
if the spec says it is undefined).  The change would be clarifying this in
the spec, and dealing with the fallout of previously
broken-but-still-working code behaving differently under optimization.

At least, that's how I understand it... I hope I am not mischaracterizing
others' positions here (let me know, and I'll shut up).




 Well I think I outlined the issues in the OP. As for solutions, there
 have been some suggestions in this thread, the simplest is to leave things
 as is and introduce the optimizer hint as a separate function, assume().

 I don't think there was any argument presented against a separate function
 besides that Walter couldn't see any difference between the two behaviors,
 or the intention thing which doesn't really help us here.


An argument against the separate function: we already have a function,
called assert.  Those that want the nonoptimizing version (a disable-able
'if false throw' with no wider meaning) should get their own method darnit.


I guess the only real argument against it is that pre-existing asserts
 contain significant optimization information that we can't afford to not
 reuse.


Yes.  This gets to the argument - asserts contain information about the
program.  Specifically, a statement about the valid program state at a
given point.  So we should treat them as such.


 But this is a claim I'm pretty skeptical of.


Ah man, thought I had you.


 Andrei admitted it's just a hunch at this point. Try looking through your
 code base to see how many asserts would be useful for optimizing.


Ironically, I don't tend to use asserts at all in my code.  I do not want
code that will throw or not throw based on a compiler flag.  Why am I even
arguing about this stuff?

If asserts were used for optimizing, I would look at actually using them
more.


(Can we programmatically (sp?) identify and flag/resolve
 issues that occur from a mismatch of expectations?)


 I'm not an expert on this, but my guess is it's possible in theory but
 would never happen in practice. Such things are very complex to implement,
 if Walter won't agree to a simple and easy solution, I'm pretty sure
 there's no way in hell he would agree to a complex one that takes a massive
 amount of effort.


If gcc et al. do similar optimizations, how do they handle messaging?


Re: assert semantic change proposal

2014-08-06 Thread Andrei Alexandrescu via Digitalmars-d

On 8/5/14, 11:28 PM, Tofu Ninja wrote:

On Wednesday, 6 August 2014 at 00:52:32 UTC, Walter Bright wrote:

On 8/3/2014 4:51 PM, Mike Farnsworth wrote:

This all seems to have a very simple solution, to use something like:
expect()


I see code coming that looks like:

   expect(x > 2);  // must be true
   assert(x > 2);  // check that it is true

All I can think of is, shoot me now :-)


How about something like
@expected assert(x > 2); or @assumed assert(x > 2);

It wouldn't introduce a new keyword, but still introduces the
expected/assumed semantics. You should keep in mind that you
might have to make a compromise, regardless of your feelings on
the subject.


I think assert is good to use for optimization, and debug assert 
would be a good choice for soft assertions. Care must be exercised with 
tying new optimizations to build flags.



Also, I am going to try to say this in as respectful a way as I
can...

Please stop responding in such a dismissive way, I think it is
already pretty obvious that some are getting frustrated by these
threads. Responding in a dismissive way makes it seem like you
don't take the arguments seriously.


I have difficulty figuring how such answers can be considered 
dismissive. The quoted code is considered an antipattern at least e.g. 
at my workplace. (Wouldn't pass review, and disproportionate insistence 
on such might curb one's career.) Even though some might not agree with 
Walter's opinion, it's entirely reasonable to express dislike of that 
code; I don't quite get why that would be considered dismissive. I think 
we're at the point where everybody understands one another, and there 
must be a way to express polite but definite disagreement. What would 
that be?



Thanks,

Andrei



Re: assert semantic change proposal

2014-08-06 Thread Jeremy Powers via Digitalmars-d

 I feel that, at this stage, this is only about how a compiler flag,
 specifically -release, works. For other configurations, there is no
 problem: even if the optimizer optimizes based on asserts, the asserts
 themselves are part of the code: the code is there and the assertion will fail
 before execution enters the optimized path. This is just like any other
 optimization, nothing special about it.

 The problem with -release might be formulated as: it optimizes based
 on code that is no longer present (the asserts are wiped out). It keeps
 the optimization, but it dismisses the guardrails.



Yes!

[lengthy and imprecise rambling about assert definition omitted]


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 06:56:40 UTC, eles wrote:
I feel that, at this stage, is only about how a compiler glag, 
specifically -release works. For other configurations, there 
is no problem: event if the optimizer optimizes based on 
asserts, the asserts themselves are part of the code: code is 
there and the assertion will fail before execution enters the 
optimized path. This is just like any other optimization, 
nothing special about it.


Not right:

b = a+1
assume(b<C)

implies

assume(a+1<C)
b = a+1


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 07:19:21 UTC, Andrei Alexandrescu 
wrote:
The quoted code is considered an antipattern at least e.g. at 
my workplace.


What about:

«
if(x==0){ …free of x…}
…free of x…
assume(x!=0)
»

being equivalent to

«
assume(x!=0)
if(x==0){ …free of x…}
…free of x…
»

I think we're at the point where everybody understands one 
another


Really? I am at the point where I realize that a significant portion 
of programmers have gullible expectations of their own ability to 
produce provably correct code and a very sloppy understanding of 
what computing is.


So now we don't have Design by Contract, but Design by Gullible 
Assumptions.


Great…


Re: assert semantic change proposal

2014-08-06 Thread eles via Digitalmars-d
On Wednesday, 6 August 2014 at 07:19:21 UTC, Andrei Alexandrescu 
wrote:

On 8/5/14, 11:28 PM, Tofu Ninja wrote:
On Wednesday, 6 August 2014 at 00:52:32 UTC, Walter Bright 
wrote:

On 8/3/2014 4:51 PM, Mike Farnsworth wrote:


I think assert is good to use for optimization, and debug 
assert would be a good choice for soft assertions. Care must


Conceptually, this means a release assert (both in debug and 
release builds) and a debug assert (only in debug builds).


Thus, question: is it acceptable to optimize a (release) build 
based on code that is present only in another (debug) build?


Re: assert semantic change proposal

2014-08-06 Thread Walter Bright via Digitalmars-d

On 8/5/2014 11:28 PM, Tofu Ninja wrote:

Please stop responding in such a dismissive way, I think it is
already pretty obvious that some are getting frustrated by these
threads. Responding in a dismissive way makes it seem like you
don't take the arguments seriously.


I responded to the equivalent design proposal several times already, with 
detailed answers. This one is shorter, but the essential aspects are there. I 
know those negative aspects came across because they are addressed with your 
counter:


 How about something like
 @expected assert(x > 2); or @assumed assert(x > 2);
 It wouldn't introduce a new keyword, but still introduces the
expected/assumed semantics.

The riposte:

1. it's long with an unappealing hackish look
2. it follows in the C++ tradition of the best practice being the long ugly way, 
and the deprecated practice is the straightforward way (see arrays in C++)
3. users will be faced with two kinds of asserts, with a subtle difference that 
is hard to explain, hard to remember which is which, and will most likely use 
inappropriately
4. everyone who wants faster assert optimizations will have to rewrite their 
(possibly extensive) use of asserts that we'd told them was best practice. I 
know I'd be unhappy about having to do such to my D code.



 You should keep in mind that you might have to make a compromise, regardless 
 of your feelings on the subject.


This is not about my feelings, other than my desire to find the best design 
based on a number of tradeoffs.



I'll sum up with the old saw that any programming problem can be solved with 
another level of indirection. I submit a corollary that any language issue can 
be solved by adding another keyword or compiler flag. The (much) harder thing is 
to solve a problem with an elegant solution that does not involve new 
keywords/flags, and fits in naturally.


Re: assert semantic change proposal

2014-08-06 Thread eles via Digitalmars-d
On Wednesday, 6 August 2014 at 07:29:02 UTC, Ola Fosheim Grøstad 
wrote:

On Wednesday, 6 August 2014 at 06:56:40 UTC, eles wrote:



Not right:

b = a+1
assume(b<C)

implies

assume(a+1<C)
b = a+1


b = a+1
if(C<=b) exit(1);

implies

if(C<=a+1) exit(1);
b = a+1

Is it not the same? Still, one would allow the optimizer to exit 
before executing the b=a+1 line (in the first version) based on that 
condition (considering no volatile variables).


I stick to my point: as long as the code is there, optimization 
based on it is acceptable (as long as the functionality of the 
program is not changed, of course). The sole open question is 
what to do when the code that is used for optimization is 
discarded.


Would you accept optimization of a C program based on code that 
is guarded by:


#ifndef _NDEBUG
//code that could be used for optimization
#endif

in the -D_NDEBUG version?

(I am not convinced 100% that for D is the same, but should help 
with the concepts)


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 01:11:55 UTC, Jeremy Powers via 
Digitalmars-d wrote:
That's in the past. This is all about the pros and cons of 
changing it now

and for the future.



The main argument seems to revolve around whether this is 
actually a change
or not.  In my (and others') view, the treatment of assert as 
'assume' is
not a change at all.  It was in there all along, we just needed 
the wizard

to tell us.


This is already the first misunderstanding: The argument is about 
whether it's a good idea, not whether it's newly introduced or 
has been the intended meaning since assert's conception.






The below can safely be ignored, as I just continue the pedantic
discussions


OK, but my point was you were using a different definition of 
undefined
behavior. We can't communicate if we aren't using the same 
meanings of

words.


Yes, very true.  My definition of undefined in this case hinges 
on my
definition of what assert means.  If a failed assert means all 
code after
it is invalid, then by definition (as I interpret the 
definition) that code
is invalid and can be said to have undefined behaviour.  That 
is, it makes
sense to me that it is specified as undefined, by the spec that 
is
incredibly unclear.  I may be reading too much into it here, 
but this
follows the strict definition of undefined - it is undefined 
because it is
defined to be undefined.  This is the 'because I said so' 
defense.


Of course you can define your own concept and call it 
undefined, but I don't see how it matters. The concept 
described by the usual definition of undefined still exists, 
and it still has very different implications than your concept 
has. To give a more practical example:


You're writing an authentication function. It takes a username 
and a password, and returns true or false, depending on whether 
the password is correct for this username. Unfortunately, the 
verification algorithm is wrong: due to an integer overflow in 
the hash calculation, it rejects some valid passwords, but never 
accepts invalid ones. The program is clearly incorrect, but its 
behaviour is still well-defined and predictable (overflow is not 
undefined in D).


Now, if the flaw in the algorithm were due to an actual undefined 
operation, everything could happen, including the function 
accepting invalid passwords. I hope it's clear that this is a 
very different class of brokenness.
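
A minimal sketch of that scenario (the hashing scheme and names are 
invented purely for illustration): the stored hashes were computed with 
a 64-bit accumulator, but the check below accidentally uses a 32-bit 
one. Some valid passwords are rejected because their hash wraps, yet an 
invalid password would still need a collision to be accepted -- wrong, 
but every step is well-defined D, since integer overflow simply wraps.

ulong storedHash(string user) { return 0x1_0000_002A; }  // stand-in for the real lookup

bool authenticate(string user, string password)
{
    uint h = 0;                    // bug: should have been ulong
    foreach (c; password)
        h = h * 131 + c;           // wraps on overflow -- defined behaviour in D
    return h == storedHash(user);  // comparison is against the full 64-bit value
}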


My stance is that this new/old definition is a good thing, as 
it matches
how I thought things were already, and any code that surfaces 
as broken
because of it was already broken in my definition.  Therefore 
this 'change'
is good, does not introduce breaking changes, and arguments 
about such
should be redirected towards mitigation and fixing of 
expectations.


In an attempt to return this discussion to something useful, 
question:


If assert means what I think it means (and assuming we agree on 
what the
actual two interpretations are), how do we allay concerns about 
it?  Is
there anything fundamentally/irretrievably bad if we use this 
new/old
definition?  (Can we programmatically (sp?) identify and 
flag/resolve

issues that occur from a mismatch of expectations?)


My understanding (which is probably the same as that of most 
people participating in the discussion, because as I said above, 
I _don't_ think the argument is about a misunderstanding on this 
level):


Walter's assert:
* Expresses a statement that the programmer intends to be true. 
It is only checked in non-release mode.
* The compiler can assume that it is true - even in release mode 
- because the programmer explicitly said so, and the compiler may 
not have figured it out by itself (similar to casts, which also 
express assumptions by the programmer that the compiler cannot 
know otherwise).

* Asserting a condition that is false is undefined behaviour.

The other assert:
* Expresses a statement that the programmer intends to be true. 
It is only checked in non-release mode.
* Because it is unlikely that the programmer has proved the 
correctness in the general case, the compiler must not assume it 
is true unless it can prove it to be, either at compile time, or 
with a runtime check. Release mode disables the runtime checks.
* Asserting a condition that is false either raises an error at 
runtime, aborts compilation, or doesn't do anything. It never 
causes undefined behaviour by itself.


As I already wrote elsewhere, an assert with the 
suggested/intended behaviour is a very dangerous tool that should 
not be used as widely as it is today. If the asserted condition 
is wrong (for whatever reason), it would create not just wrong 
behaviour, but undefined behaviour (as described above, not your 
concept).


H.S. Teoh however suggested to extend compile time checking for 
assertions. I believe this is the way to go forward, and it has 
great potential. What I don't agree with, of course, is to just 
believe anything in the assertions to be true without 

Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Tuesday, 5 August 2014 at 21:17:14 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Tue, Aug 05, 2014 at 08:11:16PM +, via Digitalmars-d 
wrote:
On Tuesday, 5 August 2014 at 18:57:40 UTC, H. S. Teoh via 
Digitalmars-d

wrote:
Exactly. I think part of the problem is that people have been 
using
assert with the wrong meaning. In my mind, 'assert(x)' 
doesn't mean
abort if x is false in debug mode but silently ignore in 
release
mode, as some people apparently think it means. To me, it 
means at

this point in the program, x is true.  It's that simple.

A language construct with such a meaning is useless as a safety
feature.


I don't see it as a safety feature at all.


Sorry, I should have written correctness feature. I agree that 
it's not very useful for safety per se. (But of course, safety 
and correctness are not unrelated.)





If I first have to prove that the condition is true before I 
can
safely use an assert, I don't need the assert anymore, because 
I've

already proved it.


I see it as future proofing: I may have proven the condition 
for *this*
version of the program, but all software will change (unless 
it's dead),
and change means the original proof may no longer be valid, but 
this

part of the code is still written under the assumption that the
condition holds. In most cases, it *does* still hold, so in 
general

you're OK, but sometimes a change invalidates an axiom that, in
consequence, invalidates the assertion.  Then the assertion 
will trip
(in non-release mode, of course), telling me that my program 
logic has
become invalid due to the change I made.  So I'll have to fix 
the

problem so that the condition holds again.


Well, I think it's unlikely that you actually did prove the 
assert condition, except in trivial situations. This is related 
to the discussion about the ranges example, so I'll respond there.





If it is intended to be an optimization hint, it should be 
implemented
as a pragma, not as a prominent feature meant to be widely 
used. (But
I see that you have a different use case, see my comment 
below.)


And here is the beauty of the idea: rather than polluting my 
code with
optimization hints, which are out-of-band (and which are 
generally
unverified and may be outright wrong after the code undergoes 
several
revisions), I am stating *facts* about my program logic that 
must hold
-- which therefore fits in very logically with the code itself. 
It even
self-documents the code, to some extent. Then as an added 
benefit, the
compiler is able to use these facts to emit more efficient 
code. So to
me, it *should* be a prominent, widely-used feature. I would 
use it, and

use it a *lot*.
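
A concrete sketch of the kind of assert being described (the function 
is invented for illustration): the assert documents an invariant, and 
under the proposed semantics it would also license the optimizer to 
drop the checks that follow from it.

T middleElement(T)(T[] values)
{
    assert(values.length > 0, "caller must not pass an empty slice");
    return values[values.length / 2];  // index is in bounds given the assert
}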


I think this is where we disagree mainly: What you call facts is 
something I see as intentions that *should* be true, but are not 
*proven* to be so. Again, see below.






The optimizer only guarantees (in theory) consistent program
behaviour if the program is valid to begin with. If the 
program is
invalid, all bets are off as to what its optimized version 
does.


There is a difference between invalid and undefined: A program 
is
invalid (buggy), if it doesn't do what its programmer 
intended,
while undefined is a matter of the language specification. 
The
(wrong) behaviour of an invalid program need not be undefined, 
and

often isn't in practice.


To me, this distinction doesn't matter in practice, because in 
practice,
an invalid program produces a wrong result, and a program with 
undefined
behaviour also produces a wrong result. I don't care what kind 
of wrong
result it is; what I care is to fix the program to *not* 
produce a wrong

result.


Please see my response to Jeremy; the distinction is important:
http://forum.dlang.org/thread/hqxoldeyugkazolll...@forum.dlang.org?page=11#post-eqlyruvwmzbpemvnrebw:40forum.dlang.org




An optimizer may only transform code in a way that keeps the 
resulting

code semantically equivalent. This means that if the original
unoptimized program is well-defined, the optimized one will 
be too.


That's a nice property to have, but again, if my program 
produces a
wrong result, then my program produces a wrong result. As a 
language
user, I don't care that the optimizer may change one wrong 
result to a
different wrong result.  What I care about is to fix the code 
so that
the program produces the *correct* result. To me, it only 
matters that
the optimizer does the Right Thing when the program is correct 
to begin
with. If the program was wrong, then it doesn't matter if the 
optimizer
makes it a different kind of wrong; the program should be fixed 
so that

it stops being wrong.


We're not living in an ideal world, unfortunately. It is bad 
enough that programs are wrong as they are written; we don't need 
the compiler to transform these programs to do something that is 
still wrong, but also completely different. This would make your 
goal of fixing the program very hard to achieve. In an extreme 
case, a small error in several 

Re: assert semantic change proposal

2014-08-06 Thread Artur Skawina via Digitalmars-d
On 08/06/14 10:28, eles via Digitalmars-d wrote:
 On Wednesday, 6 August 2014 at 07:29:02 UTC, Ola Fosheim Grøstad wrote:
 On Wednesday, 6 August 2014 at 06:56:40 UTC, eles wrote:
 
 Not right:

 b = a+1
 assume(b<C)

 implies

 assume(a+1<C)
 b = a+1
 
 b = a+1
 if(C<=b) exit(1);
 
 implies
 
 if(C<=a+1) exit(1);
 b = a+1
 
 Is not the same? Still, one would allow the optimizer to exit before 
 executing b=a+1 line (in the first version) based on that condition 
 (considering no volatile variables).

It is *very* different. In the `exit` case this kind of transformation
is only valid if that assignment has no externally observable effects.
`assume` makes the transformation *unconditionally* valid, even when
executing 'b=a+1' would affect the output of the program.

The compiler can /assume/ that the condition never fails. Hence, it does
not need to generate any code to check that the assumption is valid.
It does not need to emit any code for any path that would only be taken
if the condition would be false. If that ever happens the CPU can start
executing any random instructions. That's ok, since you explicitly told
the compiler that this situation will never occur. Hence, it can also
skip generating any code for any path that unconditionally leads to a
failing `assume` directive. Because it is all dead code that will never
be executed, as long as all `assume` conditions are true.

artur


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 12:41:16 UTC, Artur Skawina via 
Digitalmars-d wrote:
The compiler can /assume/ that the condition never fails. 
Hence, it does
not need to generate any code to check that the assumption is 
valid.


Exactly, worse example using a coroutine:

«
label:
…
while(running) { // forevah!
   … yield …
}
…
assume(anything local not assigned below label) // reachable, but 
never executed

»

is equivalent to:

«
assume(anything local not assigned below label) // optimize based 
on this

label:
…
while(running) {
   … yield …
}
»

Woops?


Re: assert semantic change proposal

2014-08-06 Thread Matthias Bentrup via Digitalmars-d
On Wednesday, 6 August 2014 at 13:31:54 UTC, Ola Fosheim Grøstad 
wrote:
On Wednesday, 6 August 2014 at 12:41:16 UTC, Artur Skawina via 
Digitalmars-d wrote:
The compiler can /assume/ that the condition never fails. 
Hence, it does
not need to generate any code to check that the assumption is 
valid.


Exactly, worse example using a coroutine:

«
label:
…
while(running) { // forevah!
   … yield …
}
…
assume(anything local not assigned below label) // reachable, 
but never executed

»

is equivalent to:

«
assume(anything local not assigned below label) // optimize 
based on this

label:
…
while(running) {
   … yield …
}
»

Woops?


But even if there is no explicit assert()/assume() given by the 
developer, I guess the optimizer is free to insert assumes that 
are provably correct, e.g.


  while(running) {
...don't assign to running, don't break...
  }

is equivalent to

  while(running) {
...don't assign to running, don't break...
  }
  assume(!running);

is equivalent to

  assume(!running);
  while(running) {
...don't assign to running, don't break...
  }

is equivalent to

  assume(!running);


So I take the compiler is allowed to throw away code without any 
asserts already ?


Re: assert semantic change proposal

2014-08-06 Thread Tofu Ninja via Digitalmars-d
On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
wrote:
So I take the compiler is allowed to throw away code without 
any asserts already ?


Yes it can, but only in the cases where it can prove it is the 
same.


The main difference is that assume does not need to be proved; it 
is stated to always be true, so no proof is needed. Some have said 
that the compiler can still try to prove what it can and issue 
warnings if it is provably untrue, but it would actually be a 
little silly on the part of the compiler to waste its time 
trying to disprove things it "knows" are true. If it "knows" it 
is true then why even bother checking?


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
wrote:

is equivalent to

  while(running) {
...don't assign to running, don't break...
  }
  assume(!running);


You have to prove termination to get there? Plain Hoare logic 
cannot deal with non-terminating loops. Otherwise you risk 
proving any theorem below the loop to hold?


But I was talking about explicit assumptions made by the 
programmer, who does not know the rules… It was obviously wrong 
to assume anything to hold if the loop never terminates.


The rule for proving loop termination is quite involved: you 
basically prove that every time you move through the loop you 
get something less than you had before (ensured by the variant t 
and the well-founded ordering < on D). This is no fun, and a good 
reason to hate reasoning about correctness:


a < b: is a well-founded ordering on the set D

if this holds:

[P ∧ B ∧ t ∈ D ∧ t = z]   S   [P ∧ t ∈ D ∧ t < z]

then this holds:

[P ∧ t ∈ D]   while B do S done   [¬B ∧ P ∧ t ∈ D]

I think… or maybe not. Who can tell from just glancing at it?
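
A tiny concrete instance may make the rule less opaque (a sketch, not a 
formal proof): take the variant t = n, the well-founded ordering < on 
the naturals D, and the invariant P: sum + n*(n+1)/2 == 55.

void main()
{
    uint n = 10, sum = 0;
    while (n > 0)          // B: n > 0
    {
        sum += n;
        --n;               // t = n strictly decreases and stays in D
    }
    assert(n == 0 && sum == 55);   // after the loop: ¬B ∧ P
}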


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 15:29:13 UTC, Ola Fosheim Grøstad 
wrote:
On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
wrote:

is equivalent to

 while(running) {
   ...don't assign to running, don't break...
 }
 assume(!running);


You have to prove termination to get there? Plain Hoare logic 
cannot deal with non-terminating loops. Otherwise you risk 
proving any theorem below the loop to hold?


This is an assume, not an assert. The user tells the compiler 
that `running` is false after the loop, it can thus conclude that 
the loop will never run. So there is no non-terminating loop in 
this case.


Re: assert semantic change proposal

2014-08-06 Thread Artur Skawina via Digitalmars-d
On 08/06/14 17:02, Matthias Bentrup via Digitalmars-d wrote:
 But even if there is no explicit assert()/assume() given by the developer, I 
 guess the optimizer is free to insert assumes that are provably correct, e.g.
 
   while(running) {
 ...don't assign to running, don't break...
   }
 
 is equivalent to
 
   while(running) {
 ...don't assign to running, don't break...
   }
   assume(!running);

In the running==true case that 'assume' will never be reached...

 is equivalent to
 
   assume(!running);
   while(running) {
 ...don't assign to running, don't break...
   }

... so, no, this transformation is not a valid one.

 So I take the compiler is allowed to throw away code without any asserts 
 already ?

It can do whatever it wants, as long as the observable behavior does
not change. `Assume` removes that restriction as soon as a failing
assert is reached. 

artur


Re: assert semantic change proposal

2014-08-06 Thread Tofu Ninja via Digitalmars-d

On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:

Thank you for the well thought out response.

I responded to the equivalent design proposal several times 
already, with detailed answers. This one is shorter, but the 
essential aspects are there. I know those negative aspects came 
across because they are addressed with your counter:


I was not trying to attack you, but rather inform you. I think 
that is all I want to say on the subject so I am going to stop 
talking about it.



1. it's long with an unappealing hackish look


I find this a positive: the proposed assert has quite dangerous 
implications, so having a slightly hackish look will serve to 
discourage using it when it is not obviously needed.


2. it follows in the C++ tradition of the best practice being 
the long ugly way, and the deprecated practice is the 
straightforward way (see arrays in C++)


That is implying that the best practice is your version of 
assert. As it can introduce undefined behavior, I find it hard to 
believe.


3. users will be faced with two kinds of asserts, with a subtle 
difference that is hard to explain, hard to remember which is 
which, and will most likely use inappropriately


This is why I think it would be best added as an annotation on 
assert. People who don't care enough to look up what the extra 
annotation does will just use assert by itself and have no risk 
of undefined behavior and it will behave as expected(similar to 
how it behaves in other languages).



4. everyone who wants faster assert optimizations will have to 
rewrite their (possibly extensive) use of asserts that we'd 
told them was best practice. I know I'd be unhappy about having 
to do such to my D code.


This is a fair point, but it would happen the same either way. If 
someone used assert expecting it to act like the C assert and 
found out it did not any more, they would be forced to go through 
their code and change all the asserts. As this is a 
change, I am inclined to say we should favor the old version and 
not force users of the old version to update.


This is not about my feelings, other than my desire to find the 
best design based on a number of tradeoffs.


I was just trying to say that it is a possibility that should not 
be forgotten, I think I may have said it in a harsher way than I 
meant to, I apologize. The annotated assert was my attempt at a 
possible compromise.


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 15:36:52 UTC, Marc Schütz wrote:

This is an assume, not an assert.


Not sure what you mean. An assume is an assert proven (or 
defined) to hold. It can be a proven theorem which has been given 
the status of an axiom. It is known to keep the algebraic system 
sound. If you claim that something unsound is proven then all 
bets are off everywhere?


I am quite confident that assume(false) anywhere in your program 
is basically stating that the program is unsound (true==false) 
and should not be compiled and run at all. If true==false 
anywhere in your program then it surely holds everywhere in your 
program?


The only reason c-style assert(false) works, is because you 
delay the verification until the last moment, at which point the 
system says woopsie, gotta terminate this because this program 
should never have compiled in the first place.


Sounds consistent to me?


Re: assert semantic change proposal

2014-08-06 Thread David Bregman via Digitalmars-d

On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:

On 8/5/2014 11:28 PM, Tofu Ninja wrote:

Please stop responding in such a dismissive way, I think it is
already pretty obvious that some are getting frustrated by 
these

threads. Responding in a dismissive way makes it seem like you
don't take the arguments seriously.


I responded to the equivalent design proposal several times 
already, with detailed answers. This one is shorter, but the 
essential aspects are there. I know those negative aspects came 
across because they are addressed with your counter:


I don't think I've seen your arguments against adding assume(), 
last time this was discussed you were still claiming that there 
was no difference between the two, so we couldn't even get to 
discussing it!




 How about something like
 @expected assert(x > 2); or @assumed assert(x > 2);
 It wouldn't introduce a new keyword, but still introduces the
expected/assumed semantics.

The riposte:

1. it's long with an unappealing hackish look
2. it follows in the C++ tradition of the best practice being 
the long ugly way, and the deprecated practice is the 
straightforward way (see arrays in C++)
3. users will be faced with two kinds of asserts, with a subtle 
difference that is hard to explain, hard to remember which is 
which, and will most likely use inappropriately
4. everyone who wants faster assert optimizations will have to 
rewrite their (possibly extensive) use of asserts that we'd 
told them was best practice. I know I'd be unhappy about having 
to do such to my D code.




Nice, this is progress. You're giving us some actual reasoning! :)

Is this the full list of your reasons or just a subset?

1, 2:
Those don't apply to assume() without the annotations right? I 
agree that's hacky looking with annotations.


3:
If the separation is not made official, a lot of people will end 
up having to roll their own, potentially with all kinds of names. 
This is much more fragmenting than just having official assert 
and assume. myAssert, debugCheck, smurfAssert, etc. ugh.


As for the difference being subtle, hard to remember, understand, 
or explain: This totally supports my argument. If people don't 
even understand the subtlety, there's no way they are ready to be 
inserting undefined behavior everywhere in their code. assume (or 
your assert) is a dangerous power tool, joe coder probably 
shouldn't be using it at all. Yet you want it used everywhere by 
default?


4:
This is a valid argument. The flip side is that everyone else has 
to invent their own function and rewrite their code with it, 
unless they have 100% faith their code is not buggy, or don't 
care about undefined behavior (or don't know they care about it 
until they get bitten by it later). Or if they go for the quick 
fix by disabling -release, then you just pessimized their code 
instead of optimized it. Plus what about people who miss the 
change, or unmaintained code?


Isn't the default way of doing things to err on the side of not 
breaking people's code? Why is this time different? I know you 
hate more switches (for good reason), but this seems like a good 
case for -Oassert or -Ounsafe, for the elite few people whose 
code is actually suitable for such dangerous transformation.



I would like to note some additional benefits of separating the 
two concepts and hopefully get your thoughts on these. Some of 
these are repeating from previous posts.


-assert(0) is defined as a special case, so it can't express 
unreachability. This makes it less powerful for optimizing than 
it should be. assume(0) does not have this problem. e.g. 
switch(){ /* ... */ default: assume(0); }


-the proposed assert is not @safe, which creates complications 
since the current assert is. assume() does not have this problem, 
as it can be defined as @system from the start.


-with the functions separate, it can be made clear that assume() 
is a dangerous tool that should be used sparingly. It will mean 
that code is more searchable and easier to audit - if there is a 
heisenbug, try a search for @trusted and assume() constructs.


-retain C compatibility, which is supposed to be a goal of D.


Re: assert semantic change proposal

2014-08-06 Thread Matthias Bentrup via Digitalmars-d

On Wednesday, 6 August 2014 at 15:29:13 UTC, Ola Fosheim Grøstad
wrote:
On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
wrote:

is equivalent to

 while(running) {
   ...don't assign to running, don't break...
 }
 assume(!running);


You have to prove termination to get there? Plain Hoare logic 
cannot deal with non-terminating loops. Otherwise you risk 
proving any theorem below the loop to hold?




I'm pretty sure that running is false when the program reaches
the end of the while loop, whether it is assigned in the loop or
not. I only added the don't assign to running to make the
example match your pattern.


Re: assert semantic change proposal

2014-08-06 Thread Kagamin via Digitalmars-d

On Wednesday, 6 August 2014 at 00:50:07 UTC, Walter Bright wrote:

On 8/3/2014 4:24 PM, Martin Krejcirik wrote:
Couldn't this new assert behaviour be introduced as a new 
optimization
switch ? Say -Oassert ? It would be off by default and would 
work both

in debug and release mode.


It could, but every one of these:

1. doubles the time it takes to test dmd, it doesn't take many 
of these to render dmd untestable


2. adds confusion to most programmers as to what switch does 
what


3. adds complexity, i.e. bugs


If performance is not worth associated complexity, then it 
doesn't pull its weight.


4. interactions between optimization switches often exhibits 
emergent behavior - i.e. extremely hard to test for


Why do you think the emergent behavior is caused by interactions 
between switches? Do you think optimizations themselves don't 
interact? You said it's factorial; is that the number of 
permutations of switches or the number of permutations of 
optimizations? Switches should not be sensitive to permutations.


On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:

1. it's long with an unappealing hackish look


It's an established practice in D that unsafe features should 
look unappealing. Example: __gshared.


3. users will be faced with two kinds of asserts, with a subtle 
difference that is hard to explain, hard to remember which is 
which, and will most likely use inappropriately


Andrei already proposed `debug assert` - a safe flavor of the 
(dangerous by default) assert. So two kinds of assert are 
inevitable and should be documented, because the user should be 
informed about dangerous optimizations. But frankly, "the compiler 
will break your code" is not a subtle difference and can be 
easily explained in just five words.


And BTW, why must safe behavior be invoked with extra syntax? 
That goes against best D practices and hence is surprising, 
confusing and counterintuitive.


I'll sum up with the old saw that any programming problem can 
be solved with another level of indirection. I submit a 
corollary that any language issue can be solved by adding 
another keyword or compiler flag. The (much) harder thing is to 
solve a problem with an elegant solution that does not involve 
new keywords/flags, and fits in naturally.


So what is this elegant solution? To break code silently in the 
worst possible way and selectively and exactly at the most 
critical points?


Another downside of assert being dangerous - you can't add it to 
code freely. Every assert can order the compiler to break the 
code, so adding an assert becomes a difficult task, because the 
asserted expression (being assumed) begins to require a thorough, 
future-proof, environment-independent proof, which is very hard to 
come by: it needs to be an actual proof rather than just a guess, 
because the tradeoff is to be literally punished by death. 
Unittests don't provide such proof.


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 16:42:57 UTC, Matthias Bentrup 
wrote:

I'm pretty sure that running is false when the program reaches
the end of the while loop, whether it is assigned in the loop or
not. I only added the don't assign to running to make the
example match your pattern.


Well, if you don't assign to it in the loop, and it is known to 
terminate, then running is provably false before the loop for 
sure?


But then it isn't a loop… It is an empty statement: skip, NOP…


Re: assert semantic change proposal

2014-08-06 Thread Artur Skawina via Digitalmars-d

On 08/06/14 18:00, via Digitalmars-d wrote:
I am quite confident that assume(false) anywhere in your 
program is basically stating that the program is unsound 
(true==false) and should not be compiled and run at all.


No, an assume(false) in a program only means that every _path_
_leading_to_that_statement is 'unsound'. For practical purposes
it's better to treat 'unsound' as impossible and unreachable.

IOW

import std.stdio, std.array;
int main(string[] argv) {
   if (argv.length<2)
  assume(0);
   if (argv.length==1)
  writeln("help text");
   return argv.empty;
}

=

00403890 <_Dmain>:
   403890:   31 c0   xor    %eax,%eax
   403892:   c3  retq


The alternatives would be to make it either: a) always a compile
error, or b) always a runtime error. The former would add little
value (`static assert` already exists); the latter is already
available as `assert(0)`.
The above example after s/assume(0)/assert(0)/ becomes:

00403890 <_Dmain>:
   403890:   48 83 ff 01 cmp    $0x1,%rdi
   403894:   76 03   jbe    403899 <_Dmain+0x9>

   403896:   31 c0   xor    %eax,%eax
   403898:   c3  retq
   403899:   50  push   %rax
   40389a:   e8 71 e7 ff ff  callq  402010 <abort@plt>

IOW the compiler can still optimize based on the (un)reachability,
but the behavior is no longer undefined.

artur


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 16:59:18 UTC, Artur Skawina wrote:

No, an assume(false) in a program only means that every _path_
_leading_to_that_statement is 'unsound'. For practical purposes
it's better to treat 'unsound' as impossible and unreachable.


I don't think so. Because a program is only correct if all axioms 
and proposed theorems are proven true.


All axioms are by definition true, so if you assume(false==true) 
anywhere it has to hold everywhere. Neither false or true are 
variables.


Just like assume(PI==3.14…) has to hold everywhere. false, true 
and PI are constant throughout every path in the program.


Not really sure how it is possible to disagree with that?


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 16:00:33 UTC, Ola Fosheim Grøstad 
wrote:

On Wednesday, 6 August 2014 at 15:36:52 UTC, Marc Schütz wrote:

This is an assume, not an assert.


Not sure what you mean. An assume is an assert proven (or 
defined) to hold. It can be a proven theorem which has been 
given the status of an axiom. It is known to keep the algebraic 
system sound. If you claim that something unsound is proven 
then all bets are off everywhere?


I guess we're talking past each other. You were saying that Hoare 
logic doesn't work with non-terminating loops, and I was 
responding that there was no non-terminating loop in the example. 
That's all there is to it.


To clarify: The assume condition wasn't `false`, it was 
`!running`. There is a situation in which this is true, namely 
when `running` is false at the point where the assume is. Because 
the variable isn't changed in the loop, it follows that it's also 
true before the loop. I see no contradiction here.


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 17:03:44 UTC, Marc Schütz wrote:
I guess we're talking past each other. You were saying that 
Hoare logic doesn't work with non-terminating loops, and I was 
responding that there was no non-terminating loop in the 
example. That's all there is to it.


Oh well, but a terminating loop that does not change the 
condition is just an empty statement. So then you have basically 
proved that it isn't a loop… ;)


Re: assert semantic change proposal

2014-08-06 Thread Sean Kelly via Digitalmars-d
So from a functional perspective, assuming this is a 
multithreaded app, what should the procedure be when an 
AssertError is thrown in some thread?


Re: assert semantic change proposal

2014-08-06 Thread Matthias Bentrup via Digitalmars-d

On Wednesday, 6 August 2014 at 17:08:18 UTC, Ola Fosheim Grøstad
wrote:

On Wednesday, 6 August 2014 at 17:03:44 UTC, Marc Schütz wrote:
I guess we're talking past each other. You were saying that 
Hoare logic doesn't work with non-terminating loops, and I was 
responding that there was no non-terminating loop in the 
example. That's all there is to it.


Oh well, but a terminating loop that does not change the 
condition is just an empty statement. So then you have 
basically proved that it isn't a loop… ;)


I still can't see how you can infer that the assume(!running)
which clearly holds after the loop also holds before the loop.


Re: assert semantic change proposal

2014-08-06 Thread Artur Skawina via Digitalmars-d
On Wednesday, 6 August 2014 at 17:03:45 UTC, Ola Fosheim Grøstad 
wrote:
On Wednesday, 6 August 2014 at 16:59:18 UTC, Artur Skawina 
wrote:

No, an assume(false) in a program only means that every _path_
_leading_to_that_statement is 'unsound'. For practical purposes
it's better to treat 'unsound' as impossible and unreachable.


I don't think so. Because a program is only correct if all 
axioms and proposed theorems are proven true.


All axioms are by definition true, so if you 
assume(false==true) anywhere it has to hold everywhere. Neither 
false or true are variables.


Just like assume(PI==3.14…) has to hold everywhere. false, true 
and PI are constant throughout every path in the program.


Not really sure how it is possible to disagree with that?


These threads have proven that it's apparently possible to 
disagree even about the definition of words like "disagreement" 
and "definition"...

But, in a practical programming language definition context,
consider:

   if (0)
  assume(0);

Yes, `assume` could be defined in a way that would make this
always a compile error. But then `assume(0)` would be useless.
If it means 'unreachable' instead, then the behavior is still
consistent - it becomes a way to to tell the compiler: this
can never happen, optimize accordingly.

[gmail smtp servers seem to be broken right now; had to post this
 via the webform; probably won't be posting until the mailing list
 i/f starts working for me again]

artur


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 17:19:45 UTC, Matthias Bentrup 
wrote:

I still can't see how you can infer that the assume(!running)
which clearly holds after the loop also holds before the loop.


It holds if the loop does not change running, trivially?
But then you no longer have a loop.

Note also that you can implement an if construct from while this 
way:


while(B){
S;
B=false;
}

If that was the case, then you could not infer anything about B 
before the while.


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 17:03:45 UTC, Ola Fosheim Grøstad 
wrote:
On Wednesday, 6 August 2014 at 16:59:18 UTC, Artur Skawina 
wrote:

No, an assume(false) in a program only means that every _path_
_leading_to_that_statement is 'unsound'. For practical purposes
it's better to treat 'unsound' as impossible and unreachable.


I don't think so. Because a program is only correct if all 
axioms and proposed theorems are proven true.


All axioms are by definition true, so if you 
assume(false==true) anywhere it has to hold everywhere. Neither 
false or true are variables.


Just like assume(PI==3.14…) has to hold everywhere. false, true 
and PI are constant throughout every path in the program.


Not really sure how it is possible to disagree with that?


The crux is the axiom that is being defined. When you write 
`assume(condition)`, the axiom is not:


`condition` is true.

but:

When control flow reaches this point, then `condition` is 
true.


It's not an unconditionally true condition :-P


Re: assert semantic change proposal

2014-08-06 Thread David Bregman via Digitalmars-d

On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
So from a functional perspective, assuming this is a 
multithreaded app, what should the procedure be when an 
AssertError is thrown in some thread?


afaik, AssertError works the same as any other exception. If it 
reaches the top without being handled, then the entire app will 
be terminated including all the other threads.


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 17:59:06 UTC, Marc Schütz wrote:
The crux is the axiom that is being defined. When you write 
`assume(condition)`, the axiom is not:


`condition` is true.

but:

When control flow reaches this point, then `condition` is 
true.


It's not an unconditionally true condition :-P


Assume(X) does not provide a condition. It defines a 
proposition/relation to be true.


If you define a relation between two constants then it has to 
hold globally. Otherwise they are not constant…


Assume(P) defines that the proposition holds. If it does not 
involve variables, then it will be free to move anywhere by the 
rules of Hoare-logic (and propositional logic)? If 
assume(PI==3.14…) can move everywhere, by the rules, then so can 
assume(true==false).


You surely don't want to special case true==false in the hoare 
logic rules?


Please note that it is perfectly fine to define true==false, 
but you then have to live by it, that is, you no longer have a 
boolean algebra, you have an unsound algebra.  And that holds 
globally. So, please don't think that assume(true==false) is a 
test of a condition, it is just a definition that you impose on 
the equality relation (which also, incidentally, happens to make 
the algebra unsound).


If you special case assume(false) and assert(false) then you can 
no longer use deduction properly and it essentially becomes an 
aberration, syntactical sugar, that needs special casing 
everywhere.


It would be much better to just halt compilation if X==false can 
be evaluated at compile time for assume(X) or assert(X). 
Why let this buggy code live? If it failed at compile time then 
let the compilation fail! This would work better with CTFE.


@unreachable is an alternative if you want to be explicit about 
it and matches __builtin_unreachable et al.
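
For the compile-time-evaluable case D already has a construct that 
behaves exactly like "let the compilation fail"; a minimal sketch 
(bufferSize is only an illustrative name):

// Checked by the compiler; a false proposition stops the build.
enum bufferSize = 256;
static assert(bufferSize > 0, "bufferSize must be positive");

void main()
{
    // static assert(false, "unreachable"); // uncommenting halts compilation
}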


Re: assert semantic change proposal

2014-08-06 Thread Kagamin via Digitalmars-d

On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:
4. everyone who wants faster assert optimizations will have to 
rewrite their (possibly extensive) use of asserts that we'd 
told them was best practice. I know I'd be unhappy about having 
to do such to my D code.


Also having the same syntax for both kinds of assert makes it 
easier to try unsafe optimizations: the code can be written safe, 
then unsafe optimizations can be tried effortlessly and performance 
gains evaluated. In C one would only need to edit the macro to do 
this. Whether we want to allow such experiments is debatable, but 
I find it at least reasonable. One may also want the 
optimizations to propagate backwards for even more performance - 
this would be a different kind of optimization, which may or may 
not require yet another syntax.
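
A hedged sketch of that "edit one definition" idea in D, using a 
version flag instead of a C macro; check() is a hypothetical helper, 
not an existing API:

// One call syntax, two meanings, switched in a single place.
version (UnsafeAssume)
{
    // an optimizing compiler would be allowed to treat cond as true here
    void check(bool cond, string msg = "") {}
}
else
{
    void check(bool cond, string msg = "check failed")
    {
        assert(cond, msg);
    }
}

void main()
{
    int divisor = 4;
    check(divisor != 0);
    int q = 100 / divisor;
    assert(q == 25);
}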


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 18:51:05 UTC, Ola Fosheim Grøstad 
wrote:

On Wednesday, 6 August 2014 at 17:59:06 UTC, Marc Schütz wrote:
The crux is the axiom that is being defined. When you write 
`assume(condition)`, the axiom is not:


   `condition` is true.

but:

   When control flow reaches this point, then `condition` is 
true.


It's not an unconditionally true condition :-P


Assume(X) does not provide a condition. It defines a 
proposition/relation to be true.


Well, let's call it proposition then...



If you define a relation between two constants then it has to 
hold globally. Otherwise they are not constant…


Yes, but as I wrote above, we're not defining a relation between 
two constants (true == false), we define the following axiom:


When control flow reaches this point, then true == false.

Because true is not equal to false, it follows that the first 
part of the proposition cannot be true, meaning control flow will 
not reach this point.




Assume(P) defines that the proposition holds. If it does not 
involve variables, then it will be free to move anywhere by the 
rules of Hoare-logic (and propositional logic)? If 
assume(PI==3.14…) can move everywhere, by the rules, then so 
can assume(true==false).


See above. Of course we could define our `assume(P)` to define 
`P` as an axiom directly, but this would be much less useful, 
because it would have exactly the consequences you write about. 
And it would be a lot more consistent, too. Surely you wouldn't 
want `assume(x == 42)` to mean x is always and everywhere equal 
to 42, but just x is equal to 42 when control flow passes this 
point.




You surely don't want to special case true==false in the 
hoare logic rules?


Please note that it is perfectly fine to define true==false, 
but you then have to live by it, that is, you no longer have a 
boolean algebra, you have an unsound algebra.  And that holds 
globally. So, please don't think that assume(true==false) is a 
test of a condition, it is just a definition that you impose on 
the equality relation (which also, incidentally, happens to make 
the algebra unsound).


If you special case assume(false) and assert(false) then you 
can no longer use deduction properly and it essentially becomes 
an aberration, syntactical sugar, that needs special casing 
everywhere.


It would be much better to just halt compilation if X==false can 
be evaluated at compile time for assume(X) or assert(X). 
Why let this buggy code live? If it failed at compile time then 
let the compilation fail! This would work better with CTFE.


@unreachable is an alternative if you want to be explicit 
about it and matches __builtin_unreachable et al.


See above, no special treatment is necessary.


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 19:18:30 UTC, Marc Schütz wrote:
Assume(P) defines that the proposition holds. If it does not 
involve variables, then it will be free to move anywhere by 
the rules of Hoare-logic (and propositional logic)? If 
assume(PI==3.14…) can move everywhere, by the rules, then so 
can assume(true==false).


See above. Of course we could define our `assume(P)` to define 
`P` as an axiom directly, but this would be much less useful, 
because it would have exactly the consequences you write about. 
And it would be a lot more consistent, too. Surely you wouldn't 
want `assume(x == 42)` to mean x is always and everywhere 
equal to 42, but just x is equal to 42 when control flow 
passes this point.


This is unclear. What I wanted to write is: It would be more 
consistent to use the definition of `assume` that I propose.


Re: assert semantic change proposal

2014-08-06 Thread Sean Kelly via Digitalmars-d

On Wednesday, 6 August 2014 at 18:15:41 UTC, David Bregman wrote:

On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
So from a functional perspective, assuming this is a 
multithreaded app, what should the procedure be when an 
AssertError is thrown in some thread?


afaik, AssertError works the same as any other exception. If it 
reaches the top without being handled, then the entire app will 
be terminated including all the other threads.


Except that's not really how unhandled exceptions are treated in 
threads.  They're stored by the thread and (optionally) re-thrown 
in the context of whatever thread calls join, if any thread calls 
join at all.  The assumption at the time was that the joining 
thread cared about the state of the thread being joined and so 
re-throwing the exception was a way to communicate the error to 
the most interested party.


So if the main thread joins a thread that terminated with an 
unhandled AssertError, the error will be re-thrown in the context 
of the main thread, which will in turn join all remaining running 
threads and wait for them to complete.  This is the same behavior 
as if an AssertError is generated during normal processing of the 
main thread.  But in neither case is the application forcibly 
terminated when an assertion failure occurs.


So again, my question is twofold: what *should* happen, given 
this new treatment for assertions, and then *how* will we 
accomplish this?  A multithreaded process is really pretty much 
equivalent to a bunch of standalone processes with shared memory. 
 There is no facility for forcing a clean termination of another 
thread.  The best I can think of would be to assume that 
std.concurrency is being used for multithreading and sort things 
out similar to the existing LinkTerminated signaling, but looking 
for a reasonable solution within Druntime would be tricky.
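
A minimal sketch of the join behaviour described above, assuming the 
druntime semantics Sean describes (an unhandled Throwable in a thread 
is stored and re-thrown in whatever thread calls join):

import core.thread;

void main()
{
    auto worker = new Thread({
        assert(false, "worker invariant violated"); // AssertError escapes the thread
    });
    worker.start();

    try
    {
        worker.join(); // the stored AssertError is re-thrown here
    }
    catch (Throwable t)
    {
        // the joining thread, not the runtime, decides what happens next
        assert(t.msg == "worker invariant violated");
    }
}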


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d

On Wednesday, 6 August 2014 at 19:18:30 UTC, Marc Schütz wrote:

On Wednesday, 6 August 2014 at 18:51:05 UTC, Ola Fosheim


Yes, but as I wrote above, we're not defining a relation 
between two constants (true == false), we define the 
following axiom:


When control flow reaches this point, then true == false.

Because true is not equal to false, it follows that the first 
part of the proposition cannot be true, meaning control flow 
will not reach this point.


But you are defining an axiom, not evaluating, that is how the 
optimizer can use it for deduction. Think of how Prolog works.


See above. Of course we could define our `assume(P)` to define 
`P` as an axiom directly, but this would be much less useful, 
because it would have exactly the consequences you write about. 
And it would be a lot more consistent, too. Surely you wouldn't 
want `assume(x == 42)` to mean x is always and everywhere 
equal to 42, but just x is equal to 42 when control flow 
passes this point.


I think there is a misunderstanding here. If you have:

uint x=0;

then that invokes many axioms defined by the language:

uint x=0;
assume(x exists from here)
assume(any value assigned to x has to be in the range [0,0xFFFFFFFF])
assume(x==0)
assume(type(x)=='uint')
assume(…etc…)

So if you then do:

x=x+1
assert(x==1)

You have to move the assert upwards (3+ steps) and see if it is 
satisfied by any of the axioms you run into.


3: assert(x==0) //covered by axiom(x==0), x==1 is satisfied
2: assert(x+1-1==1-1)
1: assert(x+1==1)
x=x+1
assert(x==1)

Without the uint x=0; you would then have moved all the way up 
to the global scope (conceptually speaking) and not been able to 
find any axioms matching 'x'.


If you assume(x==0) on the other hand, then you don't need the 
other axioms at all. It is instantly satisfied. No need to prove 
anything.
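
The same chain can be checked mechanically with CTFE; a tiny sketch 
in which the enum initializer plays the role of the axiom 
assume(x==0):

// The substitution chain above, discharged at compile time.
enum uint x0 = 0;       // plays the role of assume(x==0)
enum uint x1 = x0 + 1;  // x = x + 1
static assert(x1 == 1); // assert(x==1) reduces to 0 + 1 == 1, which holds

void main() {}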



See above, no special treatment is necessary.


I don't follow. If you use assert(false) to signify anything 
special beyond requesting a proof or assume(false) for anything 
beyond defining an axiom, then you are special casing.


Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
Please also remember that they are all constants in math, but 
variables are constants that are automatically renamed for 
convenience…


uint x=0;
x=x+1;
x=x*2;

is…

uint x0=0;
x1=x0+1;
x2=x1*2;

perhaps that makes it more clear.



Re: assert semantic change proposal

2014-08-06 Thread via Digitalmars-d
On Wednesday, 6 August 2014 at 20:01:04 UTC, Ola Fosheim Grøstad 
wrote:
Please also remember that they are all constants in math, but 
variables are constants that are automatically renamed for 
convenience…


uint x=0;
x=x+1;
x=x*2;

is…

uint x0=0;
x1=x0+1;
x2=x1*2;

perhaps that makes it more clear.


As you may recall, this is also the common representation in 
compilers:


http://en.wikipedia.org/wiki/Static_single_assignment_form


Re: assert semantic change proposal

2014-08-06 Thread David Bregman via Digitalmars-d

On Wednesday, 6 August 2014 at 19:34:48 UTC, Sean Kelly wrote:
On Wednesday, 6 August 2014 at 18:15:41 UTC, David Bregman 
wrote:

On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
So from a functional perspective, assuming this is a 
multithreaded app, what should the procedure be when an 
AssertError is thrown in some thread?


afaik, AssertError works the same as any other exception. If 
it reaches the top without being handled, then the entire app 
will be terminated including all the other threads.


Except that's not really how unhandled exceptions are treated 
in threads.  They're stored by the thread and (optionally) 
re-thrown in the context of whatever thread calls join, if any 
thread calls join at all.  The assumption at the time was that 
the joining thread cared about the state of the thread being 
joined and so re-throwing the exception was a way to 
communicate the error to the most interested party.


So if the main thread joins a thread that terminated with an 
unhandled AssertError, the error will be re-thrown in the 
context of the main thread, which will in turn join all 
remaining running threads and wait for them to complete.  This 
is the same behavior as if an AssertError is generated during 
normal processing of the main thread.  But in neither case is 
the application forcibly terminated when an assertion failure 
occurs.


Ah, I see. Then I apologize for giving incorrect information. I 
am not that familiar with threads in D yet and I assumed 
unhandled exceptions worked like C++. But if you already knew the 
answer then why were you asking?


So again, my question is twofold: what *should* happen, given 
this new treatment for assertions, and then *how* will we 
accomplish this?  A multithreaded process is really pretty much 
equivalent to a bunch of standalone processes with shared 
memory.
 There is no facility for forcing a clean termination of 
another thread.  The best I can think of would be to assume 
that std.concurrency is being used for multithreading and sort 
things out similar to the existing LinkTerminated signaling, 
but looking for a reasonable solution within Druntime would be 
tricky.


I think there is some kind of misunderstanding here, I have not 
proposed any new treatment for assertions. Walter's proposal only 
affects release mode, so it's not related to exceptions.


Re: assert semantic change proposal

2014-08-06 Thread Sean Kelly via Digitalmars-d

On Wednesday, 6 August 2014 at 20:22:58 UTC, David Bregman wrote:

On Wednesday, 6 August 2014 at 19:34:48 UTC, Sean Kelly wrote:
On Wednesday, 6 August 2014 at 18:15:41 UTC, David Bregman 
wrote:

On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
So from a functional perspective, assuming this is a 
multithreaded app, what should the procedure be when an 
AssertError is thrown in some thread?


afaik, AssertError works the same as any other exception. If 
it reaches the top without being handled, then the entire app 
will be terminated including all the other threads.


Except that's not really how unhandled exceptions are treated 
in threads.  They're stored by the thread and (optionally) 
re-thrown in the context of whatever thread calls join, if any 
thread calls join at all.  The assumption at the time was that 
the joining thread cared about the state of the thread being 
joined and so re-throwing the exception was a way to 
communicate the error to the most interested party.


So if the main thread joins a thread that terminated with an 
unhandled AssertError, the error will be re-thrown in the 
context of the main thread, which will in turn join all 
remaining running threads and wait for them to complete.  This 
is the same behavior as if an AssertError is generated during 
normal processing of the main thread.  But in neither case is 
the application forcibly terminated when an assertion failure 
occurs.


Ah, I see. Then I apologize for giving incorrect information. I 
am not that familiar with threads in D yet and I assumed 
unhandled exceptions worked like C++. But if you already knew 
the answer then why were you asking?


It seems like this change in treatment of assertion failures 
might require a change in runtime behavior as well.  But I don't 
know exactly what people have in mind.  Also, assuming that a 
change is expected, depending on what the desired effect is, I'm 
not sure I'll know how to do it.



So again, my question is twofold: what *should* happen, given 
this new treatment for assertions, and then *how* will we 
accomplish this?  A multithreaded process is really pretty 
much equivalent to a bunch of standalone processes with shared 
memory.
There is no facility for forcing a clean termination of 
another thread.  The best I can think of would be to assume 
that std.concurrency is being used for multithreading and sort 
things out similar to the existing LinkTerminated signaling, 
but looking for a reasonable solution within Druntime would be 
tricky.


I think there is some kind of misunderstanding here, I have not 
proposed any new treatment for assertions. Walter's proposal 
only affects release mode, so it's not related to exceptions.


Well, it seems like the underlying idea is to treat assertion 
failures more strongly than we do now.  So changes in how these 
are handled by the runtime might be a side effect of this 
proposal.

