Re: Program logic bugs vs input/environmental errors

2014-11-19 Thread Bruno Medeiros via Digitalmars-d

On 09/11/2014 21:33, Walter Bright wrote:

On 11/7/2014 7:00 AM, Bruno Medeiros wrote:

Let me give an example:

double sqrt(double num) {
   assert(num >= 0);
   ...

With just this, then purely from a compiler/language viewpoint, if the
assert is triggered the *language* doesn't know if the whole program is
corrupted (formatting the hard disk, etc.), or if the fault is localized
there, and an error/exception can be thrown cleanly (clean in the sense
that other parts of the program are not corrupted).

So the language doesn't know, but the *programmer* can reason, for each
particular assert, about which domains/components of the program are
affected by that assertion failure. In the sqrt() case above, the
programmer can easily state that the math library that sqrt is part of is
not corrupted, and its state is not totally unknown (as in, it's not
deadlocked, nor is it formatting the hard disk!).


Making such an assumption presumes that the programmer knows the SOURCE
of the bug. He does not. The source could be a buffer overflow, a wild
pointer, any sort of corruption.



Very well then. But then we'll get to the point where enforce() will
become much more popular than assert to check for contract conditions.
assert() will be relegated to niche and rare situations where the program
can't really know how to continue/recover cleanly (memory corruption for
example).

That idiom is fine with me actually - but then the documentation for
assert should reflect that.


I created this thread because it is an extremely important topic. It has
come up again and again for my entire career.

There is no such thing as knowing in advance what caused a bug, and that
the bug is safe to continue from. If you know in advance what caused
it, then it becomes expected program behavior, and is not a bug.

assert() is for bug detection, detecting state that should have never
happened. By definition you cannot know it is safe, you cannot know
what caused it.

enforce() is for dealing with known, expected conditions.


As I mentioned before, it's not about knowing exactly what caused it, 
nor knowing for sure if it is safe (this is an imprecise term anyways, 
in this context).
It's about making an educated guess about what will provide a better 
user experience when an assertion is triggered: halting the program, or 
ignoring the bug and continuing the program (even if admittedly the 
program will be in a buggy state).


I've already mentioned several examples of situations where I think the 
latter is preferable.


Just to add another one, one that I recently came across while coding, 
was an assertion check that I put, which, if it were to fail, would 
only cause a redundant use of memory (but no NPEs or access violations 
or invalid state, etc.).


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Program logic bugs vs input/environmental errors

2014-11-19 Thread Walter Bright via Digitalmars-d

On 11/19/2014 3:59 AM, Bruno Medeiros wrote:

Just to add another one, one that I recently came across while coding, was an
assertion check that I put, which, if it were to fail, would only cause a
redundant use of memory (but no NPEs or access violations or invalid state, 
etc.).



If you're comfortable with that, then you should be using enforce(), not 
assert().
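
As a small, hypothetical D sketch of that distinction (only enforce from 
std.exception is assumed; the cache example and messages are invented), the 
same check reads as a fatal bug with assert but as a recoverable condition 
with enforce:

    import std.exception : enforce;

    // Hypothetical example: a memoization cache where recomputing a value
    // is merely wasteful, not a sign of corruption.
    void store(ref int[string] cache, string key, int value)
    {
        // As a bug check, a failure here would terminate the program:
        // assert(key !in cache, "value computed twice");

        // As an expected-but-undesirable condition, it can be caught
        // upstream and the program can continue, accepting the extra work:
        enforce(key !in cache, "value computed twice; extra memory used");
        cache[key] = value;
    }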


Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d

On Sunday, 9 November 2014 at 21:44:53 UTC, Walter Bright wrote:
Having assert() not throw Error would be a reasonable design 
choice.


What if you could turn assert() in libraries into enforce() using 
a compiler switch?


Servers should be able to record failure and free network 
resources/locks even on fatal failure.






Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d
On the other hand I guess HLT will signal SIGSEGV which can be 
caught using a signal handler, but then D should provide the 
OS-specific infrastructure for obtaining the necessary 
information before exiting.


Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread Walter Bright via Digitalmars-d
On 11/12/2014 11:40 AM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Sunday, 9 November 2014 at 21:44:53 UTC, Walter Bright wrote:

Having assert() not throw Error would be a reasonable design choice.


What if you could turn assert() in libraries into enforce() using a compiler
switch?


Forgive me for being snarky, but there are text editing utilities where one can:

   s/assert/enforce/

because if one can use a compiler switch, then one has the source which can be 
edited.


In any case, compiler switches should not change behavior like that. assert() 
and enforce() are completely different.


Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 20:40:45 UTC, Walter Bright 
wrote:
Forgive me for being snarky, but there are text editing 
utilities where one can:


Snarky is ok. :)

In any case, compiler switches should not change behavior like 
that. assert() and enforce() are completely different.


Well, but I don't understand how assert() can unwind the stack if 
everyone should assume that the stack might be trashed and 
therefore invalid?


In order to be consistent with your line of reasoning it 
should simply HLT, then a SIGSEGV handler should set up a 
preallocated stack, obtain the information and send it off to a 
logging service using pure system calls before terminating (or 
send it to the parent process).
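
A minimal sketch of that idea on POSIX might look like the following 
(untested, and deliberately skipping the preallocated-stack part via 
sigaltstack); the handler restricts itself to async-signal-safe calls and 
terminates without any unwinding:

    import core.stdc.signal : signal, SIGSEGV;
    import core.sys.posix.unistd : write, _exit;

    extern(C) void onFatalSignal(int) nothrow @nogc
    {
        // Only async-signal-safe primitives: no GC, no throwing, no unwinding.
        enum msg = "fatal signal caught, terminating\n";
        write(2, msg.ptr, msg.length);   // 2 = stderr
        _exit(1);                        // terminate, skipping all cleanup
    }

    void installFatalSignalHandler()
    {
        signal(SIGSEGV, &onFatalSignal);
    }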




Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 20:52:28 UTC, Ola Fosheim 
Grøstad wrote:
In order to be consistent with your line of reasoning it 
should simply HLT, then a SIGSEGV handler should set up a 
preallocated stack, obtain the information and send it off to a 
logging service using pure system calls before terminating (or 
send it to the parent process).


Btw, in C you should get SIGABRT on assert()


Re: Program logic bugs vs input/environmental errors

2014-11-11 Thread Kagamin via Digitalmars-d

On Saturday, 1 November 2014 at 16:42:31 UTC, Walter Bright wrote:

My ideas are what are implemented on airplanes.


For components, not for a system. Nobody said a word against 
component design, it's systems that people want to be able to 
design, yet you prohibit it.


I didn't originate these ideas, they come from the aviation 
industry.


You're original in claiming it is the only working solution, but the 
aviation industry proves error-resilient systems are possible and 
successful, even though you claim their design is unsound and 
unusable. Yet you praise them, acknowledging their success, which 
makes your claims ever so ironic.


Recall that I was employed as an engineer working on flight 
critical systems design for the 757.


This is how problem decomposition works: you don't need to 
understand the whole system to work on a component.


On Sunday, 2 November 2014 at 17:53:45 UTC, Walter Bright wrote:
Kernel mode code is the responsibility of the OS system, not 
the app.


Suddenly safety becomes not the top priority. If it can't always 
be the priority, there should be a choice of priorities, but you 
deny that choice. It's a matter of compliance with reality. 
Whatever way you design the language, can you change reality that 
way? I don't see why possibility of choice prevents anything.


Re: Program logic bugs vs input/environmental errors

2014-11-09 Thread Dicebot via Digitalmars-d

On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:

On 11/2/2014 3:44 PM, Dicebot wrote:
They have hardware protection against sharing memory between 
processes. It's a reasonable level of protection.

reasonable default - yes
reasonable level of protection in general - no


No language can help when that is the requirement.


Yes, because it is a property of the system architecture as a whole, 
which is exactly what I am speaking about.


It is absolutely different because of scale; having 1K of 
shared memory is very different from having 100Mb shared 
between processes including the stack and program code.


It is possible to have a minimal amount of shared mutable memory 
inside one process. There is nothing inherently blocking one from 
doing so, just as there is nothing inherently preventing one from 
screwing up the inter-process shared memory. Being different only 
because of scale is not really being different.


Kernel mode code is the responsibility of the OS system, not 
the app.


In some (many?) large scale server systems the OS is the app, or at 
least heavily integrated. Thinking about the app as a single 
independent user-space process is a bit... outdated.


Haha, I've used such a system (MSDOS) for many years. Switching 
to process protection was a huge advance. Sad that we're 
modernizing by reverting to such an awful programming 
environment.


What is a huge advance for user-land applications is a problem for 
server code. Have you ever heard the "OS is the problem, not the 
solution" slogan that is slowly becoming more popular in the 
high-load networking world?



It is all about system design.


It's about the probability of coupling and the level of that your 
system can stand. Process level protection is adequate for most things.


Again, I am fine with advocating it as a reasonable default. What 
frustrates me is intentionally making any other design harder than 
it should be by explicitly allowing normal cleanup to be skipped. 
This behaviour is easy to achieve by installing a custom assert 
handler (it could be a generic Error handler too) but impossible to 
bail out of when it is the default one.


Running normal cleanup code when the program is in an 
undefined, possibly corrupted, state can impede proper shutdown.


Preventing cleanup can be done with roughly one line of user code. 
Enabling it again is effectively impossible. With this decision you 
don't trade a safer default for a more dangerous default - you trade 
a configurable default for an unavoidable one.


To preserve the same safe defaults you could define all thrown Errors 
to result in a plain HLT / abort call, with the possibility to define 
a user handler that actually throws. That would have addressed all 
concerns nicely while still not making life harder for those who want 
cleanup.


Because of the above-mentioned, avoiding more corruption from cleanup 
does not sound to me like a strong enough benefit to force that on 
everyone.


I have considerable experience with what programs can do when 
continuing to run after a bug. This was on real mode DOS, which 
infamously does not seg fault on errors.


It's AWFUL. I've had quite enough of having to reboot the 
operating system after every failure, and even then that often 
wasn't enough because it might scramble the disk driver code so 
it won't even boot.


I don't argue against the necessity to terminate the program. I argue 
against the strict relation program == process, which is impractical 
and inflexible.


It is my duty to explain how to use the features of the 
language correctly, including how and why they work the way 
they do. The how, why, and best practices are not part of a 
language specification.


You can't just explain things to make them magically appropriate for 
the user's domain. I fully understand how you propose to design 
applications. Unfortunately, it is completely unacceptable in some 
cases and quite inconvenient in others. Right now your proposal is 
effectively "design applications like I do, or reimplement language / 
library routines yourself".



NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.
As I have already mentioned, it almost never can be truly reliable.


That's correct, but not a justification for making it less 
reliable.


It is justification for making it more configurable.

If D changes assert() to do unwinding, then D will become 
unusable for building reliable systems until I add in yet 
another form of assert() that does not.


My personal perfect design would be like this:

- Exceptions work as they do now
- Errors work the same way as exceptions but don't get caught by 
catch(Exception)
- assert does not throw Error but simply aborts the program 
(configurable with druntime callback)

- define die which is effectively assert(false)
- tests don't use assert

That would provide default behaviour similar to the one we currently 
have (with all the good things) but leave much more configurable 
choices for the system designer.
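
A rough sketch of the "druntime callback" in that proposal, assuming the 
assertHandler hook in core.exception (treat the exact signature as an 
assumption; untested):

    import core.exception : assertHandler;
    import core.stdc.stdio : fprintf, stderr;
    import core.stdc.stdlib : abort;

    shared static this()
    {
        // Route assertion failures to log-and-abort instead of throwing
        // AssertError: no stack unwinding, no cleanup, immediate halt.
        assertHandler = function(string file, size_t line, string msg) nothrow
        {
            fprintf(stderr, "assert failed at %.*s:%u: %.*s\n",
                    cast(int) file.length, file.ptr, cast(uint) line,
                    cast(int) msg.length, msg.ptr);
            abort();
        };
    }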


Some small chance of undefined behaviour vs 100% chance of 
resource leaks?


If the operating 

Re: Program logic bugs vs input/environmental errors

2014-11-09 Thread Dicebot via Digitalmars-d

On Friday, 7 November 2014 at 15:00:35 UTC, Bruno Medeiros wrote:
Very well then. But then we'll get to the point where enforce() 
will become much more popular than assert to check for contract 
conditions. assert() will be relegated to niche and rare 
situations where the program can't really know how to 
continue/recover cleanly (memory corruption for example).


That idiom is fine with me actually - but then the 
documentation for assert should reflect that.


This looks like the only practical solution to me right now - but it 
is complicated by the fact that assert is not the only Error in the 
library code.


Re: Program logic bugs vs input/environmental errors

2014-11-09 Thread Walter Bright via Digitalmars-d

On 11/7/2014 7:00 AM, Bruno Medeiros wrote:

Let me give an example:

double sqrt(double num) {
   assert(num >= 0);
   ...

With just this, then purely from a compiler/language viewpoint, if the assert is
triggered the *language* doesn't know if the whole program is corrupted
(formatting the hard disk, etc.), or if the fault is localized there, and an
error/exception can be thrown cleanly (clean in the sense that other parts of
the program are not corrupted).

So the language doesn't know, but the *programmer* can reason, for each particular
assert, about which domains/components of the program are affected by that
assertion failure. In the sqrt() case above, the programmer can easily
state that the math library that sqrt is part of is not corrupted, and its state
is not totally unknown (as in, it's not deadlocked, nor is it formatting the
hard disk!).


Making such an assumption presumes that the programmer knows the SOURCE of the 
bug. He does not. The source could be a buffer overflow, a wild pointer, any 
sort of corruption.




Very well then. But then we'll get to the point where enforce() will become much
more popular than assert to check for contract conditions. assert() will be
relegated to niche and rare situations where the program can't really know how to
continue/recover cleanly (memory corruption for example).

That idiom is fine with me actually - but then the documentation for assert
should reflect that.


I created this thread because it is an extremely important topic. It has come up 
again and again for my entire career.


There is no such thing as knowing in advance what caused a bug, and that the bug 
is safe to continue from. If you know in advance what caused it, then it 
becomes expected program behavior, and is not a bug.


assert() is for bug detection, detecting state that should have never happened. 
By definition you cannot know it is safe, you cannot know what caused it.


enforce() is for dealing with known, expected conditions.
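
In code, using the sqrt() example from earlier in the thread (a hedged 
sketch; the function names are invented and the enforce variant assumes 
std.exception):

    import std.exception : enforce;

    // Bug detection: a negative argument means the *caller* has a bug.
    // The failure is not meant to be caught and recovered from.
    double sqrtChecked(double num)
    {
        assert(num >= 0, "negative input indicates a caller bug");
        // ... computation elided ...
        return num;
    }

    // Known, expected condition: bad input is anticipated, reported as an
    // Exception, and the caller may catch it and carry on.
    double sqrtValidated(double num)
    {
        enforce(num >= 0, "input must be non-negative");
        // ... computation elided ...
        return num;
    }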


Re: Program logic bugs vs input/environmental errors

2014-11-09 Thread Walter Bright via Digitalmars-d

On 11/9/2014 1:12 PM, Dicebot wrote:

On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:

It is absolutely different because of scale; having 1K of shared memory is
very different from having 100Mb shared between processes including the stack
and program code.

It is possible to have a minimal amount of shared mutable memory inside one process.


D's type system tries to minimize it, but the generated code knows nothing at 
all about the difference between local and shared memory, and has no protection 
against crossing the boundary. Interprocess protection is done via the hardware.




There is nothing inherently blocking one from doing so, just as there is nothing
inherently preventing one from screwing up the inter-process shared memory. Being
different only because of scale is not really being different.


Sharing 1K of interprocess memory is one millionth of the vulnerability surface 
of a 1G multithreaded program.




What is a huge advance for user-land applications is a problem for server code.
Have you ever heard the "OS is the problem, not the solution" slogan that is
slowly becoming more popular in the high-load networking world?


No, but my focus is what D can provide, not what the OS can provide.



Preventing cleanup can be done with roughly one line of user code. Enabling it
again is effectively impossible. With this decision you don't trade a safer
default for a more dangerous default - you trade a configurable default for an
unavoidable one.

To preserve the same safe defaults you could define all thrown Errors to result
in a plain HLT / abort call, with the possibility to define a user handler that
actually throws. That would have addressed all concerns nicely while still not
making life harder for those who want cleanup.


There is already a cleanup solution - use enforce().



That's correct, but not a justification for making it less reliable.

It is justification for making it more configurable.


In general, some things shouldn't be configurable. For example, one cannot mix 
3rd party libraries when each one is trying to customize global behavior.




My personal perfect design would be like this:

- Exceptions work as they do now
- Errors work the same way as exceptions but don't get caught by 
catch(Exception)
- assert does not throw Error but simply aborts the program (configurable with
druntime callback)
- define die which is effectively assert(false)
- tests don't use assert


Having assert() not throw Error would be a reasonable design choice.



Re: Program logic bugs vs input/environmental errors

2014-11-09 Thread eles via Digitalmars-d

On Sunday, 9 November 2014 at 21:34:05 UTC, Walter Bright wrote:

On 11/7/2014 7:00 AM, Bruno Medeiros wrote:


assert() is for bug detection, detecting state that should have 
never happened. By definition you cannot know it is safe, you 
cannot know what caused it.


enforce() is for dealing with known, expected conditions.


This is clear. The missing piece is a way to make the compile 
enforce that use on the user.


Code review alone does not work.


Re: Program logic bugs vs input/environmental errors

2014-11-09 Thread eles via Digitalmars-d

On Sunday, 9 November 2014 at 21:59:19 UTC, eles wrote:

On Sunday, 9 November 2014 at 21:34:05 UTC, Walter Bright wrote:

On 11/7/2014 7:00 AM, Bruno Medeiros wrote:


assert() is for bug detection, detecting state that should 
have never happened. By definition you cannot know it is 
safe, you cannot know what caused it.


enforce() is for dealing with known, expected conditions.


This is clear. The missing piece is a way to make the compile 
enforce that use on the user.


Code review alone does not work.


This is clear. The missing piece is a way to make the compiler
enforce that separate use on the user.

Code review alone does not work.


Re: Program logic bugs vs input/environmental errors

2014-11-07 Thread Bruno Medeiros via Digitalmars-d

On 29/10/2014 21:22, Walter Bright wrote:

On 10/29/2014 5:37 AM, Bruno Medeiros wrote:

On 18/10/2014 18:40, Walter Bright wrote:

As I've said before, tripping an assert by definition means the program
has entered an unknown state. I don't believe it is possible for any
language to make guarantees beyond that point.


The guarantees (if any), would not be made by the language, but by the
programmer. The language cannot know if a program is totally broken and
undefined when an assert fails, but a programmer can, for each particular
assert, make some assumptions about which fault domains (like Sean put
it) can
be affected and which are not.


Assumptions are not guarantees.



Let me give an example:

double sqrt(double num) {
  assert(num >= 0);
  ...

With just this, then purely from a compiler/language viewpoint, if the 
assert is triggered the *language* doesn't know if the whole program is 
corrupted (formatting the hard disk, etc.), or if the fault is localized 
there, and an error/exception can be thrown cleanly (clean in the sense 
that other parts of the program are not corrupted).


So the language doesn't know, but the *programmer* can reason, for each 
particular assert, about which domains/components of the program are 
affected by that assertion failure. In the sqrt() case above, the 
programmer can easily state that the math library that sqrt is part of 
is not corrupted, and its state is not totally unknown (as in, it's not 
deadlocked, nor is it formatting the hard disk!). That being the case, 
sqrt() can be made to throw an exception, and then that assertion 
failure can be recovered cleanly.


Which leads to what you say next:


In any case, if the programmer knows that an assert error is restricted to
a particular domain, and is recoverable, and wants to recover from it,
use enforce(), not assert().



Very well then. But then we'll get to the point where enforce() will 
become much more popular than assert to check for contract conditions. 
assert() will be relegated to niche and rare situations where the 
program can't really know how to continue/recover cleanly (memory 
corruption for example).


That idiom is fine with me actually - but then the documentation for 
assert should reflect that.


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Program logic bugs vs input/environmental errors

2014-11-02 Thread Dicebot via Digitalmars-d
On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via 
Digitalmars-d wrote:
I never said component == process. All I said was that at the OS 
level, at least with current OSes, processes are the smallest unit 
that is decoupled from each other.


Which is exactly the statement I call wrong. With current OSes 
processes aren't decoupled units at all - it is all about feature 
set you stick to. Same with any other units.



If you go below that level of granularity, you have the possibility 
of shared memory being corrupted by one thread (or fibre, or whatever 
smaller than a process) affecting the other threads.


You already have that possibility at process level via shared 
process memory and kernel mode code. And you still don't have 
that possibility at thread/fiber level if you don't use mutable 
shared memory (or any global state in general). It is all about 
system design.


Pretty much only reliably decoupled units I can imagine are 
processes running in different restricted virtual machines (or, 
better, different physical machines). Everything else gives just 
certain level of expectations.


Walter has experience with certain types of systems where process 
is indeed most appropriate unit of granularity and calls that a 
silver bullet by explicitly designing language in a way that 
makes any other approach inherently complicated and 
effort-consuming. But there is more than that in software world.


Re: Program logic bugs vs input/environmental errors

2014-11-02 Thread Walter Bright via Digitalmars-d

On 11/2/2014 3:48 AM, Dicebot wrote:

On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via Digitalmars-d 
wrote:
Which is exactly the statement I call wrong. With current OSes processes aren't
decoupled units at all - it is all about feature set you stick to. Same with any
other units.


They have hardware protection against sharing memory between processes. It's a 
reasonable level of protection.




If you go below that level of
granularity, you have the possibility of shared memory being corrupted
by one thread (or fibre, or whatever smaller than a process) affecting
the other threads.

You already have that possibility at process level via shared process memory


1. very few processes use shared memory
2. those that do should regard it as input/environmental, and not trust it



and kernel mode code.


Kernel mode code is the responsibility of the OS system, not the app.



And you still don't have that possibility at thread/fiber
level if you don't use mutable shared memory (or any global state in general).


A buffer overflow will render all that protection useless.



It is all about system design.


It's about the probability of coupling and the level of that your system can 
stand. Process level protection is adequate for most things.




Pretty much only reliably decoupled units I can imagine are processes running in
different restricted virtual machines (or, better, different physical machines).
Everything else gives just certain level of expectations.


Everything is coupled at some level. Again, it's about the level of reliability 
needed.




Walter has experience with certain types of systems where process is indeed most
appropriate unit of granularity and calls that a silver bullet by explicitly
designing language


I design the language to do what it can. A language cannot compensate for 
coupling and bugs in the operating system, nor can a language compensate for two 
machines being plugged into the same power circuit.




in a way that makes any other approach inherently complicated
and effort-consuming.


Using enforce is neither complicated nor effort consuming.

The idea that asserts can be recovered from is fundamentally unsound, and makes 
D unusable for robust critical software. Asserts are for checking for 
programming bugs. A bug can be tripped because of a buffer overflow, memory 
corruption, a malicious code injection attack, etc.


NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.

Running arbitrary cleanup code at this point is literally undefined behavior. 
This is not a failure of language design - no language can offer any guarantees 
about this.


If you want code cleanup to happen, use enforce(). If you are using enforce() to 
detect programming bugs, well, that's your choice. enforce() isn't any more 
complicated or effort-consuming than using assert().





Re: Program logic bugs vs input/environmental errors

2014-11-02 Thread Dicebot via Digitalmars-d

On Sunday, 2 November 2014 at 17:53:45 UTC, Walter Bright wrote:

On 11/2/2014 3:48 AM, Dicebot wrote:
On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via 
Digitalmars-d wrote:
Which is exactly the statement I call wrong. With current OSes 
processes aren't decoupled units at all - it is all about feature set 
you stick to. Same with any other units.


They have hardware protection against sharing memory between 
processes. It's a reasonable level of protection.


reasonable default - yes
reasonable level of protection in general - no


1. very few processes use shared memory
2. those that do should regard it as input/environmental, and 
not trust it


This is no different from:

1. very few threads use shared
2. those that do should regard it as input/environmental


and kernel mode code.


Kernel mode code is the responsibility of the OS system, not 
the app.


In some (many?) large scale server systems the OS is the app, or at 
least heavily integrated. Thinking about the app as a single 
independent user-space process is a bit... outdated.



And you still don't have that possibility at thread/fiber
level if you don't use mutable shared memory (or any global 
state in general).


A buffer overflow will render all that protection useless.


Nice we have @safe and default thread-local memory!


It is all about system design.


It's about the probability of coupling and the level of that 
your system can stand. Process level protection is adequate for 
most things.


Again, I am fine with advocating it as a reasonable default. What 
frustrates me is intentionally making any other design harder than 
it should be by explicitly allowing normal cleanup to be skipped. 
This behaviour is easy to achieve by installing a custom assert 
handler (it could be a generic Error handler too) but impossible to 
bail out of when it is the default one.


Because of the above-mentioned, avoiding more corruption from cleanup 
does not sound to me like a strong enough benefit to force that on 
everyone. Ironically, in a system with decent fault protection and 
safety redundancy it won't even matter (everything it can possibly 
corrupt is duplicated and proof-checked anyway).


Walter has experience with certain types of systems where process is 
indeed most appropriate unit of granularity and calls that a silver 
bullet by explicitly designing language


I design the language to do what it can. A language cannot 
compensate for coupling and bugs in the operating system, nor 
can a language compensate for two machines being plugged into 
the same power circuit.


I don't expect you to do magic. My complaint is about making 
decisions that support designs you have great expertise with but 
hamper something different (but still very real) - decisions that 
are usually uncharacteristic of D (which I believe is a 
non-opinionated language) and don't really belong in a systems 
programming language.



in a way that makes any other approach inherently complicated
and effort-consuming.


Using enforce is neither complicated nor effort consuming.


If you want code cleanup to happen, use enforce(). If you are 
using enforce() to detect programming bugs, well, that's your 
choice. enforce() isn't any more complicated or 
effort-consuming than using assert().


I don't have another choice and I don't like it. It is 
effort-consuming because it requires a manually maintained exception 
hierarchy and style rules to keep errors different from exceptions - 
something that the language otherwise provides to you out of the 
box. And there is always that 3rd party library that is hard-coded 
to throw Error.


It is not something that I realistically expect to change in D 
and there are specific plans for working with it (thanks for 
helping with it btw!). Just mentioning it as one of few D design 
decisions I find rather broken conceptually.


The idea that asserts can be recovered from is fundamentally 
unsound, and makes D unusable for robust critical software.


Not "recovered" but "terminate a user-defined portion of the 
system".


Asserts are for checking for programming bugs. A bug can be 
tripped because of a buffer overflow, memory corruption, a 
malicious code injection attack, etc.


NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.


As I have already mentioned, it almost never can be truly reliable. 
You simply call one, higher, reliability chance good enough and 
another, lower, one disastrous. I don't agree this is the language's 
call to make, even if the decision is reasonable and fits 90% of 
cases.


This is really no different than GC usage in Phobos before the @nogc 
push. If a language decision may result in fundamental code base 
fragmentation (even for a relatively small portion of users), it is 
likely to be an overly opinionated decision.


Running arbitrary cleanup code at this point is literally 
undefined behavior. This is not a failure of language design - 
no language can offer any guarantees about this.


Some small chance of undefined behaviour vs 100% chance of 
resource 

Re: Program logic bugs vs input/environmental errors

2014-11-02 Thread Walter Bright via Digitalmars-d

On 11/2/2014 3:44 PM, Dicebot wrote:

They have hardware protection against sharing memory between processes. It's a
reasonable level of protection.

reasonable default - yes
reasonable level of protection in general - no


No language can help when that is the requirement.



1. very few processes use shared memory
2. those that do should regard it as input/environmental, and not trust it


This is no different from:

1. very few threads use shared
2. those that do should regard it as input/environmental


It is absolutely different because of scale; having 1K of shared memory is very 
different from having 100Mb shared between processes including the stack and 
program code.




and kernel mode code.


Kernel mode code is the responsibility of the OS system, not the app.


In some (many?) large scale server systems the OS is the app, or at least heavily
integrated. Thinking about the app as a single independent user-space process is
a bit... outdated.


Haha, I've used such a system (MSDOS) for many years. Switching to process 
protection was a huge advance. Sad that we're modernizing by reverting to such 
an awful programming environment.




And you still don't have that possibility at thread/fiber
level if you don't use mutable shared memory (or any global state in general).


A buffer overflow will render all that protection useless.


Nice we have @safe and default thread-local memory!


Assert is to catch program bugs that should never happen, not correctly 
functioning programs. Nor can D possibly guarantee that called C functions are safe.




It is all about system design.


It's about the probability of coupling and the level of that your system can
stand. Process level protection is adequate for most things.


Again, I am fine with advocating it as a reasonable default. What frustrates me
is intentionally making any other design harder than it should be by explicitly
allowing normal cleanup to be skipped. This behaviour is easy to achieve by
installing a custom assert handler (it could be a generic Error handler too) but
impossible to bail out of when it is the default one.


Running normal cleanup code when the program is in an undefined, possibly 
corrupted, state can impede proper shutdown.




Because of the above-mentioned, avoiding more corruption from cleanup does not
sound to me like a strong enough benefit to force that on everyone.


I have considerable experience with what programs can do when continuing to run 
after a bug. This was on real mode DOS, which infamously does not seg fault on 
errors.


It's AWFUL. I've had quite enough of having to reboot the operating system after 
every failure, and even then that often wasn't enough because it might scramble 
the disk driver code so it won't even boot.


I got into the habit of layering in asserts to stop the program when it went 
bad. "Do not pass go, do not collect $200" is the only strategy that has a hope 
of working under such systems.




I don't expect you to do magic. My complaint is about making decisions that
support designs you have great expertise with but hamper something different
(but still very real) - decisions that are usually uncharacteristic of D (which
I believe is a non-opinionated language) and don't really belong in a systems
programming language.


It is my duty to explain how to use the features of the language correctly, 
including how and why they work the way they do. The how, why, and best 
practices are not part of a language specification.




I don't have another choice and I don't like it. It is effort-consuming because
it requires a manually maintained exception hierarchy and style rules to keep
errors different from exceptions - something that the language otherwise
provides to you out of the box. And there is always that 3rd party library that
is hard-coded to throw Error.

It is not something that I realistically expect to change in D and there are
specific plans for working with it (thanks for helping with it btw!). Just
mentioning it as one of few D design decisions I find rather broken 
conceptually.


I hope to eventually change your mind about it being broken.



NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.

As I have already mentioned, it almost never can be truly reliable.


That's correct, but not a justification for making it less reliable.



You simply call one, higher, reliability chance good enough and another, lower,
one disastrous. I don't agree this is the language's call to make, even if the
decision is reasonable and fits 90% of cases.


If D changes assert() to do unwinding, then D will become unusable for building 
reliable systems until I add in yet another form of assert() that does not.




This is really no different than GC usage in Phobos before the @nogc push. If a
language decision may result in fundamental code base fragmentation (even for a
relatively small portion of users), it is likely to be an overly opinionated
decision.


The reason I initiated this thread is to point out the correct way to use 
assert() and to get that 

Re: Program logic bugs vs input/environmental errors

2014-11-02 Thread Sean Kelly via Digitalmars-d

On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:


I have considerable experience with what programs can do when 
continuing to run after a bug. This was on real mode DOS, which 
infamously does not seg fault on errors.


It's AWFUL. I've had quite enough of having to reboot the 
operating system after every failure, and even then that often 
wasn't enough because it might scramble the disk driver code so 
it won't even boot.


Yep.  Fun stuff.

http://research.microsoft.com/en-us/people/mickens/thenightwatch.pdf


I got into the habit of layering in asserts to stop the program 
when it went bad. "Do not pass go, do not collect $200" is the 
only strategy that has a hope of working under such systems.


The tough thing is that in D, contracts serve as this early 
warning system, but the errors this system catches are also 
sometimes benign and sometimes the programmer knows that they're 
benign, but if the person writing the test did so with an assert 
then the decision of how to respond to the error has been made 
for him.



It is my duty to explain how to use the features of the 
language correctly, including how and why they work the way 
they do. The how, why, and best practices are not part of a 
language specification.


I don't entirely agree.  The standard library serves as the how 
and best practices for a language, and while a programmer can go 
against this, it's often like swimming upstream.  For better or 
worse, we need to establish how parameters are validated and such 
in Phobos, and this will serve as the template for nearly all code 
written in D.


The core design goal of Druntime is to make the default behavior 
as safe as possible, but to allow the discerning user to override 
this behavior in certain key places.  I kind of see the entire D 
language like this--it has the power of C/C++ but the ease of use 
of a much higher level language.  We should strive for all 
aspects of the language and standard library to have the same 
basic behavior: a safe, efficient default but the ability to 
customize in key areas to meet individual needs.  The trick is 
determining what these key areas are and how much latitude the 
user should have.



If D changes assert() to do unwinding, then D will become 
unusable for building reliable systems until I add in yet 
another form of assert() that does not.


To be fair, assert currently does unwinding.  It always has.  The 
proposal is that it should not.



The reason I initiated this thread is to point out the correct 
way to use assert() and to get that into the culture of best 
practices for D. This is because if I don't, then in the vacuum 
people will tend to fill that vacuum with misunderstandings and 
misuse.


It is an extremely important topic.


I still feel like there's something really important here that 
we're all grasping at but it hasn't quite come to the fore yet.  
Along the lines of the idea that a @safe program may be able to 
recover from a logic error.  It seems like a uniquely D thing 
insofar as systems languages are concerned.



If the operating system can't handle resource recovery for a 
process terminating, it is an unusable operating system.


There are all kinds of resources, and not all of them are local 
to the system.  Everything will eventually recover though, it 
just won't happen immediately as is the case with resource 
cleanup within a process.


Re: Program logic bugs vs input/environmental errors

2014-11-02 Thread Walter Bright via Digitalmars-d

On 11/2/2014 8:54 PM, Sean Kelly wrote:

On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:

I got into the habit of layering in asserts to stop the program when it went
bad. "Do not pass go, do not collect $200" is the only strategy that has a
hope of working under such systems.

The tough thing is that in D, contracts serve as this early warning system, but
the errors this system catches are also sometimes benign


Whether it is benign is not known until it is debugged by a human.


and sometimes the
programmer knows that they're benign, but if the person writing the test did so
with an assert then the decision of how to respond to the error has been made
for him.


The person who wrote the assert decided that it was a non-recoverable programming 
bug. I deliberately wrote "bug", and not "error".




It is my duty to explain how to use the features of the language correctly,
including how and why they work the way they do. The how, why, and best
practices are not part of a language specification.

I don't entirely agree.  The standard library serves as the how and best
practices for a language, and while a programmer can go against this, it's often
like swimming upstream.  For better or worse, we need to establish how
parameters are validated and such in Phobos, and this will serve as the template
for nearly all code written in D.


Definitely Phobos should exhibit best practices. Whether bad function argument 
values are input/environmental errors or bugs is decidable only on a 
case-by-case basis. There is no overarching rule.


Input/environmental errors must not use assert to detect them.



To be fair, assert currently does unwinding.  It always has.  The proposal is
that it should not.


Not entirely - a function with only asserts in it is considered nothrow and 
callers may not have exception handlers for them.
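
A small sketch of that point (function names invented; behaviour as of 
current compilers is an assumption): a body containing only asserts still 
satisfies nothrow because AssertError is an Error, while the enforce-based 
body would not compile as nothrow.

    import std.exception : enforce;

    void checkWithAssert(int x) nothrow
    {
        assert(x >= 0);   // ok: AssertError is an Error, which nothrow permits
    }

    void checkWithEnforce(int x) nothrow
    {
        // Would not compile if uncommented: enforce may throw an Exception,
        // which a nothrow function must not let escape.
        // enforce(x >= 0, "x must be non-negative");
    }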




The reason I initiated this thread is to point out the correct way to use
assert() and to get that into the culture of best practices for D. This is
because if I don't, then in the vacuum people will tend to fill that vacuum
with misunderstandings and misuse.

It is an extremely important topic.


I still feel like there's something really important here that we're all
grasping at but it hasn't quite come to the fore yet. Along the lines of the
idea that a @safe program may be able to recover from a logic error.  It seems
like a uniquely D thing insofar as systems languages are concerned.


It's a false hope. D cannot offer any guarantees of recovery from programming 
bugs. Asserts, by definition, can never happen. So when they do, something is 
broken. Broken programs are not recoverable because one cannot know why they 
broke until they are debugged. As I mentioned to Dicebot, @safe only applies to 
the function's logic. D programs can call C functions. C functions are not safe. 
There can be compiler bugs. There can be other threads corrupting memory. There 
can be hardware failures, operating system bugs, etc., that result in 
tripping the assert.


If a programmer knows a bug is benign and wants to recover from it, D has a 
mechanism for it - enforce(). I do not understand the desire to bash assert() 
into behaving like enforce(). Just use enforce() in the first place.


The idea was brought up that one may be using a library that uses assert() to 
detect input/environmental errors. I do not understand using a library in a 
system that must be made robust, having the source code to the library, and not 
being allowed to change that source code to fix bugs in it. A robust application 
cannot be made using such a library - assert() misuse will not be the only 
problem with it.




If the operating system can't handle resource recovery for a process
terminating, it is an unusable operating system.

There are all kinds of resources, and not all of them are local to the system.
Everything will eventually recover though, it just won't happen immediately as
is the case with resource cleanup within a process.


I'd say that is a poor design for an operating system. Be that as it may, if you 
want to recover from assert()s, use enforce() instead.



There are other consequences from trying to make assert() recoverable:

1. functions with assert()s cannot be nothrow
2. assert()s cannot provide hints to the optimizer

Those are high prices to pay for a systems performance language.
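
A hedged illustration of consequence 2 (whether a given compiler actually 
exploits it is an assumption): once an assert is treated as unrecoverable, 
the checked condition can be assumed true afterwards, e.g. to elide a later 
bounds check in release builds.

    // Hypothetical example: the assert documents (and, to an optimizer that
    // trusts non-returning asserts, proves) that i is in range, so the
    // bounds check on a[i] could in principle be dropped in -release builds.
    int pick(const(int)[] a, size_t i) nothrow @safe
    {
        assert(i < a.length, "out-of-range index is a caller bug");
        return a[i];
    }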


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread Dicebot via Digitalmars-d
On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via 
Digitalmars-d wrote:

Again, you're using a different definition of component.


I see no justified reasoning why a process can be considered such a 
component and anything else cannot.


In practice it is completely dependent on system design as a 
whole and calling process a silver bullet only creates problems 
when it is in fact not.


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread Kagamin via Digitalmars-d
On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via 
Digitalmars-d wrote:

You're using a different definition of component.


System granularity is decided by the designer. You either allow 
people to design their systems or force your design on them; if you 
do both, you contradict yourself.


An inconsistency in a transaction is a problem with the input, 
not a problem with the program logic itself.


Distinction between failures doesn't matter. Reliable system 
manages any failures, especially unexpected and unforeseen ones, 
without diagnostic.



If something is wrong with the input, the program can detect it 
and recover by aborting the transaction (rollback the wrong data). 
But if something is wrong with the program logic itself (e.g., it 
committed the transaction instead of rolling back when it detected 
a problem) there is no way to recover within the program itself.


Not the case for an airplane: it recovers from any failure within 
itself. Another indication that the airplane example contradicts 
Walter's proposal. See my post about the big picture.


A failed component, OTOH, is a problem with program logic. You 
cannot recover from that within the program itself, since its own 
logic has been compromised. You *can* rollback the wrong changes 
made to data by that malfunctioning program, of course, but the 
rollback must be done by a decoupled entity outside of that 
program. Otherwise you might end up causing even more problems 
(for example, due to the compromised / malfunctioning logic, the 
program commits the data instead of reverting it, thus turning an 
intermittent problem into a permanent one).


No misunderstanding, I think that Walter's idea is good, just not 
always practical, and that real critical systems don't work the 
way he describes, they make more complicated tradeoffs.


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread Kagamin via Digitalmars-d
On Friday, 31 October 2014 at 21:06:49 UTC, H. S. Teoh via 
Digitalmars-d wrote:
This does not mean that process isolation is a silver bullet -- I 
never said any such thing.


But made it sound that way:

The only failsafe solution is to have multiple redundant 
processes, so when one process becomes inconsistent, you fallback 
to another process, *decoupled* process that is known to be good.


If you think a hacker rooted the server, how do you know other 
perfectly isolated processes are good? Not to mention you 
suggested to build a system from *communicating* processes, which 
doesn't sound like perfect isolation at all.


You don't shutdown the *entire* network unless all redundant 
components have failed.


If you have a hacker in your network, the network is compromised 
and is in an unknown state, why do you want the network to 
continue operation? You contradict yourself.


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread Kagamin via Digitalmars-d
On Wednesday, 29 October 2014 at 21:23:00 UTC, Walter Bright 
wrote:
In any case, if the programmer knows that an assert error is 
restricted to a particular domain, and is recoverable, and 
wants to recover from it, use enforce(), not assert().


But all that does is work around assert's behavior of ignoring 
cleanups.


Maybe, when it's known that a failure is not restricted, some 
different way of failure reporting should be used?


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread H. S. Teoh via Digitalmars-d
On Sat, Nov 01, 2014 at 10:52:31AM +, Kagamin via Digitalmars-d wrote:
 On Friday, 31 October 2014 at 21:06:49 UTC, H. S. Teoh via Digitalmars-d
 wrote:
 This does not mean that process isolation is a silver bullet -- I
 never said any such thing.
 
 But made it sound that way:

 The only failsafe solution is to have multiple redundant processes,
 so when one process becomes inconsistent, you fallback to another
 process, *decoupled* process that is known to be good.
 
 If you think a hacker rooted the server, how do you know other
 perfectly isolated processes are good? Not to mention you suggested to
 build a system from *communicating* processes, which doesn't sound
 like perfect isolation at all.

You're confusing the issue. Process-level isolation is for detecting
per-process faults. If you want to handle server-level faults, you need
external monitoring per server, so that when it detects a possible
exploit on one server, it shuts down the server and fails over to
another server known to be OK.

And I said decoupled, not isolated. Decoupled means they can still
communicate with each other, but with a known protocol that insulates
them from each other's faults. E.g. you don't send binary executable
code over the communication lines and the receiving process blindly runs
it, but you send data in a predefined format that is verified by the
receiving party before acting on it. I'm pretty sure this is obvious.
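
As a small, hypothetical D sketch of that rule (the message layout is 
invented; only std.exception is assumed): data arriving from another 
process is environmental input, so it is validated with enforce, which 
throws a catchable Exception, rather than asserted on.

    import std.exception : enforce;

    struct Header { ubyte ver; ubyte kind; ushort length; }

    // Parse and *verify* a message received from another process before
    // acting on it. A malformed message is expected input, not a bug, so
    // it is rejected with a recoverable Exception.
    Header parseHeader(const(ubyte)[] raw)
    {
        enforce(raw.length >= 4, "truncated header");
        Header h;
        h.ver    = raw[0];
        h.kind   = raw[1];
        h.length = cast(ushort)(raw[2] | (raw[3] << 8)); // little-endian
        enforce(h.ver == 1, "unsupported protocol version");
        enforce(h.length <= raw.length - 4, "length field exceeds payload");
        return h;
    }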


 You don't shutdown the *entire* network unless all redundant
 components have failed.
 
 If you have a hacker in your network, the network is compromised and
 is in an unknown state, why do you want the network to continue
 operation? You contradict yourself.

The only contradiction here is introduced by you. If one or two servers
on your network have been compromised, does that mean the *entire*
network is compromised? No it doesn't. It just means those one or two
servers have been compromised. So you have monitoring tools setup to
detect problems within the network and isolate the compromised servers.
If you are no longer sure the entire network is in a good state, e.g. if
your monitoring tools can't detect certain large-scale problems, then
sure, go ahead and shutdown the entire network. It depends on what
granularity you're operating at. A properly-designed reliable system
needs to have multiple levels of monitoring and failover. You have
process-level decoupling, server-level, network-level, etc.. You can't
just rely on a single level of granularity and expect it to solve
everything.


T

-- 
Leather is waterproof.  Ever see a cow with an umbrella?


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread H. S. Teoh via Digitalmars-d
On Sat, Nov 01, 2014 at 09:38:23AM +, Dicebot via Digitalmars-d wrote:
 On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via Digitalmars-d
 wrote:
 Again, you're using a different definition of component.
 
 I see no justified reasoning why a process can be considered such a
 component and anything else cannot.
 
 In practice it is completely dependent on system design as a whole and
 calling process a silver bullet only creates problems when it is in
 fact not.

I never said component == process. All I said was that at the OS
level, at least with current OSes, processes are the smallest unit
that is decoupled from each other. If you go below that level of
granularity, you have the possibility of shared memory being corrupted
by one thread (or fibre, or whatever smaller than a process) affecting
the other threads. So that means they are not fully decoupled, and the
failure of one thread makes all other threads no longer trustworthy.

Obviously, you can go up to larger units than just processes when
designing your system, as long as you can be sure they are decoupled
from each other.


T

-- 
No, John.  I want formats that are actually useful, rather than over-featured 
megaliths that address all questions by piling on ridiculous internal links in 
forms which are hideously over-complex. -- Simon St. Laurent on xml-dev


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread via Digitalmars-d
On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via 
Digitalmars-d wrote:
I never said component == process. All I said was that at 
the OS
level, at least with current OSes, processes are the smallest 
unit

that is decoupled from each other. If you go below that level of
granularity, you have the possibility of shared memory being 
corrupted
by one thread (or fibre, or whatever smaller than a process) 
affecting
the other threads. So that means they are not fully decoupled, 
and the
failure of one thread makes all other threads no longer 
trustworthy.


This is a question of probability and impact. If my Python 
program fails unexpectedly, then it could in theory be a bug in a 
c-library, but it probably is not. So it is better to trap it and 
continue.


If D provides bound checks, is a solid language, has a solid 
compiler, has a solid runtime, and solid libraries… then the same 
logic applies!


If my C program traps on division by zero, then it probably is an 
unlucky incident and not a memory corruption issue. So it is 
probably safe to continue.


If my program cannot find a file, it MIGHT be a kernel issue, but 
it probably isn't. So it is safe to continue.


If my critical state is recorded by a wall built on transactions 
or full blown event logging, then it is safe to continue even if 
my front might suffer from memory corruption.


You need to consider:

1. probability (what is the most likely cause of this signal?)

2. impact (do you have insurance?)

3. alternatives (are you in the middle of an air fight?)



Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread Walter Bright via Digitalmars-d

On 11/1/2014 3:35 AM, Kagamin wrote:

No misunderstanding, I think that Walter's idea is good, just not always
practical, and that real critical systems don't work the way he describes, they
make more complicated tradeoffs.


My ideas are what are implemented on airplanes. I didn't originate these ideas, 
they come from the aviation industry. Recall that I was employed as an engineer 
working on flight critical systems design for the 757.


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread Walter Bright via Digitalmars-d

On 11/1/2014 4:14 AM, Kagamin wrote:

On Wednesday, 29 October 2014 at 21:23:00 UTC, Walter Bright wrote:

In any case, if the programmer knows that an assert error is restricted to a
particular domain, and is recoverable, and wants to recover from it, use
enforce(), not assert().


But all that does is work around assert's behavior of ignoring cleanups.


It is not working around anything unless you're trying to use a screwdriver as 
a hammer. Cleanups are not appropriate after a program has entered an unknown state.





Maybe, when it's known that a failure is not restricted, some different way of
failure reporting should be used?


assert() and enforce() both work as designed.


Re: Program logic bugs vs input/environmental errors

2014-11-01 Thread Walter Bright via Digitalmars-d

On 10/10/2014 2:31 AM, Joseph Rushton Wakeling wrote:

I still think that was one of the single most important lessons in probability
that I ever had.


Research shows that humans, even trained statisticians, are spectacularly bad at 
intuitive probability.


-- Thinking, Fast and Slow by Daniel Kahneman


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread Kagamin via Digitalmars-d

On Thursday, 16 October 2014 at 19:53:42 UTC, Walter Bright wrote:

On 10/15/2014 12:19 AM, Kagamin wrote:
Sure, software is one part of an airplane, like a thread is a 
part of a process. When the part fails, you discard it and 
continue operation. In software it works by rolling back a failed 
transaction. An airplane has some tricks to recover from 
failures, but still it's a no fail design you argue against: it 
shuts down parts one by one when and only when they fail and 
continues operation no matter what until nothing works and even 
then it still doesn't fail, just does nothing. The airplane 
example works against your arguments.


This is a serious misunderstanding of what I'm talking about.

Again, on an airplane, no way in hell is a software system 
going to be allowed to continue operating after it has 
self-detected a bug. Trying to bend the imprecise language I 
use into meaning the opposite doesn't change that.


To better depict the big picture as I see it:

You suggest that a system should shut down as soon as possible at 
the first sign of a failure that can affect the system.


You provide the hospital-in-a-hurricane example. But you don't 
praise the hospitals which shut down on failure; you praise the 
hospital which continues to operate in the face of an unexpected and 
uncontrollable disaster, in total contradiction with your 
suggestion to shut down ASAP.


You refer to an airplane's ability to not shut down ASAP and to continue 
operation on unexpected failure as if it corresponded to your 
suggestion to shut down ASAP. This makes no sense; you contradict 
yourself.


Why didn't you praise the hospital shutdown? Why does nobody want 
airplanes to dive into the ocean on first suspicion? Because that's 
how unreliable systems work: they often stop working. Reliable 
systems work in a completely different way; they employ 
many tricks, but one big objective of those tricks is the 
ability to continue operation on failure. All the effort put into 
airplane design has one reason: to fight against immediate 
shutdown, which you defend as the only true way of operation. 
That is exactly the approach explicitly rejected by real reliable-systems 
design. How would an airplane without the tricks work? It would 
dive into the ocean on the first failure (and the crash investigation team 
diagnoses the failure) - exactly as you suggest. That's safe: it 
could fall on a city or a nuclear reactor. How does a real airplane 
work? A failure happens and it still flies, contrary to your 
suggestion to shut down on failure. That's how critical missions 
are done: they take the risk of a greater disaster to complete the 
mission, and failures can be diagnosed when appropriate.


That's why I think your examples contradict to your proposal.


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread Kagamin via Digitalmars-d
On Friday, 24 October 2014 at 18:47:59 UTC, H. S. Teoh via 
Digitalmars-d wrote:
Basically, if you want a component to recover from a serious 
problem
like a failed assertion, the recovery code should be in a 
*separate*

component. Otherwise, if the recovery code is within the failing
component, you have no way to know if the recovery code itself 
has been
compromised, and trusting that it will do the right thing is 
very
dangerous (and is what often leads to nasty security exploits). 
The
watcher must be separate from the watched, otherwise how can 
you trust

the watcher?


You make process isolation sound like a silver bullet, but a 
failure can happen on any scale, from a temporary variable to the 
global network. You can't use process isolation to contain a 
failure of larger-than-process scale, and it's overkill for 
a failure at temporary-variable scale.


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread H. S. Teoh via Digitalmars-d
On Fri, Oct 31, 2014 at 08:15:17PM +, Kagamin via Digitalmars-d wrote:
 On Thursday, 16 October 2014 at 19:53:42 UTC, Walter Bright wrote:
 On 10/15/2014 12:19 AM, Kagamin wrote:
 Sure, software is one part of an airplane, like a thread is a part
 of a process.  When the part fails, you discard it and continue
 operation. In software it works by rolling back a failed
 transaction. An airplane has some tricks to recover from failures,
 but still it's a no fail design you argue against: it shuts down
 parts one by one when and only when they fail and continues
 operation no matter what until nothing works and even then it still
 doesn't fail, just does nothing. The airplane example works against
 your arguments.
 
 This is a serious misunderstanding of what I'm talking about.
 
 Again, on an airplane, no way in hell is a software system going to
 be allowed to continue operating after it has self-detected a bug.
 Trying to bend the imprecise language I use into meaning the opposite
 doesn't change that.
 
 To better depict the big picture as I see it:
 
 You suggest that a system should shutdown as soon as possible on first
 sign of failure, which can affect the system.
 
 You provide the hospital in a hurricane example. But you don't praise
 the hospitals, which shutdown on failure, you praise the hospital,
 which continues to operate in face of an unexpected and uncontrollable
 disaster in total contradiction with your suggestion to shutdown ASAP.
 
 You refer to airplane's ability to not shutdown ASAP and continue
 operation on unexpected failure as if it corresponds to your
 suggestion to shutdown ASAP. This makes no sense, you contradict
 yourself.

You are misrepresenting Walter's position. His whole point was that once
a single component has detected a consistency problem within itself, it
can no longer be trusted to continue operating and therefore must be
shutdown. That, in turn, leads to the conclusion that your system design
must include multiple, redundant, independent modules that perform that
one function. *That* is the real answer to system reliability.

Pretending that a failed component can somehow fix itself is a fantasy.
The only way you can be sure you are not making the problem worse is by
having multiple redundant units that can perform each other's function.
Then when one of the units is known to be malfunctioning, you turn it
off and fallback to one of the other, known-to-be-good, components.


T

-- 
Error: Keyboard not attached. Press F1 to continue. -- Yoon Ha Lee, CONLANG


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread H. S. Teoh via Digitalmars-d
On Fri, Oct 31, 2014 at 08:23:04PM +, Kagamin via Digitalmars-d wrote:
 On Friday, 24 October 2014 at 18:47:59 UTC, H. S. Teoh via Digitalmars-d
 wrote:
 Basically, if you want a component to recover from a serious problem
 like a failed assertion, the recovery code should be in a *separate*
 component. Otherwise, if the recovery code is within the failing
 component, you have no way to know if the recovery code itself has
 been compromised, and trusting that it will do the right thing is
 very dangerous (and is what often leads to nasty security exploits).
 The watcher must be separate from the watched, otherwise how can you
 trust the watcher?
 
 You make process isolation sound like a silver bullet, but failure can
 happen on any scale from a temporary variable to global network. You
 can't use process isolation to contain a failure of a larger than
 process scale, and it's an overkill for a failure of a temporary
 variable scale.

You're missing the point. The point is that a reliable system made of
unreliable parts, can only be reliable if you have multiple *redundant*
copies of each component that are *decoupled* from each other.

The usual unit of isolation at the lowest level is that of a single
process, because threads within a process has full access to memory
shared by all threads. Therefore, they are not decoupled from each
other, and therefore, you cannot put any confidence in the correct
functioning of other threads once a single thread has become
inconsistent. The only failsafe solution is to have multiple redundant
processes, so when one process becomes inconsistent, you fallback to
another process, *decoupled* process that is known to be good.

This does not mean that process isolation is a silver bullet -- I
never said any such thing. The same reasoning applies to larger
components in the system as well. If you have a server that performs
function X, and the server begins to malfunction, you cannot expect the
server to fix itself -- because you don't know if a hacker hasn't rooted
the server and is running exploit code instead of your application. The
only 100% safe way to recover, is to have another redundant server (or
more) that also performs function X, shutdown the malfunctioning server
for investigation and repair, and in the meantime switch over to the
redundant server to continue operations. You don't shutdown the *entire*
network unless all redundant components have failed.

The reason you cannot go below the process level as a unit of redundancy
is because of coupling. The above design of failing over to a redundant
module only works if the modules are completely decoupled from each
other. Otherwise, you have end up with the situation where you have two
redundant modules M1 and M2, but both of them share a common helper
module M3. Then if M1 detects a problem, you cannot be 100% sure it's
not caused by a problem with M3, so in this case if you just switch to
M2, it will also fail in the same way. Similarly, you cannot guarantee
that while malfunctioning, M1 may have somehow damaged M3, and thereby
also making M2 unreliable. The only way to be 100% sure that failover
will actually fix the problem, is to make sure that M1 and M2 are
completely isolated from each other (e.g., by having two redundant
copies of M3 that are isolated from each other).

Since a single process is the unit of isolation in the OS, you can't go
below this granularity: as I've already said, if one thread is
malfunctioning, it may have trashed the data shared by all other threads
in the same process, and therefore none of the other threads can be
trusted to continue operating correctly. The only way to be 100% sure
that failover will actually fix the problem, is to switch over to
another process that you *know* is not coupled to the old,
malfunctioning process.

Attempting to have a process fix itself after detecting an
inconsistency is unreliable -- you're leaving it up to chance whether or
not the attempted recovery will actually work, and not make the problem
worse. You cannot guarantee the recovery code itself hasn't been
compromised by the failure -- because the recovery code exists in the
same process and is vulnerable to the same problem that caused the
original failure, and vulnerable to memory corruption caused by
malfunctioning code prior to the point the problem was detected.
Therefore, the recovery code is not trustworthy, and cannot be relied on
to continue operating correctly. That kind of maybe, maybe not
recovery is not what I'd want to put any trust in, especially when it
comes to critical applications that can cost lives if things go wrong.


T

-- 
English has the lovely word defenestrate, meaning to execute by
throwing someone out a window, or more recently to remove Windows from
a computer and replace it with something useful. :-) -- John Cowan
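
A minimal D sketch of that watcher/watched separation: a supervisor keeps the
worker in a separate process and fails over to a fresh one when the worker
dies. The "./worker" path and the backoff are illustrative, not from the thread.

    import std.process : spawnProcess, wait;
    import std.stdio : stderr;
    import core.thread : Thread;
    import core.time : seconds;

    void main()
    {
        for (;;)
        {
            auto pid = spawnProcess(["./worker"]);   // the watched component
            const status = wait(pid);                // blocks until it exits
            if (status == 0)
                break;                               // clean shutdown requested
            stderr.writefln("worker exited with status %s; starting a fresh one", status);
            Thread.sleep(1.seconds);                 // crude backoff before failover
        }
    }

The supervisor shares no memory with the worker, so a bug detected (and
aborted on) inside the worker cannot compromise the recovery logic.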


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread Kagamin via Digitalmars-d
On Friday, 31 October 2014 at 20:33:54 UTC, H. S. Teoh via 
Digitalmars-d wrote:
You are misrepresenting Walter's position. His whole point was 
that once
a single component has detected a consistency problem within 
itself, it
can no longer be trusted to continue operating and therefore 
must be
shutdown. That, in turn, leads to the conclusion that your 
system design
must include multiple, redundant, independent modules that 
perform that

one function. *That* is the real answer to system reliability.


In server software such a component is a transaction/request. They 
are independent.


Pretending that a failed component can somehow fix itself is a 
fantasy.


Traditionally a failed transaction is indeed rolled back. It's 
more a business logic requirement because a partially completed 
operation would confuse the user.


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread H. S. Teoh via Digitalmars-d
On Fri, Oct 31, 2014 at 09:11:53PM +, Kagamin via Digitalmars-d wrote:
 On Friday, 31 October 2014 at 20:33:54 UTC, H. S. Teoh via Digitalmars-d
 wrote:
 You are misrepresenting Walter's position. His whole point was that
 once a single component has detected a consistency problem within
 itself, it can no longer be trusted to continue operating and
 therefore must be shutdown. That, in turn, leads to the conclusion
 that your system design must include multiple, redundant, independent
 modules that perform that one function. *That* is the real answer to
 system reliability.
 
 In server software such component is a transaction/request. They are
 independent.

You're using a different definition of component. An inconsistency in
a transaction is a problem with the input, not a problem with the
program logic itself. If something is wrong with the input, the program
can detect it and recover by aborting the transaction (rollback the
wrong data). But if something is wrong with the program logic itself
(e.g., it committed the transaction instead of rolling back when it
detected a problem) there is no way to recover within the program
itself.


 Pretending that a failed component can somehow fix itself is a
 fantasy.
 
 Traditionally a failed transaction is indeed rolled back. It's more a
 business logic requirement because a partially completed operation
 would confuse the user.

Again, you're using a different definition of component.

A failed transaction is a problem with the data -- this is recoverable
to some extent (that's why we have the ACID requirement of databases,
for example). For this purpose, you vet the data before trusting that it
is correct. If the data verification fails, you reject the request. This
is why you should never use assert to verify data -- assert is for
checking the program's own consistency, not for checking the validity of
data that came from outside.

A failed component, OTOH, is a problem with program logic. You cannot
recover from that within the program itself, since its own logic has
been compromised. You *can* rollback the wrong changes made to data by
that malfunctioning program, of course, but the rollback must be done by
a decoupled entity outside of that program. Otherwise you might end up
causing even more problems (for example, due to the compromised /
malfunctioning logic, the program commits the data instead of reverting
it, thus turning an intermittent problem into a permanent one).


T

-- 
By understanding a machine-oriented language, the programmer will tend to use a 
much more efficient method; it is much closer to reality. -- D. Knuth


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread Walter Bright via Digitalmars-d

On 10/31/2014 2:31 PM, H. S. Teoh via Digitalmars-d wrote:

On Fri, Oct 31, 2014 at 09:11:53PM +, Kagamin via Digitalmars-d wrote:

On Friday, 31 October 2014 at 20:33:54 UTC, H. S. Teoh via Digitalmars-d
wrote:

You are misrepresenting Walter's position. His whole point was that
once a single component has detected a consistency problem within
itself, it can no longer be trusted to continue operating and
therefore must be shutdown. That, in turn, leads to the conclusion
that your system design must include multiple, redundant, independent
modules that perform that one function. *That* is the real answer to
system reliability.


In server software such component is a transaction/request. They are
independent.


You're using a different definition of component. An inconsistency in
a transaction is a problem with the input, not a problem with the
program logic itself. If something is wrong with the input, the program
can detect it and recover by aborting the transaction (rollback the
wrong data). But if something is wrong with the program logic itself
(e.g., it committed the transaction instead of rolling back when it
detected a problem) there is no way to recover within the program
itself.



Pretending that a failed component can somehow fix itself is a
fantasy.


Traditionally a failed transaction is indeed rolled back. It's more a
business logic requirement because a partially completed operation
would confuse the user.


Again, you're using a different definition of component.

A failed transaction is a problem with the data -- this is recoverable
to some extent (that's why we have the ACID requirement of databases,
for example). For this purpose, you vet the data before trusting that it
is correct. If the data verification fails, you reject the request. This
is why you should never use assert to verify data -- assert is for
checking the program's own consistency, not for checking the validity of
data that came from outside.

A failed component, OTOH, is a problem with program logic. You cannot
recover from that within the program itself, since its own logic has
been compromised. You *can* rollback the wrong changes made to data by
that malfunctioning program, of course, but the rollback must be done by
a decoupled entity outside of that program. Otherwise you might end up
causing even more problems (for example, due to the compromised /
malfunctioning logic, the program commits the data instead of reverting
it, thus turning an intermittent problem into a permanent one).


This is a good summation of the situation.



Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread via Digitalmars-d
On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via 
Digitalmars-d wrote:
You're using a different definition of component. An 
inconsistency in
a transaction is a problem with the input, not a problem with 
the
program logic itself. If something is wrong with the input, the 
program
can detect it and recover by aborting the transaction (rollback 
the

wrong data).


Transactions roll back when there is contention for resources 
and/or when you have any kind of integrity issue. That's why you 
have retries… so no, it is not only something wrong with the 
input. Something is temporarily wrong with the situation overall.


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread Walter Bright via Digitalmars-d
On 10/31/2014 5:38 PM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

Transactions roll back when there is contention for resources and/or when you
have any kind of integrity issue. That's why you have retries… so no, it is not
only something wrong with the input. Something is temporarily wrong with the
situation overall.


Those are environmental errors, not programming bugs, and asserting for those 
conditions is the wrong approach.
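
A minimal D sketch of treating those as environmental errors: the transaction
is retried when an Exception signals contention or an integrity conflict,
while Errors (tripped asserts) are deliberately not caught, because they
indicate a bug. The delegate, retry count, and backoff are illustrative.

    import core.thread : Thread;
    import core.time : msecs;

    T withRetry(T)(T delegate() transaction, size_t attempts = 3)
    {
        foreach (i; 0 .. attempts)
        {
            try
            {
                return transaction();      // commit on success
            }
            catch (Exception e)            // environmental: rolled back, try again
            {
                if (i + 1 == attempts)
                    throw e;
                Thread.sleep(10.msecs);
            }
            // Errors are left alone here: a bug is not a retryable condition.
        }
        assert(0);
    }

A caller would wrap only the transactional part, e.g.
withRetry(() => commitOrder(order)), where commitOrder is a hypothetical
function of the application.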


Re: Program logic bugs vs input/environmental errors

2014-10-31 Thread via Digitalmars-d

On Saturday, 1 November 2014 at 03:39:02 UTC, Walter Bright wrote:
Those are environmental errors, not programming bugs, and 
asserting for those conditions is the wrong approach.


The point is this: what happens in the transaction engine 
matters, what happens outside of it does not matter much.


Asserts do not belong in release code at all...


Re: Program logic bugs vs input/environmental errors

2014-10-30 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-29 22:22, Walter Bright wrote:


Assumptions are not guarantees.

In any case, if the programmer knows that an assert error is restricted to
a particular domain, and is recoverable, and wants to recover from it,
use enforce(), not assert().


I really don't like enforce. It encourages the use of plain Exception 
instead of a subclass.


--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-29 Thread Bruno Medeiros via Digitalmars-d

On 18/10/2014 18:40, Walter Bright wrote:

As I've said before, tripping an assert by definition means the program
has entered an unknown state. I don't believe it is possible for any
language to make guarantees beyond that point.


The guarantees (if any) would not be made by the language, but by the 
programmer. The language cannot know if a program is totally broken and 
undefined when an assert fails, but a programmer can, for each 
particular assert, make some assumptions about which "fault domains" (as 
Sean put it) can be affected and which are not.



--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Program logic bugs vs input/environmental errors

2014-10-29 Thread Walter Bright via Digitalmars-d

On 10/29/2014 5:37 AM, Bruno Medeiros wrote:

On 18/10/2014 18:40, Walter Bright wrote:

As I've said before, tripping an assert by definition means the program
has entered an unknown state. I don't believe it is possible for any
language to make guarantees beyond that point.


The guarantees (if any), would not be made by the language, but by the
programmer. The language cannot know if a program is totally broken and
undefined when an assert fails, but a programmer can, for each particular
assert, make some assumptions about which fault domains (like Sean put it) can
be affected and which are not.


Assumptions are not guarantees.

In any case, if the programmer knows that an assert error is restricted to a 
particular domain, and is recoverable, and wants to recover from it, use 
enforce(), not assert().




Re: Program logic bugs vs input/environmental errors

2014-10-29 Thread Walter Bright via Digitalmars-d

On 10/27/2014 1:54 PM, Sean Kelly wrote:

On Friday, 24 October 2014 at 19:09:23 UTC, Walter Bright wrote:


You can insert your own handler with core.assertHandler(myAssertHandler). Or
you can catch(Error). But you don't want to try doing anything more than
notification with that - the program is in an unknown state.


Also be aware that if you throw an Exception from the assertHandler you could be
violating nothrow guarantees.


Right.


Re: Program logic bugs vs input/environmental errors

2014-10-27 Thread eles via Digitalmars-d

On Tuesday, 14 October 2014 at 15:57:05 UTC, eles wrote:

On Saturday, 4 October 2014 at 05:26:52 UTC, eles wrote:
On Friday, 3 October 2014 at 20:31:42 UTC, Paolo Invernizzi 
wrote:

On Friday, 3 October 2014 at 18:00:58 UTC, Piotrek wrote:

On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:




For the curious, the flight analysis here:

http://www.popularmechanics.com/technology/aviation/crashes/what-really-happened-aboard-air-france-447-6611877



A just-printed new analysis of the same:

http://www.vanityfair.com/business/2014/10/air-france-flight-447-crash


And the movie of it:

http://www.almdares.net/vz/youtube_browser.php?do=showvidid=TsgyBqlFixo


Re: Program logic bugs vs input/environmental errors

2014-10-27 Thread Sean Kelly via Digitalmars-d

On Friday, 24 October 2014 at 19:09:23 UTC, Walter Bright wrote:


You can insert your own handler with 
core.assertHandler(myAssertHandler). Or you can catch(Error). 
But you don't want to try doing anything more than notification 
with that - the program is in an unknown state.


Also be aware that if you throw an Exception from the 
assertHandler you could be violating nothrow guarantees.


Re: Program logic bugs vs input/environmental errors

2014-10-24 Thread Ary Borenszweig via Digitalmars-d

On 9/27/14, 8:15 PM, Walter Bright wrote:

This issue comes up over and over, in various guises. I feel like
Yosemite Sam here:

 https://www.youtube.com/watch?v=hBhlQgvHmQ0

In that vein, Exceptions are for either being able to recover from
input/environmental errors, or report them to the user of the application.

When I say They are NOT for debugging programs, I mean they are NOT
for debugging programs.

assert()s and contracts are for debugging programs.


Here's another +1 for exceptions.

I want to add a slash command to Slack 
(https://slack.zendesk.com/hc/en-us/articles/201259356-Slash-Commands). 
So, for example, when I say:


/bot random phrase

This hits a web server that processes that request and returns a random 
phrase.


Now, imagine I have an assert in my application. When the web server 
hits the assertion it shuts down and the user doesn't get a response. 
What I'd like to do is to trap that assertion, tell the user that 
there's a problem, and send me an email telling me to debug it and fix 
it. That way the user can continue using the bot and I meanwhile I can 
fix the bug.


In the real world where you don't want unhappy users, asserts don't work.

Walter: how can you do that with an assertion triggering?


Re: Program logic bugs vs input/environmental errors

2014-10-24 Thread H. S. Teoh via Digitalmars-d
On Fri, Oct 24, 2014 at 03:29:43PM -0300, Ary Borenszweig via Digitalmars-d 
wrote:
 On 9/27/14, 8:15 PM, Walter Bright wrote:
 This issue comes up over and over, in various guises. I feel like
 Yosemite Sam here:
 
  https://www.youtube.com/watch?v=hBhlQgvHmQ0
 
 In that vein, Exceptions are for either being able to recover from
 input/environmental errors, or report them to the user of the application.
 
 When I say They are NOT for debugging programs, I mean they are NOT
 for debugging programs.
 
 assert()s and contracts are for debugging programs.
 
 Here's another +1 for exceptions.
 
 I want to add a a slash command to Slack
 (https://slack.zendesk.com/hc/en-us/articles/201259356-Slash-Commands).
 So, for example, when I say:
 
 /bot random phrase
 
 This hits a web server that processes that request and returns a random
 phrase.
 
 Now, imagine I have an assert in my application. When the web server
 hits the assertion it shuts down and the user doesn't get a response.
 What I'd like to do is to trap that assertion, tell the user that
 there's a problem, and send me an email telling me to debug it and fix
 it. That way the user can continue using the bot and I meanwhile I can
 fix the bug.
 
 In the real world where you don't want unhappy users, asserts don't
 work.
 
 Walter: how can you do that with an assertion triggering?

Sure they do.

Your application should be running in a separate process from the
webserver itself. The webserver receives a request and forwards it to
the application process. The application process processes the request
and sends the response back to the webserver, which forwards it back on
the client socket. Meanwhile, the webserver also monitors the
application process; if it crashes before producing a response, it steps
in and sends a HTTP 500 response to the client instead. It can also
email you about the bug, possibly with the stack trace of the crashed
application process, etc..  (And before you complain about inefficiency,
there *are* ways of eliminating copying overhead when forwarding
requests/responses between the client and the application.)

But if the webserver itself triggers an assertion, then it should NOT
attempt to send anything back to the client, because the assertion may
be indicating some kind of memory corruption or security exploit
attempt. You don't know if you might be accidentally sending sensitive
personal data (e.g. password for another user) back to the wrong client,
because your data structures got scrambled and the wrong data is
associated with the wrong client now.

Basically, if you want a component to recover from a serious problem
like a failed assertion, the recovery code should be in a *separate*
component. Otherwise, if the recovery code is within the failing
component, you have no way to know if the recovery code itself has been
compromised, and trusting that it will do the right thing is very
dangerous (and is what often leads to nasty security exploits). The
watcher must be separate from the watched, otherwise how can you trust
the watcher?


T

-- 
Why ask rhetorical questions? -- JC


Re: Program logic bugs vs input/environmental errors

2014-10-24 Thread Walter Bright via Digitalmars-d

On 10/24/2014 11:29 AM, Ary Borenszweig wrote:

On 9/27/14, 8:15 PM, Walter Bright wrote:
Now, imagine I have an assert in my application. When the web server hits the
assertion it shuts down and the user doesn't get a response. What I'd like to do
is to trap that assertion, tell the user that there's a problem, and send me an
email telling me to debug it and fix it. That way the user can continue using
the bot and I meanwhile I can fix the bug.


Don't need an exception for that.

You can insert your own handler with core.assertHandler(myAssertHandler). Or you 
can catch(Error). But you don't want to try doing anything more than 
notification with that - the program is in an unknown state.
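
A minimal sketch of that handler approach, assuming druntime's
core.exception.assertHandler property (older releases expose a
setAssertHandler function instead). It only logs (notification) and then
still raises the AssertError, so the program does not keep running in an
unknown state; the handler name and message format are illustrative.

    import core.exception : AssertError, assertHandler;
    import std.stdio : stderr;

    void myAssertHandler(string file, size_t line, string msg) nothrow
    {
        try { stderr.writefln("assert tripped at %s(%s): %s", file, line, msg); }
        catch (Exception) {}                     // logging must not violate nothrow
        throw new AssertError(msg, file, line);  // notification only; still terminate
    }

    void main()
    {
        assertHandler = &myAssertHandler;        // install before anything can trip
        // ... rest of the program ...
    }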




Re: Program logic bugs vs input/environmental errors

2014-10-23 Thread rst256 via Digitalmars-d

In the extreme case, if you want to change anything, or as a way to shut 
the problem down quickly: add file and line info to the debug-symbol 
generation algorithm.

- Walter?


Re: Program logic bugs vs input/environmental errors

2014-10-22 Thread rst256 via Digitalmars-d

On Tuesday, 21 October 2014 at 03:25:55 UTC, rst256 wrote:
In this post I forgot to correct the machine translation.
I am so sorry!

On Monday, 20 October 2014 at 20:36:58 UTC, eles wrote:
On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
wrote:

On 10/18/2014 8:21 AM, Jacob Carlborg wrote:

On 2014-10-18 07:09, Walter Bright wrote:

Which means they'll be program bugs, not environmental 
errors.


Yes, but just because I made a mistake in using a function 
(hitting an assert)

doesn't mean I want to have undefined behavior.



As I've said before, tripping an assert by definition means 
the program has entered an unknown state. I don't believe it 
is possible for any language to make guarantees beyond that 
point.


What about using the contracts of a function to optimize? They 
are mainly asserts, after all.

this(errnoEnforce(.fopen(name, stdioOpenmode),
        text("Cannot open file `", name, "' in mode `",
             stdioOpenmode, "'")),
     name);
Making a couple of class instances without knowing whether they 
are necessary at all - the performance did not cry.
And why do you have all types of cars? Is your compiler really 
that good?



What about using the contracts of a function to optimize? They

It's linking time.

is possible for any language to make guarantees beyond that

Of course not - I will explain later, in 2-3 hours. Sorry, business.
Off topic:
string noexist_file_name = "bag_file_global";
{
    writefln("-- begin scope: after");
    auto fobj = File(noexist_file_name);
    scope(failure) writefln("test1.failure");
    scope(exit) writefln("test1.exit");
}

std.exception.ErrnoException@std\stdio.d(362): Cannot open file 
`bag_...

---
5 lines with only a memory addr
0x7C81077 in RegisterWaitForInputIdle
I think you need to stop after the first error message;
see exception.d: the constructor of class ErrnoException : Exception





Re: Program logic bugs vs input/environmental errors

2014-10-22 Thread eles via Digitalmars-d

On Tuesday, 21 October 2014 at 03:25:55 UTC, rst256 wrote:

On Monday, 20 October 2014 at 20:36:58 UTC, eles wrote:
On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
wrote:

On 10/18/2014 8:21 AM, Jacob Carlborg wrote:

On 2014-10-18 07:09, Walter Bright wrote:



It's linking time.


No, it's not. At least if you move them into the generated .di 
files along with the function prototypes. Basically, you would pull 
more source from .d files into .di files.


Re: Program logic bugs vs input/environmental errors

2014-10-22 Thread w0rp via Digitalmars-d

On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright wrote:

On 10/18/2014 8:21 AM, Jacob Carlborg wrote:

On 2014-10-18 07:09, Walter Bright wrote:


Which means they'll be program bugs, not environmental errors.


Yes, but just because I made a mistake in using a function 
(hitting an assert)

doesn't mean I want to have undefined behavior.



As I've said before, tripping an assert by definition means the 
program has entered an unknown state. I don't believe it is 
possible for any language to make guarantees beyond that point.


Now, if it is a known unknown state, and you want to recover, 
the solution is straightforward - use enforce(). enforce() 
offers the guarantees you're asking for.


Using assert() when you mean enforce() is like pulling the fire 
alarm but not wanting the fire dept. to show up.


I agree with you on this. I've only ever used assert() for 
expressing "this should never happen". There's a difference 
between "this might happen if the environment goes wrong", which 
is like a tire being popped on a car, and "this should never 
happen", which is like a car turning into King Kong and flying 
away. My most common assert in D is typically assert(x !is null), 
for demanding that objects are initialised.


Re: Program logic bugs vs input/environmental errors

2014-10-22 Thread eles via Digitalmars-d

On Wednesday, 22 October 2014 at 15:05:58 UTC, w0rp wrote:
On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
wrote:

On 10/18/2014 8:21 AM, Jacob Carlborg wrote:

On 2014-10-18 07:09, Walter Bright wrote:



never happen, which is like a car turning into King Kong and


It depends on the environment:

http://i.dailymail.co.uk/i/pix/2011/03/27/article-1370559-0B499D2E0578-931_634x470.jpg


Re: Program logic bugs vs input/environmental errors

2014-10-22 Thread rst256 via Digitalmars-d

https://en.wikipedia.org/wiki/Code_smell
Have you read this?

Yes, phobos std.stdio is very smelly code. Disagree?
107 KB of source code only to call a few functions from stdio.
Are you sure that this code is fully correct?
"silly rabbit, y should be positive" - maybe because it is used
through a class like this?

The funny thing is that this code could easily be written by a
machine, based on a few rules and a header.

Say, what do you think about this code model:
File(output){
    ...
    some file operation
    ...
}else{
    ...
    optional error handling, if defined
    ...
}
...

This is not a class instance, it is a macro definition. And output
is not a file, it is an external resource. It may be bound to a
command-line arg (-output), a static file, or a GUI (just set the
assign property in a form designer). It may even come from the
network.
That is cool; about this I can say: take it to the bank. A GUI on
macros like this can be created very quickly.

File is declared as a subject, not a class (e.g. in JSON format);
you still build it from existing functions.
Example:
File = {
    export: [ export list ],
    blocks: {
        main: {},
        else: {}
    },
}
Understand?
See https://en.wikipedia.org/wiki/Subject-oriented_programming
and how this is done in JavaScript: https://github.com/jonjamz/amethyst



Re: Program logic bugs vs input/environmental errors

2014-10-20 Thread rst256 via Digitalmars-d

On Saturday, 18 October 2014 at 17:55:04 UTC, Walter Bright wrote:

On 10/18/2014 9:01 AM, Sean Kelly wrote:

So you consider the library interface to be user input?


The library designer has to make that decision, not the 
language.



What about calls that are used internally but also exposed as 
part of the library interface?


The library designer has to make a decision about who's job it 
is to validate the parameters to an interface, and thereby 
deciding what are input/environmental errors and what are 
programming bugs.


Avoiding making this decision means the API is underspecified, 
and we all know how poorly that works.


Consider:

/**
 * foo() does magical things.
 * Parameters:
 *   x   a value that must be greater than 0 and less than 8
 *   y   a positive integer
 * Throws:
 *   Exception if y is negative
 * Returns:
 *   magic value
 */
int foo(int x, int y)
in { assert(x > 0 && x < 8); }
body {
   enforce(y >= 0, "silly rabbit, y should be positive");
   ... return ...;
}


Contract Programming. A contract rider lists all the API conditions 
required for this item. In that case it is a list of exception handlers 
on the client side.

The epic example from the first post might look like this:
in { rider(except, "this class may throw an IO exception, you must 
define ...") }


Re: Program logic bugs vs input/environmental errors

2014-10-20 Thread eles via Digitalmars-d

On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright wrote:

On 10/18/2014 8:21 AM, Jacob Carlborg wrote:

On 2014-10-18 07:09, Walter Bright wrote:


Which means they'll be program bugs, not environmental errors.


Yes, but just because I made a mistake in using a function 
(hitting an assert)

doesn't mean I want to have undefined behavior.



As I've said before, tripping an assert by definition means the 
program has entered an unknown state. I don't believe it is 
possible for any language to make guarantees beyond that point.


What about using the contracts of a function to optimize? They 
are mainly asserts, after all.


Re: Program logic bugs vs input/environmental errors

2014-10-20 Thread Timon Gehr via Digitalmars-d

On 10/18/2014 07:40 PM, Walter Bright wrote:


As I've said before, tripping an assert by definition means the program
has entered an unknown state. I don't believe it is possible for any
language to make guarantees beyond that point.


What about the guarantee that your compiler didn't _intentionally_ screw 
them completely?


Re: Program logic bugs vs input/environmental errors

2014-10-20 Thread Walter Bright via Digitalmars-d

On 10/20/2014 1:54 PM, Timon Gehr wrote:

On 10/18/2014 07:40 PM, Walter Bright wrote:


As I've said before, tripping an assert by definition means the program
has entered an unknown state. I don't believe it is possible for any
language to make guarantees beyond that point.


What about the guarantee that your compiler didn't _intentionally_ screw them
completely?


What does that mean?


Re: Program logic bugs vs input/environmental errors

2014-10-20 Thread rst256 via Digitalmars-d

On Monday, 20 October 2014 at 20:36:58 UTC, eles wrote:
On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
wrote:

On 10/18/2014 8:21 AM, Jacob Carlborg wrote:

On 2014-10-18 07:09, Walter Bright wrote:

Which means they'll be program bugs, not environmental 
errors.


Yes, but just because I made a mistake in using a function 
(hitting an assert)

doesn't mean I want to have undefined behavior.



As I've said before, tripping an assert by definition means 
the program has entered an unknown state. I don't believe it 
is possible for any language to make guarantees beyond that 
point.


What about using the contracts of a function to optimize? They 
are mainly asserts, after all.

this(errnoEnforce(.fopen(name, stdioOpenmode),
        text("Cannot open file `", name, "' in mode `",
             stdioOpenmode, "'")),
     name);
Making a couple of class instances without knowing whether they are 
necessary at all - the performance did not cry.
And why do you have all types of cars? Is your compiler really 
that good?



What about using the contracts of a function to optimize? They

It's linking time.

is possible for any language to make guarantees beyond that

Of course not - I will explain later, in 2-3 hours. Sorry, business.
Off topic:
string noexist_file_name = "bag_file_global";
{
    writefln("-- begin scope: after");
    auto fobj = File(noexist_file_name);
    scope(failure) writefln("test1.failure");
    scope(exit) writefln("test1.exit");
}

std.exception.ErrnoException@std\stdio.d(362): Cannot open file 
`bag_...

---
5 lines with only a memory addr
0x7C81077 in RegisterWaitForInputIdle
I think you need to stop after the first error message;
see exception.d: the constructor of class ErrnoException : Exception


Re: Program logic bugs vs input/environmental errors

2014-10-18 Thread via Digitalmars-d

On Saturday, 18 October 2014 at 05:22:54 UTC, Walter Bright wrote:

2. If (1) cannot be done, then write the unittests like:

  {
    openfile();
    scope (exit) closefile();
    scope (failure) assert(0);
    ... use enforce() instead of assert() ...
  }

3. In a script that compiles/runs the unittests, have the 
script delete any extraneous generated files.


This is bad, it means:

- I risk having my filesystem ruined by running unit-tests 
through the compiler.

- The test environment changes between runs.

Built-in unit tests should have no side effects.

Something along these lines would be a better setup:

1. Load a filesystem from a read-only file to a virtual driver.
2. Run a special initializer for unit tests to set up the 
in-memory test environment.

3. Create N forks (N=number of cores):
4. Fork the filesystem/program before running a single unit test.
5. Mount the virtual filesystem (from 1)
6. Run the unit test
7. Collect result from child process and print result.
8. goto 4

But just banning writing to resources would be more suitable. D 
unit tests are only suitable for testing simple library code 
anyway.
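
A minimal POSIX-only D sketch of the fork-per-test idea from the list above:
each test runs in its own child process, so a tripped assert (or a trashed
environment) in one test cannot poison the parent or the next test. The test
array and the reporting format are illustrative.

    import core.sys.posix.sys.wait : waitpid, WIFEXITED, WEXITSTATUS;
    import core.sys.posix.unistd : fork, _exit;
    import std.stdio : writefln;

    void runIsolated(void function()[] tests)
    {
        foreach (i, test; tests)
        {
            const pid = fork();
            if (pid == 0)                              // child: exactly one test
            {
                try { test(); _exit(0); }
                catch (Throwable) { _exit(1); }        // any failure, incl. AssertError
            }
            int status;
            waitpid(pid, &status, 0);                  // parent: collect the verdict
            const ok = WIFEXITED(status) && WEXITSTATUS(status) == 0;
            writefln("test %s: %s", i, ok ? "passed" : "FAILED");
        }
    }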


Re: Program logic bugs vs input/environmental errors

2014-10-18 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-18 06:36, Walter Bright wrote:


This particular subthread is about unittests.


That doesn't make the problem go away.

--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-18 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-18 07:09, Walter Bright wrote:


Which means they'll be program bugs, not environmental errors.


Yes, but just because I made a mistake in using a function (hitting an 
assert) doesn't mean I want to have undefined behavior.


--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-18 Thread Sean Kelly via Digitalmars-d

On Saturday, 18 October 2014 at 05:10:20 UTC, Walter Bright wrote:


I understand that some have to work with poorly written 
libraries that incorrectly use assert. If that's the only issue 
with those libraries, you're probably lucky :-) Short term, I 
suggest editing the code of those libraries, and pressuring the 
authors of them. Longer term, we need to establish a culture of 
using assert/enforce correctly.


So you consider the library interface to be user input?  What 
about calls that are used internally but also exposed as part of 
the library interface?


Re: Program logic bugs vs input/environmental errors

2014-10-18 Thread Walter Bright via Digitalmars-d

On 10/18/2014 8:21 AM, Jacob Carlborg wrote:

On 2014-10-18 07:09, Walter Bright wrote:


Which means they'll be program bugs, not environmental errors.


Yes, but just because I made a mistake in using a function (hitting an assert)
doesn't mean I want to have undefined behavior.



As I've said before, tripping an assert by definition means the program has 
entered an unknown state. I don't believe it is possible for any language to 
make guarantees beyond that point.


Now, if it is a known unknown state, and you want to recover, the solution is 
straightforward - use enforce(). enforce() offers the guarantees you're asking for.


Using assert() when you mean enforce() is like pulling the fire alarm but not 
wanting the fire dept. to show up.


Re: Program logic bugs vs input/environmental errors

2014-10-18 Thread Walter Bright via Digitalmars-d

On 10/18/2014 9:01 AM, Sean Kelly wrote:

So you consider the library interface to be user input?


The library designer has to make that decision, not the language.



What about calls that are used internally but also exposed as part of the 
library interface?


The library designer has to make a decision about whose job it is to validate 
the parameters to an interface, thereby deciding what are 
input/environmental errors and what are programming bugs.


Avoiding making this decision means the API is underspecified, and we all know 
how poorly that works.


Consider:

/**
 * foo() does magical things.
 * Parameters:
 *   x   a value that must be greater than 0 and less than 8
 *   y   a positive integer
 * Throws:
 *   Exception if y is negative
 * Returns:
 *   magic value
 */
int foo(int x, int y)
in { assert(x > 0 && x < 8); }
body {
   enforce(y >= 0, "silly rabbit, y should be positive");
   ... return ...;
}
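
A usage sketch of the foo() above: the y check is input validation the caller
may recover from (it throws an ordinary Exception), while the x check is an
in-contract, so violating it is a caller bug that trips an assert in
non-release builds rather than throwing something meant to be caught.
collectException is from std.exception; the demo function is illustrative.

    import std.exception : collectException;

    void demo()
    {
        auto e = collectException(foo(3, -1));  // y invalid: recoverable Exception
        assert(e !is null);

        // foo(42, 1);  // x out of range: violates the in-contract, i.e. a caller bug
    }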




Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Atila Neves via Digitalmars-d
No, right now one can affect the way tests are run simply by 
replacing the runner with a custom one, and it will work for any 
number of modules compiled in. The beauty of the `unittest` block 
approach is that it is simply a bunch of functions that are 
somewhat easy to discover from the combined sources of the 
program - a custom runner can do pretty much anything with those. 
Or it could, if not for the issue with AssertError and cleanup.


Is cleaning up in a unittest build a problem? I'd say no; if the 
tests fail it doesn't make much sense to clean up unless it 
affects the reporting of the failures.


I catch assertion errors in unit-threaded exactly to support the 
standard unittest blocks, and I can't see why I'd care about 
clean-up. At least in practice it hasn't been an issue, although 
to be fair I haven't used that functionality (using 
unit-threaded to run unittest blocks) a lot.


Atila


Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Kagamin via Digitalmars-d

On Thursday, 16 October 2014 at 19:53:42 UTC, Walter Bright wrote:

On 10/15/2014 12:19 AM, Kagamin wrote:
Sure, software is one part of an airplane, like a thread is a 
part of a process.
When the part fails, you discard it and continue operation. In 
software it works
by rolling back a failed transaction. An airplane has some 
tricks to recover
from failures, but still it's a no fail design you argue 
against: it shuts
down parts one by one when and only when they fail and 
continues operation no
matter what until nothing works and even then it still doesn't 
fail, just does

nothing. The airplane example works against your arguments.


This is a serious misunderstanding of what I'm talking about.

Again, on an airplane, no way in hell is a software system 
going to be allowed to continue operating after it has 
self-detected a bug.


Neither does a failed transaction. I already covered that: 
"When the part fails, you discard it and continue operation. In 
software it works by rolling back a failed transaction."


Trying to bend the imprecise language I use into meaning the 
opposite doesn't change that.


Do you think I question that? I don't. I agree discarding a 
failed part is OK, and this is what traditional multithreaded 
server software already does: roll back a failed transaction and 
continue operation, just like the airplane: losing a part doesn't 
lose the whole.


Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-16 21:35, Walter Bright wrote:


Ok, but why would 3rd party library unittests be a concern? They
shouldn't have shipped it if their own unittests fail - that's the whole
point of having unittests.


They will have asserts in contracts and in other parts of the code that are 
not unit tests.


--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-17 10:26, Atila Neves wrote:


Is cleaning up in a unittest build a problem? I'd say no, if it the
tests fail it doesn't make much sense to clean up unless it affects the
reporting of tests failing.


I have used files in some of my unit tests. I would certainly like those 
to be properly closed if a test failed (for whatever reason). Now, some 
of you will argue that one shouldn't use files in unit tests. But that 
would only work in an ideal and perfect world, which we don't live in.


--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-16 20:50, Walter Bright wrote:


I don't understand why unittests in druntime/phobos are an issue for
users. We don't release a DMD unless they all pass - it should be moot
for users.


There are asserts elsewhere in the code.

--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-16 21:31, Walter Bright wrote:


Contract errors in Phobos/Druntime should be limited to having passed it
invalid arguments, which should be documented


That doesn't mean it won't happen.

--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Walter Bright via Digitalmars-d

On 10/17/2014 9:05 AM, Jacob Carlborg wrote:

On 2014-10-16 21:35, Walter Bright wrote:


Ok, but why would 3rd party library unittests be a concern? They
shouldn't have shipped it if their own unittests fail - that's the whole
point of having unittests.


They will have asserts in contracts and other parts of that code that is not
unit tests.


This particular subthread is about unittests.



Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Walter Bright via Digitalmars-d

On 10/17/2014 9:13 AM, Jacob Carlborg wrote:

On 2014-10-16 21:31, Walter Bright wrote:


Contract errors in Phobos/Druntime should be limited to having passed it
invalid arguments, which should be documented


That doesn't mean it won't happen.


Which means they'll be program bugs, not environmental errors.

It is of great value to distinguish between program bugs and input/environmental 
errors, and to treat them entirely differently. It makes code easier to 
understand, more robust, and better/faster code can be generated.


Using asserts to detect input/environmental errors is a bad practice - something 
like enforce() should be used instead.


I understand that some have to work with poorly written libraries that 
incorrectly use assert. If that's the only issue with those libraries, you're 
probably lucky :-) Short term, I suggest editing the code of those libraries, 
and pressuring the authors of them. Longer term, we need to establish a culture 
of using assert/enforce correctly.


This is not as pie-in-the-sky as it sounds. Over the years, a lot of formerly 
popular bad practices in C and C++ have been relentlessly driven out of 
existence by getting the influential members of the communities to endorse and 
advocate proper best practices.


--

I do my best to practice what I preach. In the DMD source code, an assert 
tripping always, by definition, means it's a compiler bug. It is never used to 
signal errors in code being compiled or environmental errors. If a badly formed 
.d file causes dmd to assert, it is always a BUG in dmd.


Re: Program logic bugs vs input/environmental errors

2014-10-17 Thread Walter Bright via Digitalmars-d

On 10/17/2014 9:10 AM, Jacob Carlborg wrote:

On 2014-10-17 10:26, Atila Neves wrote:


Is cleaning up in a unittest build a problem? I'd say no, if it the
tests fail it doesn't make much sense to clean up unless it affects the
reporting of tests failing.


I have used files in some of my unit tests. I would certainly like those to be
properly closed if a tests failed (for whatever reason). Now, some of you will
argue that one shouldn't use files in unit tests. But that would only work in a
ideal and perfect world, which we don't live in.


This should be fairly straightforward to deal with:

1. Write functions to input/output from/to ranges instead of files. Then, have 
the unittests mock up input to drive them that does not come from files. I've 
used this technique very successfully in Warp.


2. If (1) cannot be done, then write the unittests like:

  {
    openfile();
    scope (exit) closefile();
    scope (failure) assert(0);
    ... use enforce() instead of assert() ...
  }

3. In a script that compiles/runs the unittests, have the script delete any 
extraneous generated files.
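
A minimal D sketch of point (1) above: write the routine against an input
range instead of a File, so the unittest can mock up its input from a string
and never touches the filesystem. The function and data are illustrative.

    import std.algorithm : filter, map, sum;
    import std.conv : to;
    import std.string : lineSplitter, strip;

    int sumOfValues(R)(R lines)                   // any range of lines, not only a file
    {
        return lines
            .map!(l => l.strip)
            .filter!(l => l.length > 0)
            .map!(l => l.to!int)
            .sum;
    }

    unittest
    {
        // Mocked-up input driving the function, instead of opening a real file:
        assert(sumOfValues("1\n2\n 3 \n".lineSplitter) == 6);
    }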


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-15 16:25, Dicebot wrote:


How can one continue without recovering? This will result in any kind of
environment not being cleaned and false failures of other tests that
share it.


I will probably use something other than assert in my unit tests - 
something like assertEq, assertNotEq and so on. It's more flexible, can 
give a better error message, and I can have it throw an exception instead 
of an error. But there's still the problem with asserts in contracts and 
other parts of the code.


--
/Jacob Carlborg
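
A minimal sketch of such an assertEq: it reports both values and throws an
Exception (which a test runner can catch and unwind) instead of an
AssertError. The class and helper names are illustrative, not from any
existing library.

    import std.conv : text;

    class TestFailure : Exception
    {
        this(string msg, string file = __FILE__, size_t line = __LINE__)
        {
            super(msg, file, line);
        }
    }

    void assertEq(T, U)(T actual, U expected,
                        string file = __FILE__, size_t line = __LINE__)
    {
        if (actual != expected)
            throw new TestFailure(text("expected <", expected, "> but got <", actual, ">"),
                                  file, line);
    }

    unittest
    {
        assertEq(1 + 1, 2);
    }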


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/15/2014 7:25 AM, Dicebot wrote:

How can one continue without recovering? This will result in any kind of
environment not being cleaned and false failures of other tests that share it.


Unittest asserts are top level - they shouldn't need recovering from (i.e. 
unwinding). Just continuing.


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/15/2014 7:35 AM, Dan Olson wrote:

That is what I am looking for, just being able to continue from a failed
assert in a unittest.


Just use enforce() or something similar instead of assert(). Nothing says you 
have to use assert() in a unittest.




Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/15/2014 6:54 PM, Sean Kelly wrote:

I hate to say it, but I'm inclined to treat nothrow the same as in C++, which is
to basically pretend it's not a part of the language. The efficiency is nice,
but not if it means that throwing an Error will cause the program to be
invalid.  Please tell me there's no plan to change the unwinding behavior when
Error is thrown in standard (ie not nothrow) code.


Don't throw Errors when you need to unwind. Throw Exceptions. I.e. use enforce() 
instead of assert().




Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Sean Kelly via Digitalmars-d

On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright wrote:

On 10/15/2014 6:54 PM, Sean Kelly wrote:
I hate to say it, but I'm inclined to treat nothrow the same 
as in C++, which is
to basically pretend it's not a part of the language. The 
efficiency is nice,
but not if it means that throwing an Error will cause the 
program to be
invalid.  Please tell me there's no plan to change the 
unwinding behavior when

Error is thrown in standard (ie not nothrow) code.


Don't throw Errors when you need to unwind. Throw Exceptions. 
I.e. use enforce() instead of assert().


I'm more concerned about Phobos. If it uses nothrow and asserts 
in preconditions then the decision has been made for me.


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Dan Olson via Digitalmars-d
Walter Bright newshou...@digitalmars.com writes:

 On 10/15/2014 7:35 AM, Dan Olson wrote:
 That is what I am looking for, just being able to continue from a failed
 assert in a unittest.

 Just use enforce() or something similar instead of assert(). Nothing
 says you have to use assert() in a unittest.

Makes sense.  However it is druntime and phobos unittests that already use 
assert.  I have convinced myself that catching Throwable is just fine in my 
case because at worst, unittests that follow an Error might be tainted, but 
only a perfect score of passing all tests really counts.
-- 
dano


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Dan Olson via Digitalmars-d
Ola Fosheim Grøstad ola.fosheim.grostad+dl...@gmail.com writes:

 On Wednesday, 15 October 2014 at 14:25:43 UTC, Dicebot wrote:
 How can one continue without recovering? This will result in any
 kind of environment not being cleaned and false failures of other
 tests that share it.

 fork()?

Forking each unittest sounds like a good solution.
-- 
dano


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/16/2014 6:46 AM, Sean Kelly wrote:

On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright wrote:

Don't throw Errors when you need to unwind. Throw Exceptions. I.e. use
enforce() instead of assert().


I'm more concerned about Phobos. If it uses nothrow and asserts in preconditions
then the decision has been made for me.


Which function(s) in particular?


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread monnoroch via Digitalmars-d

Hi all!
I've read the topic and I am really surprised that so many engineers
have been arguing for so long without having a systematic approach to the
problem.

As I see it, Walter states that there are environmental errors,
and program bugs, which are non-recoverable. So use exceptions
(enforce) for the former and asserts for the latter.

Other folks argue that you might want to recover from a program
bug, or not recover from invalid input. Also, an exception might
itself be a program bug. This is a valid point too.

So if both are true, that clearly means that the right solution
would be to introduce four categories, a cross product of the
above:

- bugs, that are recoverable
- bugs, that are unrecoverable
- input errors, that are recoverable
- input errors, that are not

Given that, it makes sense to use exceptions for recoverable errors,
no matter whether they are bugs or environmental errors, and to
use asserts if you can't recover.

So the programmer decides whether his program can recover, and puts
an assert or an enforce call in his code.

The problem is, as always, with libraries. The library writer
cannot possibly decide whether some unexpected condition is recoverable
or not, so he just can't put both assert and enforce into his
library function, and the caller must check the arguments before
calling the function. Yes, this is annoying, but it is the only
correct way.

But what if he didn't? This brings us to error codes. Yes, they
are the best for library error handling, imo - of course, in the
form of Maybe and Error monads. They are as clear as asking the
caller to decide what to do with the error. But I realize that
you guys are all against error codes of any kind, so...

I would say that since the caller didn't check the arguments
himself, the bug becomes unrecoverable by default and there
should be an assert, which gives a stack trace, so the programmer
can insert appropriate enforces before the function call.

Finally, this brings me to the conclusion: you don't need a stack
trace in the exception; it is never a bug.
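
For illustration, the error-codes-as-Maybe/Error-monads idea could look
roughly like this in D; Result and parsePort are invented for the
example, there is no such type in Phobos:

// A bare-bones result type: either a value or an error string.
struct Result(T)
{
    T value;
    string error;                       // empty/null means success
    bool ok() const { return error.length == 0; }
}

Result!int parsePort(string s)
{
    import std.conv : to, ConvException;
    try
    {
        auto n = s.to!int;
        if (n < 1 || n > 65_535)
            return Result!int(0, "port out of range");
        return Result!int(n, null);
    }
    catch (ConvException)
    {
        return Result!int(0, "not a number");
    }
}

void main()
{
    import std.stdio : writeln;
    auto r = parsePort("80x80");
    if (!r.ok)
        writeln("bad port: ", r.error);  // the caller decides what to do
}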








Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/16/2014 8:36 AM, Dan Olson wrote:

Walter Bright newshou...@digitalmars.com writes:


On 10/15/2014 7:35 AM, Dan Olson wrote:

That is what I am looking for, just being able to continue from a failed
assert in a unittest.


Just use enforce() or something similar instead of assert(). Nothing says
you have to use assert() in a unittest.


Makes sense.  However it is druntime and phobos unittests that already use
assert.  I have convinced myself that catching Throwable is just fine in my
case because at worst, unittests that follow an Error might be tainted, but
only a perfect score of passing all tests really counts.


I don't understand why unittests in druntime/phobos are an issue for users. We 
don't release a DMD unless they all pass - it should be moot for users.




Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Sean Kelly via Digitalmars-d

On Thursday, 16 October 2014 at 18:49:13 UTC, Walter Bright wrote:

On 10/16/2014 6:46 AM, Sean Kelly wrote:
On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright 
wrote:
Don't throw Errors when you need to unwind. Throw Exceptions. 
I.e. use enforce() instead of assert().


I'm more concerned about Phobos. If it uses nothrow and 
asserts in preconditions then the decision has been made for 
me.


Which function(s) in particular?


Nothing specifically... which is kind of the problem.  If I call
an impure nothrow function, it's possible the function accesses
shared state that will not be properly cleaned up in the event of
a thrown Error--say it contains a synchronized block, for
example.  So even if I can be sure that the problem that resulted
in an Error being thrown did not corrupt program state, I can't
be sure that the failure to unwind did not as well.

That said, I'm inclined to say that this is only a problem
because of how many things are classified as Errors at this
point.  If contracts used some checking mechanism other than
assert, perhaps this would be enough.  Again I'll refer to my
'on errors' post that gets into this a bit.  Using two broad
categories, exceptions and errors, is unduly limiting.
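
A contrived sketch of the synchronized-block scenario described above;
whether cleanup actually runs when an Error escapes a nothrow function
is not guaranteed, which is exactly the worry (updateCache is invented
for the example):

__gshared int[string] cache;
__gshared Object cacheLock;

shared static this() { cacheLock = new Object; }

void updateCache(string key, int value) nothrow
{
    try
    {
        synchronized (cacheLock)
        {
            // If this assert fires, an AssertError leaves the nothrow
            // function; the monitor release is not guaranteed to run.
            assert(value >= 0, "negative value is a bug");
            cache[key] = value;
        }
    }
    catch (Exception) {}   // Exceptions are contained, Errors still escape
}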


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Sean Kelly via Digitalmars-d

On Thursday, 16 October 2014 at 18:53:22 UTC, monnoroch wrote:


So if both are true, that clearly means that the right solution
would be to introduce four categories: a cross product of the
above:

- bugs, that are recoverable
- bugs, that are unrecoverable
- input errors, that are recoverable
- input errors, that are not


Yes, I've already started a thread for this:

http://forum.dlang.org/thread/zwnycclpgvfsfaact...@forum.dlang.org

but almost no one replied.


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Dicebot via Digitalmars-d
On Thursday, 16 October 2014 at 06:11:46 UTC, Jacob Carlborg 
wrote:

On 2014-10-15 16:25, Dicebot wrote:

How can one continue without recovering? This will result in 
any kind of environment not being cleaned and false failures of 
other tests that share it.


I will probably use something other than assert in my unit 
tests. Something like assertEq, assertNotEq and so on. It's 
more flexible, can give a better error message and I can have it 
throw an exception instead of an error. But there's still the 
problem with asserts in contracts and other parts of the code.


This is what we are using right now:

public void test ( char[] op, T1, T2 ) ( T1 a, T2 b,
    char[] file = __FILE__, size_t line = __LINE__ )
{
    enforce!(op, TestException)(a, b, file, line);
}

but it won't work well with 3rd-party libraries that use 
assertions.


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/16/2014 12:08 PM, Sean Kelly wrote:

On Thursday, 16 October 2014 at 18:49:13 UTC, Walter Bright wrote:

On 10/16/2014 6:46 AM, Sean Kelly wrote:

On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright wrote:

Don't throw Errors when you need to unwind. Throw Exceptions. I.e. use
enforce() instead of assert().


I'm more concerned about Phobos. If it uses nothrow and asserts in
preconditions then the decision has been made for me.


Which function(s) in particular?


Nothing specifically... which is kind of the problem.  If I call
an impure nothrow function, it's possible the function accesses
shared state that will not be properly cleaned up in the event of
a thrown Error--say it contains a synchronized block, for
example.  So even if I can be sure that the problem that resulted
in an Error being thrown did not corrupt program state, I can't
be sure that the failure to unwind did not as well.


Contract errors in Phobos/Druntime should be limited to having passed it invalid 
arguments, which should be documented, or simply that the function has a bug in 
it, or that it ran out of memory (which is generally not recoverable anyway).


I.e. I'm not seeing where this is a practical problem.
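
For concreteness, a precondition check of that kind looks something
like this (scaledLog is an invented example; in real Phobos code the
check would typically sit in an in-contract): violating the documented
requirement trips an assert, i.e. an Error, not an Exception.

double scaledLog(double x, double base)
{
    // Documented requirement on the caller; failing it is a caller bug.
    assert(x > 0 && base > 1, "scaledLog: invalid arguments");
    import std.math : log;
    return log(x) / log(base);
}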



That said, I'm inclined to say that this is only a problem
because of how many things are classified as Errors at this
point.  If contracts used some checking mechanism other than
assert, perhaps this would be enough.  Again I'll refer to my
'on errors' post that gets into this a bit.  Using two broad
categories, exceptions and errors, is unduly limiting.


My initial impression is that there's so much confusion about what should be an 
Error and what should be an Exception, that adding a third category will not 
improve things.




Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/16/2014 12:21 PM, Dicebot wrote:

On Thursday, 16 October 2014 at 06:11:46 UTC, Jacob Carlborg wrote:

On 2014-10-15 16:25, Dicebot wrote:


How can one continue without recovering? This will result in any kind of
environment not being cleaned and false failures of other tests that
share it.


I will probably use something other than assert in my unit tests. Something
like assertEq, assertNotEq and so on. It's more flexible, can give a better
error message and I can have it throw an exception instead of an error. But
there's still the problem with asserts in contracts and other parts of the code.


This is what we are using right now:

public void test ( char[] op, T1, T2 ) ( T1 a, T2 b,
    char[] file = __FILE__, size_t line = __LINE__ )
{
    enforce!(op, TestException)(a, b, file, line);
}

but it won't work well with 3rd-party libraries that use assertions.


Ok, but why would 3rd party library unittests be a concern? They shouldn't have 
shipped it if their own unittests fail - that's the whole point of having unittests.


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/15/2014 12:19 AM, Kagamin wrote:

Sure, software is one part of an airplane, like a thread is a part of a process.
When the part fails, you discard it and continue operation. In software it works
by rolling back a failed transaction. An airplane has some tricks to recover
from failures, but still it's the no-fail design you argue against: it shuts
down parts one by one when and only when they fail and continues operation no
matter what until nothing works and even then it still doesn't fail, just does
nothing. The airplane example works against your arguments.


This is a serious misunderstanding of what I'm talking about.

Again, on an airplane, no way in hell is a software system going to be allowed 
to continue operating after it has self-detected a bug. Trying to bend the 
imprecise language I use into meaning the opposite doesn't change that.




Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Dicebot via Digitalmars-d

On Thursday, 16 October 2014 at 19:35:40 UTC, Walter Bright wrote:
Ok, but why would 3rd party library unittests be a concern? 
They shouldn't have shipped it if their own unittests fail - 
that's the whole point of having unittests.


Libraries tend to be forked and modified. Libraries aren't always 
tested in an environment similar to the specific production case. 
At the same time, not being able to use the same test runner in 
all Continuous Integration jobs greatly reduces the value of 
having standard unittest blocks in the first place.


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Walter Bright via Digitalmars-d

On 10/16/2014 12:56 PM, Dicebot wrote:

On Thursday, 16 October 2014 at 19:35:40 UTC, Walter Bright wrote:

Ok, but why would 3rd party library unittests be a concern? They shouldn't
have shipped it if their own unittests fail - that's the whole point of having
unittests.


Libraries tend to be forked and modified.


If you're willing to go that far, then yes, you do wind up owning the unittests, 
in which case s/assert/myassert/ should do it.




Libraries aren't always tested in
an environment similar to the specific production case.


Unittests should not be testing their environment. They should be testing the 
function's logic, and should mock up input for them as required.




At the same time, not being able
to use the same test runner in all Continuous Integration jobs greatly reduces the
value of having standard unittest blocks in the first place.


I understand that, but wouldn't you be modifying the unittests anyway if using 
an external test runner tool?
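
As a small illustration of the mocking point above: make the function
take a range of lines instead of opening a file itself, and the
unittest can feed it an in-memory array (countWords is invented for
the example):

import std.algorithm : map, sum;
import std.array : split;

size_t countWords(R)(R lines)
{
    // Works on any range of strings: a File.byLine, or mocked test data.
    return lines.map!(l => l.split.length).sum;
}

unittest
{
    auto fakeInput = ["one two three", "four"];
    assert(countWords(fakeInput) == 4);
}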


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Sean Kelly via Digitalmars-d

On Thursday, 16 October 2014 at 19:56:57 UTC, Dicebot wrote:


Libraries tend to be forked and modified. Libraries aren't 
always tested in an environment similar to the specific 
production case.


This seems relevant:

http://www.tele-task.de/archive/video/flash/16130/


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Dicebot via Digitalmars-d

On Thursday, 16 October 2014 at 20:18:04 UTC, Walter Bright wrote:

On 10/16/2014 12:56 PM, Dicebot wrote:
On Thursday, 16 October 2014 at 19:35:40 UTC, Walter Bright 
wrote:
Ok, but why would 3rd party library unittests be a concern? 
They shouldn't have shipped it if their own unittests fail - 
that's the whole point of having unittests.


Libraries tend to be forked and modified.


If you're willing to go that far, then yes, you do wind up 
owning the unittests, in which case s/assert/myassert/ should 
do it.


Which means changing almost all sources and resolving conflicts 
upon each merge. Forking a library for a few tweaks is not going 
that far, it is an absolutely routine minor change. It also 
complicates propagating changes back upstream because all tests 
need to be re-adjusted back to the original style.



Libraries aren't always tested in
an environment similar to the specific production case.


Unittests should not be testing their environment. They should 
be testing the function's logic, and should mock up input for 
them as required.


Compiler version, libc version, kernel version - any of it can 
affect the behaviour of even a pretty self-contained function. A 
perfectly tested library is as much a reality as a program with 
0 bugs.



At the same time, not being able
to use the same test runner in all Continuous Integration jobs 
greatly reduces the value of having standard unittest blocks in 
the first place.


I understand that, but wouldn't you be modifying the unittests 
anyway if using an external test runner tool?


No, right now one can affect the way tests are run simply by 
replacing the runner with a custom one, and it will work for any 
number of modules compiled in. The beauty of the `unittest` block 
approach is that it is simply a bunch of functions that are 
somewhat easy to discover from the combined sources of the 
program - a custom runner can do pretty much anything with those. 
Or it could, if not for the issue with AssertError and cleanup.
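
For reference, the hook in question is Runtime.moduleUnitTester from
core.runtime; a minimal custom runner that keeps going after failures
might look roughly like this (reporting kept to a bare minimum), and
its catch (Throwable) is exactly where the AssertError/cleanup concern
bites:

import core.runtime : Runtime;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        size_t failed = 0;
        foreach (m; ModuleInfo)
        {
            if (m is null) continue;
            if (auto fp = m.unitTest)
            {
                try
                    fp();
                catch (Throwable t)
                {
                    ++failed;
                    import core.stdc.stdio : printf;
                    printf("FAILED: %.*s\n",
                           cast(int) m.name.length, m.name.ptr);
                }
            }
        }
        return failed == 0;   // false tells the runtime the tests failed
    };
}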


Re: Program logic bugs vs input/environmental errors

2014-10-16 Thread Dan Olson via Digitalmars-d
Walter Bright newshou...@digitalmars.com writes:
 I don't understand why unittests in druntime/phobos are an issue for
 users. We don't release a DMD unless they all pass - it should be moot
 for users.

I think some context was lost.  This is different.  I am making mods to
LDC, druntime, and phobos to target iPhones and iPads (ARM-iOS).  I also
can't claim victory until all unittests pass.


Re: Program logic bugs vs input/environmental errors

2014-10-15 Thread Walter Bright via Digitalmars-d

On 10/14/2014 8:36 PM, Dicebot wrote:

On Wednesday, 15 October 2014 at 03:18:31 UTC, Walter Bright wrote:

However, the compiler is still going to regard the assert() as nothrow, so the
unwinding from an Exception won't happen until up stack a throwing function is
encountered.


This makes impossible to have non-fatal unittests and the very reason I was
looking for a replacement.


I don't really understand the issue. Unittests are usually run with a separate 
build of the app, not in the main app. When they are run in the main app, they 
are run before the app even gets to main().


Why do you need non-fatal unittests?


Re: Program logic bugs vs input/environmental errors

2014-10-15 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-15 07:57, Walter Bright wrote:


Why do you need non-fatal unittests?


I don't know if this would cause problems with the current approach. But 
most unit test frameworks do NOT stop on the first failure the way D 
does. They catch the exception, continue with the next test and in the 
end print a final report.


--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-15 Thread Kagamin via Digitalmars-d

On Saturday, 4 October 2014 at 08:08:49 UTC, Walter Bright wrote:

On 10/3/2014 4:27 AM, Kagamin wrote:
Do you interpret airplane safety right? As I understand, 
airplanes are safe
exactly because they recover from assert failures and continue 
operation.


Nope. That's exactly 180 degrees from how it works.

Any airplane system that detects a fault shuts itself down and 
the backup is engaged. No way in hell is software allowed to 
continue that asserted.


Sure, software is one part of an airplane, like a thread is a 
part of a process. When the part fails, you discard it and 
continue operation. In software it works by rolling back a failed 
transaction. An airplane has some tricks to recover from 
failures, but still it's the no-fail design you argue against: it 
shuts down parts one by one when and only when they fail and 
continues operation no matter what until nothing works and even 
then it still doesn't fail, just does nothing. The airplane 
example works against your arguments.


The unreliable design you talk about would be committing a failed 
transaction, but no, nobody suggests that.

