Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar

On 5/11/2013 11:20 AM, jacob navia wrote:


OK, I did a similar thing. I just compiled sin(argc) in main.
The results prove that you were right. The single fsin instruction
takes longer than several HUNDRED instructions (calls, jumps,
table lookups, what have you).

Gone are the times when an fsin would take 30 cycles or so.
Intel has destroyed the FPU.


That's an unwarranted claim, but indeed the algorithm used
within the FPU is inferior to the one in the library. Not
so surprising: the one in the chip is old, and we have made
good advances in learning how to calculate things accurately.
Also, the library is using the fast new 64-bit arithmetic.
So none of this is (or should be) surprising.


In the benchmark code all that code/data is in the L1 cache.
In real life code you use the sin routine sometimes, and
the probability of it not being in the L1 cache is much higher,
I would say almost one if you do not do sin/cos VERY often.


But of course you don't really care about performance so much
unless you *are* using it very often. I would be surprised if
there are any real programs in which using the FPU instruction
is faster.

And as noted earlier in the thread, the library algorithm is
more accurate than the Intel algorithm, which is also not at
all surprising.


For the time being I will go on generating the fsin code.
I will try to optimize Moshier's SIN function later on.


Well I will be surprised if you can find significant
optimizations to that very clever routine. Certainly
you have to be a floating-point expert to even touch it!

Robert Dewar




Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar

On 5/11/2013 10:46 AM, Robert Dewar wrote:



As for 1), the only way is to measure that. Compile the following and we
will see who is right.


Right, probably you should have done that before posting
anything! (I leave the experiment up to you!)


And of course this experiment says nothing about accuracy!



Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar



As for 1), the only way is to measure that. Compile the following and we
will see who is right.


Right, probably you should have done that before posting
anything! (I leave the experiment up to you!)


cat > sin.c << 'EOF'
#include <math.h>

int main(void)
{
    int i;
    double x = 0;
    double ret = 0;

    for (i = 0; i < 1000; i++) {
        ret += sin(x);
        x += 0.3;
    }
    return (int) ret;
}
EOF

gcc sin.c -O3 -lm -S
cp sin.s fsin.s
# edit fsin.s so that the sin() call is replaced by the fsin instruction
gcc sin.s -lm -o sin; gcc fsin.s -lm -o fsin
for I in `seq 1 10` ; do
time ./sin
time ./fsin
done




I think that gcc has a problem here. I am pointing you to this problem,
but please keep in mind
I am no newbie...


Sure, but that does not mean you are familiar with the intricacies
of accurate computation of transcendental functions!


jacob







Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar

On 5/11/2013 5:42 AM, jacob navia wrote:


1) The fsin instruction is ONE instruction! The sin routine is (at
least) a thousand instructions!
 Even if the fsin instruction itself is "slow", it should be a thousand
times faster than the
 complicated routine gcc calls.
2) The FPU is at a 64-bit mantissa using gcc, i.e. fsin will calculate
with a 64-bit mantissa and
 NOT only 53 bits as with SSE2. The fsin instruction is more precise!


You are making conclusions based on naive assumptions here.


I think that gcc has a problem here. I am pointing you to this problem,
but please keep in mind
I am no newbie...


Sure, but that does not mean you are familiar with the intricacies
of accurate computation of transcendental functions!


jacob





Re: C/C++ Option to Initialize Variables?

2013-02-18 Thread Robert Dewar



Wrong.  It specifies that objects with static storage duration that
aren't explicitly initialized are initialized with null pointers, or
zeros, depending on type.  6.7.8.10.


OK, that means that the comments of my last message don't apply to
variables of this type. So they should at least optionally be excluded
from any feature to initialize variables.


Hence if .bss is to be used to place such objects then the runtime system
_must_ make sure that it's zero initialized.




Re: C/C++ Option to Initialize Variables?

2013-02-18 Thread Robert Dewar



Forgive me, but I don't see where anything is guaranteed to be zero'd
before use. I'm likely wrong somewhere since you disagree.


http://en.wikipedia.org/wiki/.bss


This is about what happens to work, and specifically notes that it is
not part of the C standard. There is a big difference between programs
that obey the standard, and those that don't but happen to work on some
systems. The latter programs have latent bugs that can definitely
cause trouble.

A properly written C program should avoid uninitialized variables, just
as a properly written Ada program should avoid them.

In GNAT, we have found the Initialize_Scalars pragma to be very useful
in finding uninitialized variables. It causes all scalars to be
initialized using a bit pattern that can be specified at link time
and modified at run time.

If you run a program with different patterns, it should give the same
result; if it does not, you have an uninitialized variable or other
non-standard aspect in your program which should be tracked down and
fixed.

Note that the BSS-is-always-zero guarantee often does not apply when
embedded programs are restarted, so it is by no means a universal
guarantee.



Re: hard typdef - proposal - I know it's not in the standard

2013-01-28 Thread Robert Dewar

On 1/28/2013 6:48 AM, Alec Teal wrote:

On 28/01/13 10:41, Jonathan Wakely wrote:

On 28 January 2013 06:18, Alec Teal wrote:

the very
nature of just putting the word "hard" before a typedef is something I find
appealing

I've already explained why that's not likely to be acceptable, because
identifiers are allowed before 'typedef' and it would be ambiguous.
You need a different syntax.


That is why I'd want both, but at least in my mind n3515 would be nearer to
"if I really wanted it I could use classes" than the hard-typedef.

I've already said N3515 is not about classes.

You keep missing the point of what I mean by "like classes" I mean in
terms of achieving the result, PLEASE think it though.


I have read this thread, and I see ZERO chance of this proposal being
accepted for inclusion into gcc at the current time.

Feel free to create your own version of gcc that has this feature (that
after all is what freedom in software is about) and promote it elsewhere
but it is really a waste of time to debate it further on this list.

The burden for non-standard language extensions in gcc is very high.
The current proposal is ambiguous and flawed, and in any case does not
begin to meet this high standard.

I think this thread should be allowed to RIP at this stage.



Re: Integer Overflow/Wrap and GCC Optimizations

2013-01-24 Thread Robert Dewar

On 1/24/2013 10:33 AM, Jeffrey Walton wrote:


In this case, I claim we must perform the operation. Its the result
that we can't use under some circumstances (namely, overflow or wrap).


You do not have to do the operation if the program has an
overflow. The compiler can reason about this, so for example

  a = b + 1;
  if (a < b) ...

The compiler can assume that the test is false, because the only
conceivable way it could be true is an overflow that wraps,
but that's undefined. If a is not used other than in this test,
the compiler can also eliminate the addition and the assignment.




Re: Integer Overflow/Wrap and GCC Optimizations

2013-01-24 Thread Robert Dewar

On 1/24/2013 10:02 AM, Jeffrey Walton wrote:


What I am not clear about is when an operation is deemed "undefined"
or "implementation defined".


The compiler is free to assume that no arithmetic operation
on signed integers results in overflow. It is allowed to
take advantage of such assumptions in generating code (and
it does so).

You have no right to assume *anything* about the semantics
of code that has an integer overflow (let alone make
assumptions about the generated code).

This is truly undefined, not implementation defined, and
if your program has such an overflow, you cannot assume
ANYTHING about the generated code.



Re: hard typdef - proposal - I know it's not in the standard

2013-01-24 Thread Robert Dewar

On 1/24/2013 9:10 AM, Alec Teal wrote:


Alec I am eager to see what you guys think, this is a 'feature' I've
wanted for a long time and you all seem approachable rather than the
distant compiler gods I expected.


I certainly see the point of this proposal, indeed introducing
this kind of strong typing makes sense to anyone familiar with
Ada, where it is a standard feature of the language, and the
way that Ada is always used.

However, I wonder whether it is simply too big a feature for
gcc to add on its own to C++. For sure you would have to have
language lawyers look very carefully at this proposal to see
if it is indeed sound with respect to the formal rules of the
language. Often features that make good sense when expressed
informally turn out to be problematic when they are fully
defined in the appropriate language of the standard.


I can also see why 'strong typedefs' were not done, it tries to do
too much with the type system and becomes very object like


I don't see what this has to do with objects!


Re: gcc : c++11 : full support : eta?

2013-01-22 Thread Robert Dewar



About the time Clang does because GCC now has to compete."
How about that? Clang is currently slightly ahead and GCC really needs
to change if it is to continue to be the best.


Best is measured by many metrics, and it is unrealistic to expect
any product to be best in all respects.

Anyway, it still comes down to figuring out how to find the resources.
It is not clear that there is commercial interest in rapid implementation
of C++11; we certainly have not heard of any such interest, and in the
absence of such commercial interest, we do indeed come down to hoping
to find the volunteer help that is needed.



Re: not-a-number's

2013-01-16 Thread Robert Dewar

On 1/16/2013 7:10 AM, Mischa Baars wrote:


And as I have said before: if you are satisfied with the answer '2',
then so be it and you keep the compiler the way it is; personally I am
not able to accept changes to the sources anyway. I don't think it is
the right answer though.


The fact that you don't think that gcc should follow the C standard
is hardly convincing unless it is backed up by convincing technical
argument. I see nothing surprising about the 2 here; indeed any other
answer *would* be surprising. I still don't understand the basis for
your non-standard views.


Mischa.





Re: not-a-number's

2013-01-16 Thread Robert Dewar

On 1/16/2013 6:54 AM, Mischa Baars wrote:

And indeed apparently the answer then is '2'. However, I don't think
this is correct. If that means that there is an error in the C
specification, then there probably is an error in the specification.


The C specification seems perfectly reasonable to me (in fact it is
rather familiar that x != x is a standard test for something being
a NaN). The fact that you for unclear reasons don't like the C spec
does not mean it is wrong!



Re: Fwd: Updating copyright dates automatically

2013-01-02 Thread Robert Dewar

On 1/2/2013 12:26 PM, Jeff Law wrote:


Any thoughts on doing something similar?

I've always found lazily updating the copyright years to be error-prone.
If we could just update all of them now, which is OK according to the
FSF guidelines, we could avoid one class of problems.


For GNAT at AdaCore, we have a precommit script that does not let
you check in something with a wrong copyright date. That works well.

(boy that was a gigantic email, I hope we don't get a slew of people
being lazy and quoting it :-))


Re: Please don't deprecate i386 for GCC 4.8

2012-12-15 Thread Robert Dewar

On 12/15/2012 12:32 PM, Cynthia Rempel wrote:

Hi,

Thanks for the fast response!

So to keep an architecture supported by GCC, we would need to:

Three or more times a year preferably either during OR after
"stage3"

1. use the SVN version of gcc, 2. patch with an RTEMS patch, 3. use
./contrib/test_summary and pipe the output to a shell. 4. Report the
testresults to gcc-patches.

Would this be sufficient to maintain support for an architecture?  As
far as support goes, I rebuild RTEMS quite often, so once I
understand how to run the tests I don't mind doing so for the x86
architectures. If running the test script is all that's required, I
can do that.


Well of course it would always be appreciated if you can jump in
and help sort out problems that are 386 specific (hopefully there
won't be any!)


Re: Please don't deprecate i386 for GCC 4.8

2012-12-15 Thread Robert Dewar

On 12/15/2012 12:42 AM, Ralf Corsepius wrote:


If you want a port to be live show that it is live by posting regular
testresults to gcc-testresults.

Not all of this world is Linux nor backed by large teams at 
companies :)  We simply do not have the resources do to this.


But that's the point. If you don't have the resources, you seem
to be expecting others to provide them, but at this stage I
really don't see a strong argument for investing such effort.



Re: Please don't deprecate i386 for GCC 4.8

2012-12-14 Thread Robert Dewar

Having read this whole thread, I vote for deprecating the 386.
People using this ancient architecture can perfectly well use
older versions of gcc that have this support.


Re: Please don't deprecate i386 for GCC 4.8

2012-12-14 Thread Robert Dewar

On 12/14/2012 3:13 PM, Cynthia Rempel wrote:

Hi,

RTEMS still supports the i386, and there are many i386 machines still
in use.  Deprecating the i386 will negatively impact RTEMS ability to
support the i386.  As Steven Bosscher said, the "benefits" are small,
and the impact would be serious for RTEMS i386 users.


Since there is a significant maintenance burden for such continued
support, I guess a question to ask is whether the RTEMS folks or
someone using RTEMS are willing to step in and shoulder this burden.


Re: Deprecate i386 for GCC 4.8?

2012-12-13 Thread Robert Dewar

On 12/13/2012 7:26 AM, Steven Bosscher wrote:


Ralf has found one such a vendor, it seems.

But to me, that doesn't automatically imply that GCC must continue to
support such a target. Other criteria should also be considered. For
instance, quality of implementation and maintenance burden.


Yes, of course these are valid concerns. It's just important to have
all the facts. In particular, it would be interesting to contact this
company and see if they use gcc. Perhaps they would be willing to invest
some development effort?



Re: Deprecate i386 for GCC 4.8?

2012-12-13 Thread Robert Dewar



Intel stopped producing embedded 386 chips in 2007.


Right, but this architecture is not protected, so the
question is whether there are other vendors producing
compatible chips. I don't know the answer.



Re: Deprecate i386 for GCC 4.8?

2012-12-12 Thread Robert Dewar

On 12/12/2012 2:52 PM, Steven Bosscher wrote:


And as usual: If you use an almost 30 years old architecture, why
would you need the latest-and-greatest compiler technology?
Seriously...


Well, the embedded folk often end up with precisely this dichotomy :-)
But if there is no sign of embedded 386 chips, then I agree it is
reasonable to deprecate.


Ciao!
Steven





Re: Deprecate i386 for GCC 4.8?

2012-12-12 Thread Robert Dewar

On 12/12/2012 1:01 PM, Steven Bosscher wrote:

Hello,

Linux support for i386 has been removed. Should we do the same for GCC?
The "oldest" ix86 variant that'd be supported would be i486.


Are there any embedded chips that still use the 386 instruction set?



Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Robert Dewar

On 11/24/2012 1:13 PM, Jonathan Wakely wrote:


The official gmail app, which obviously integrates well with gmail and
is good in most other ways, won't send non-html mails.


There seem to be a variety of alternatives


http://www.tested.com/tech/android/3110-the-best-alternative-android-apps-to-manage-all-your-email/


K-9 is a free software client that looks interesting


I find that very annoying, but I get annoyed with the app and am not
suggesting the GCC lists should change to deal with it.





Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Robert Dewar

On 11/24/2012 12:59 PM, Daniel Berlin wrote:

On Sat, Nov 24, 2012 at 12:47 PM, Robert Dewar  wrote:



2) The fact that Android refuses to provide a non-HTML e-mail capability
is ridiculous but does not seem to me to be a reason for us to change
our policy.



Surely there are alternative email clients for Android that have
plain-text capability?



Yes, we should expect users to change, instead of keeping up with users.


Well my experience with HTML-burdened mail is awful. From people who set
ludicrous font choices, to bad color choices, to inappropriate use of
multiple fonts, to inappropriate use of colors, it's a mess.

I think it is perfectly reasonable to expect serious developers to
send text messages in text form. BTW, our experience at AdaCore, where
we get lots of email from lots of customers, users, hobbyists, and
students, sending email from all sorts
of programs, is that yes, occasionally they send us HTML burdened
email, but almost always when we ask them to adjust their mailers to
send text, they can do so without problems.



Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Robert Dewar



2) The fact that Android refuses to provide a non-HTML e-mail capability
is ridiculous but does not seem to me to be a reason for us to change
our policy.


Surely there are alternative email clients for Android that have
plain-text capability?



Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-23 Thread Robert Dewar

For me the most annoying thing about HTML burdened emails
is idiots who choose totally inappropriate fonts, that make
their stuff really hard to read. I choose a font for plain
text emails that is just right on my screen etc. I do NOT
want it overridden. And as for people who use color etc,
well others have said enough there .


Re: Fwd: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 11:08 AM, Richard Kenner wrote:

Correct.  A court of competent jurisdiction can decide whether your scheme
conforms to the relevant licenses; neither licens...@fsf.org nor the
people on this list can.


A minor correction: licens...@fsf.org *could* determine that since they are
the copyright holders.  If they say it's OK, that would be permitting such
a scheme.  However, the FSF, as a matter of policy, *does not* respond to
queries about whether or not some scheme violates the GPL.


And why should they? Or why would they?



I believe in free software as a contribution to a better society and
believe in the use of licenses such as GPLv3 to promote software sharing
by providing a software commons that can be used by those who will
contribute their changes to that commons, and do not consider this list -
or any GNU Project list - an appropriate place to seek advice about how to
do things going against the spirit of that commons.


I very much agree!


Me too!






Re: Fwd: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 9:44 AM, nk...@physics.auth.gr wrote:

Quoting Richard Kenner :


There are not many lawyers in Greece that deal with open-source licenses.


The legal issue here has nothing whatsoever to do with open-source
licenses: the exact same issue comes up with proprietary licenses and
that, in fact, is where most of the precedents come from.

The legal issue is in the definition of a "derived work" and what kind
of separation is needed between two programs ("works") to be able to
successfully assert that one is not a derived work of the other.


Yes, this is the major issue here.


One principle that can be applied is that if you have a program in
two pieces, then they are independent if either of them can be used
(and is used in practice) with other programs. But if the two pieces
can only work together, that seems part of the same program. I tried
to get this principle established in federal court in the Bentley
vs. Intergraph trial, but unfortunately it settled 24 hours before
the judge published his opinion.



Re: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 8:17 AM, nk...@physics.auth.gr wrote:


I disagree.


I think you are wrong; however, it is not really productive to express it.


I would not casually ignore Richard's opinion; he has FAR more
experience here than you do, and far more familiarity with
the issues involved.



Re: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

I'm pretty certain I have correctly interpreted GPL,v3. I have good
reasons to believe that. However, I'm willing to read your
interpretation of the GPL,v3, if you have any.


If you are certain enough, then you can of course proceed
on that assumption. I have no interest in giving my opinion
on this, why should I? Perhaps others will, who knows?
We will see, but it would not surprise me if no one is
willing to provide the equivalent of an electronic
letter of comfort :-)



BTW, it is no surprise that you got no response from
licens...@fsf.org.


I thought this was their job. Obviously I was wrong. I'm not trying to
circumvent the GPL, just to adhere to it. Is this so wrong? Then what
is the point of the exception clauses? They are there, but you don't
want people to understand how to use them?


Yes, you were wrong, it is not the job of that mailing list to
provide legal advice!

There are two comfortable ways to conform to the GPL.

a) make all your own stuff GPL'ed

b) write proprietary code, that links in only modules with
the standard library exception.

Anything else, and you are pretty much on your own, especially
if trying to rig up some system that has full-GPL components and
non-GPL components.

Even a) and b) are a little tricky if you don't have a well defined
entity that can guarantee the licensing of the modules you use (remember
that notices within files do not have legal weight).



Re: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 5:52 AM, nk...@physics.auth.gr wrote:


1. Is it possible to use this scheme and not violate the GPL,v3 for
GCC? If I use GIMPLE dumps generated by "-fdump-tree-all" I think
there is a violation (correct me if not). Thus this module should be
FLOSS/GPL'ed, right?


You can't expect to get legal advice from a list like this, and if
you do get advice, you can't trust it. You have to consult an attorney
to evaluate issues like this, and even then you can't get
guaranteed definitive advice. Copyright issues are complex,
as Supap Kirtsaeng is discovering in his trip to the Supreme Court.

Furthermore, no one has any interest in assuring you that what
you are doing is OK in advance. The GPL is about encouraging
people to use the GPL, and the gcc community does not really
have an interest in making it easier for people to follow
some other path.

This may seem a little harsh, but it's (somewhat inevitably)
the way things are.

The only thing that would assure you that what you are planning
is OK is a specific interpretation of how the GPL applies by the
copyright holder. But this is not going to happen. Random non-expert
opinions by folks who are not attorneys may help confirm your
interpretation, but it's risky to rely on such opinions.

BTW, it is no surprise that you got no response from
licens...@fsf.org.

Robert Dewar


Re: Libgcc and its license

2012-10-10 Thread Robert Dewar

On 10/10/2012 4:16 PM, Joseph S. Myers wrote:


I'm not talking about the relation between the headings textually located
in a source file and the license of that source file.  I'm talking about
the relation between the license of a .o file and the license of .h files
#included at several levels of indirection from the .c source that was
compiled to that .o file (in particular, headers included within tm.h, but
most or all of the content of which is irrelevant for code being built for
the target).


Right, I understand, but that gets messy quickly!






Re: Libgcc and its license

2012-10-10 Thread Robert Dewar

On 10/10/2012 10:48 AM, Joseph S. Myers wrote:

On Wed, 10 Oct 2012, Gabor Loki wrote:


2) repeat all the compilation commands related to the previous list in
the proper environment. The only thing which I have added to the
compilation command is an extra "-E" option to preprocess every sources.
3) create a unique list of all source and header files from the
preprocessed files.
4) at final all source, header and generated files are checked for their
licenses.


The fact that a header is read by the compiler at some point in generating
a .o file does not necessarily mean that object file is a work based on
that header; that is a legal question depending on how the object code
relates to that header.


Well, legally the status of a file is not in any way affected by what
the header of the file says, but we should indeed try to make sure
that all headers properly reflect the intent.



Re: GCC

2012-09-24 Thread Robert Dewar

On 9/24/2012 6:53 AM, Jerome Huck wrote:

from Mr Jerome Huck

Good morning.

I have been using the GCC suite on Windows, mainly the various
Fortrans: 77, 2003, ... Thanks for those tools! The little Google Nexus 7
seems a wonderful tool. I would like to know if we can expect a version
of GCC to run on Android for devices such as the Nexus 7?


Sooner if you get to work on creating the port!


Thanks in advance.

Best regards.





Re: Allow use of ranges in copyright notices

2012-07-02 Thread Robert Dewar

On 7/2/2012 8:35 AM, Alexandre Oliva wrote:

On Jun 30, 2012, David Edelsohn  wrote:


IBM's policy specifies a comma:



, 



and not a dash range.


But this notation already means something else in our source tree.



I think using the dash is preferable, and is a VERY widely used
notation, used by all major software companies I deal with!



Re: Code optimization: warning for code that hangs

2012-06-24 Thread Robert Dewar

On 6/24/2012 12:09 PM, Ángel González wrote:

"Peter A. Felvegi" writes:

My question is: wouldn't it be possible to print a warning when a jmp
to itself or trivial infinite recursion is generated? The code
compiled fine w/ -Wall -Wextra -Werror w/ 4.6 and 4.7.

Note that if the target architecture is a microcontroller, an endless
loop can be a legitimate way to finish / abort the program.



But not an infinite recursion! And an endless loop is such a rare
case that it deserves a warning; if it's a false positive in this case,
so what?



Re: Code optimization: warning for code that hangs

2012-06-24 Thread Robert Dewar

On 6/24/2012 11:22 AM, Richard Guenther wrote:


I suppose I think it would be reasonable to issue a -Wall warning for
code like that.  The trick is detecting it.  Obviously there is nothing
wrong with a recursive call.  What is different here is that the
recursive call is unconditional.  I don't see a way to detect that
without writing a specific warning pass to look for that case.


Ada has this warning, and it has proved useful!


Re: How do I disable warnings across gcc versions?

2012-05-14 Thread Robert Dewar

On 5/14/2012 6:26 PM, Andy Lutomirski wrote:


This seems to defeat the purpose, and adding
#pragma GCC diagnostic ignored "-Wpragmas"
is a little gross.  How am I supposed to do this?


The gcc mailing list is for gcc development, not
questions about the use of gcc; please address such
questions to the gcc-help list.


Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-30 Thread Robert Dewar

On 4/30/2012 4:16 AM, Paulo J. Matos wrote:

Peter,

We have a working backend for a Harvard-architecture chip where
function pointers and data pointers necessarily have different sizes. We
couldn't do this without changing GCC itself in strategic places and
adding some extra support in our backend. We haven't used address spaces
or any other existing GCC solution.


Sounds like a useful set of changes to have in the main sources, since
this is hardly a singular need!


Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 1:19 PM, Basile Starynkevitch wrote:


For instance, I don't think that porting the Linux kernel (or the
FreeBSD one) to such an architecture (having data pointers of a
different size than function pointers) is easy.


Well, it doesn't surprise me too much that GNU/Linux has non-standard
stuff in it.


And GTK wants nearly all pointers to be gpointer-s, and may cast them
to function pointers internally.


But GTK surprises me more. I guess the C world always surprises me in 
the extent to which people ignore the standard :-)


Regards.




Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 12:47 PM, Basile Starynkevitch wrote:


My biased point of view is that designing a processor instruction set
(with POSIX-like systems or standard C software in mind) with function
pointers of a different size than data pointers is today a mistake:
most software makes the implicit assumption that all pointers have the
same size.


What's your data for "most" here? I would have guessed that most
software doesn't care.


Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 9:25 AM, Andreas Schwab wrote:

Robert Dewar  writes:


Just to be clear, there is nothing in the standard that forbids the
sizes being different AFAIK? I understand that both gcc and apps
may make unwarranted assumptions.


POSIX makes that assumption, via the dlsym interface.


That's most unfortunate; I wonder why this assumption was ever
allowed to creep into the POSIX interface. I wonder if it was
deliberate or accidental?


Andreas.





Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 8:51 AM, Georg-Johann Lay wrote:

Peter Bigot a écrit:


The MSP430's split address space and ISA make it expensive to place
data above the 64 kB boundary, but cheap to place code there.  So I'm
looking for a way to use HImode for data pointers, but PSImode for
function pointers.  If gcc supports this, it's not obvious how.

I get partway there with FUNCTION_MODE and some hacks for the case
where the called object is a symbol, but not when it's a
pointer-to-function data object.


I don't think it's a good solution to use different pointer sizes.
You will run into all sorts of trouble -- both in the application and
in GCC.


Just to be clear, there is nothing in the standard that forbids the
sizes being different AFAIK? I understand that both gcc and apps
may make unwarranted assumptions.



Re: Switching to C++ by default in 4.8

2012-04-17 Thread Robert Dewar

On 4/16/2012 5:36 AM, Chiheng Xu wrote:

On Sat, Apr 14, 2012 at 7:07 PM, Robert Dewar  wrote:

hand, but to suggest banning all templates is not a supportable
notion.



Why ?



Because some simple uses of templates are very useful, and
not problematic from any point of view.


Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/14/2012 6:02 AM, Chiheng Xu wrote:


If the debugger fully supports namespaces, that will be nice. I am just
saying that, in case the debugger has trouble with namespaces, you can
avoid them.

But personally, when I write C++ code, I never use namespaces. I
always prefix my class names (and corresponding source file names) with
a proper module name, and put all the source files of a module in its
dedicated sub-directory. This makes class names globally unique
throughout the project, and facilitates further refactoring (searching
and replacing).


I find that rather a horrible substitute for proper use of namespaces.
I know it is common, partly because that's what you have to do in C,
and partly because namespaces were added late.


When using namespaces, people can and tend to use the same name in
different namespaces; this seems like an advantage, but I see it as a
disadvantage.


I think that is a seriously misguided position. There is a good reason
for adding namespaces (Ada has always had this kind of capability in
the form of packages, and the package concept in Ada is, to Ada
programmers, one of its most powerful features). Since you never use
namespaces, it is not surprising that you do not appreciate their
importance.

To me, the ability to make extensive use of namespaces is one of
the strong arguments for switching to C++.


If you want to change a name in one namespace to some other, more
accurate name, you use some search tool to find all the references to
the name.  You will find that the name is probably also used in other
namespaces, so you just can't use a "replace all" command to replace
all references with the new name; you must replace them one by one,
manually.  Is this what you want?


You use proper tools that do the replacement just of references to
the entity whose name you want to change. It is often the case that
people avoid use of features because of a lack of proper tools, but
certainly there are tools that can do this kind of intelligent
replacement (GPS from AdaCore is one such example, but we certainly
wouldn't suggest it was unique in this respect!)


Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/14/2012 6:39 AM, Gabriel Dos Reis wrote:


Indeed, the notion that 'namespace' is "advance" is troublesome.
Similarly I would find any notion that simple uses and definitions
of templates (functions, datatypes) are "advanced" a bit specious.


Indeed! In the case of templates there is a real issue, in that
we all know that misuse of templates can get completely out of
hand, but to suggest banning all templates is not a supportable
notion.



Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/14/2012 6:38 AM, Chiheng Xu wrote:


Actually, I only partially agree with you on this. And I didn't say
smaller is necessarily better.
But normally, high-cohesion, low-coupling code tends not to be large.
Normally large files tend to export only a few highly related entry
points.  Most of the functions in a large file are sub-routines
(directly or indirectly) of the entry points.  The functions can be
divided into several groups or layers, and each group or layer can
form a conceptual sub-module.  I often see GCC developers divide the
functions in a large file into sub-modules by prefixing them with a
sub-module-specific prefix and grouping them together.  This is good,
but not enough.  If the functions in sub-modules are put in separate
files, then the code will be more manageable than otherwise, because
the interfaces/boundaries between sub-modules are clearer, and the
code has higher cohesion and lower coupling.


I find the claim unconvincing in practice: it is possible to have code
in separate files with unclear interfaces and boundaries, and code in
single files with perfectly clear interfaces and boundaries.  You can
claim without evidence that there is a causal relation here, but that
is simply not the case in my experience.







Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/13/2012 9:34 PM, Chiheng Xu wrote:

On Wed, Apr 4, 2012 at 7:38 PM, Richard Guenther
  wrote:


Oh, and did we address all the annoyances of debugging gcc when it's
compiled by a C++ compiler? ...



Probably, if you can refrain from using some "advance" C++
features(namespace, template, etc.),  you will not have such
annoyances.


To me namespaces are fundamental in terms of the advantages that
moving to C++ can give in a large project, I would never regard
them as some "advanced" feature to be avoided. If namespaces
cause trouble for the debugger, that's surprising and problematic!






Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/13/2012 9:15 PM, Chiheng Xu wrote:


So, I can say, most of the GCC source code is in large files.

And this also hold for language front-ends.


I see nothing inherently desirable about having all small files.
For example, in GNAT, yes, some files are large; sem_ch3 (semantic
analysis for chapter 3 stuff, which includes all of type handling)
is large (over 20,000 lines, 750KB), but nothing would be gained
(and something would be lost) by trying to split this file up.

As long as all your tools can handle large files nicely, and
as long as the internal organization of the large file is
clean and clear, I see no problem.






Re: RFC: -Wall by default

2012-04-13 Thread Robert Dewar

On 4/13/2012 2:03 AM, Gabriel Dos Reis wrote:

On Thu, Apr 12, 2012 at 4:50 PM, Robert Dewar  wrote:

End of thread for me, remove me from the reply lists, thanks
discussion is going nowhere, at this stage my vote is for
no change whatever in the way warnings are handled.


I was asked "wassup with Robert?".  All I can say is that
it is a decade-old relationship :-)

-- Gaby


Nothing up, just felt nothing more was worth saying on this
thread, no point in just getting into the mode of repeating
stuff going nowhere.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

End of thread for me, remove me from the reply lists, thanks
discussion is going nowhere, at this stage my vote is for
no change whatever in the way warnings are handled.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 5:40 PM, Gabriel Dos Reis wrote:


It isn't nonsense just because you decide so or you don't like the observation.


  and
nonsense now, this has nothing to do with incompleteness!


I think you don't know what incompleteness is about.  Yes, it is
nonsense, because no one can make any sense out of it except you,
and you refuse to elaborate or explain beyond just repeating
the observation.  Feel free to explain.



-- Gaby







Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 5:35 PM, Gabriel Dos Reis wrote:


  There's nothing more ambiguous than saying that something is final in a
world where perfection is never achieved.  That's why software has
monotonically increasing version numbers, instead of just one that means "this
is done now".


As I observed earlier, Gödelization is great for machines.


You observed this before, but it was nonsense then and
nonsense now; this has nothing to do with incompleteness!


-- Gaby




Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 10:48 AM, Andrew Haley wrote:


Ultimately, it's a matter of taste and experience.  I'm going to find
it hard to write for people who don't know the relative precedence of
& and |.


There are probably some programmers who completely know ALL the
operator precedence rules in C.  There is probably some subset of
those who feel free to write code that takes full advantage of these
rules.  I would hate to read code written by such people :-)


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 11:23 AM, Gabriel Dos Reis wrote:


fewer warnings to more warnings, what could be more
ordered than that!


What exactly do you put in -Wn to make it give *more* warnings?
I can think of only a reduced number of switches that would give you
more warnings on a specific program without them being terribly
useful.


It's JUST like the optimization case: you use a higher number
to get more optimization.  Yes, there may be cases where this
hurts (we have seen cases where -O3 is slower than -O2
due to cache effects).

For warnings you put a higher number to get more warnings. Yes,
you may find that you get too many warnings and they are not
useful. Remedy: reduce the number after -W :-)


-On means more optimizations for higher n, simple enough?


like the traditional -O2 vs. -O3?


Right, -O3 does more optimizations than -O2. Of course there
might be cases where this doesn't help. I bet if you look
hard enough you will find cases where -O1 code is slower
than -O0.

For -O, we do not guarantee that a higher number means faster code,
just that more optimizations are applied.

for -W, we do not guarantee that a higher number means a more
useful set of warnings, just more of them.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 10:48 AM, Andrew Haley wrote:


Certainly, everything that adds to clarity (and has no runtime costs!)
is desirable.  But adding parentheses may not add to clarity if doing
so also obfuscates the code.  There is a cost to the reader due to a
blizzard of syntactically redundant parentheses; if there weren't, we
wouldn't bother with operator precedence.


Well I think blizzard is overblown. Ada requires these parentheses
and I never heard of anyone complaining of blizzards :-)


Ultimately, it's a matter of taste and experience.  I'm going to find
it hard to write for people who don't know the relative precedence of
& and |.


Well it's always a problem for programmers who know too much to write
code that can easily be read by everyone, in Ada we take the position
that readability is paramount, and we really don't care if programmers
find it harder to write readable code :-)


Andrew.




Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 11:06 AM, Gabriel Dos Reis wrote:


What is nonsensical there?


But they *are* ordinal.


Now?  What is the order?


fewer warnings to more warnings, what could be more
ordered than that!


  It works just fine for -O,


Exactly what happens with -O?  -On does not necessarily
generate faster or better code when n is higher.


-On means more optimizations for higher n, simple enough?


In fact, -Os is a perfect example of a short name that is NOT
a number.


right, because -Os lies outside the "more optimizations for
higher values" rule.

I agree with Dave Korn, I do not understand your objection.

I would understand an objection of the general kind that you
prefer mnemonic names to numbers, but that ultimately is just
that a preference, nothing more. You seem on the contrary to
be trying to make a substantive argument against the digit
scheme, but I can't understand it.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 10:26 AM, Gabriel Dos Reis wrote:


-W0: no warnings (equivalent to -w)
-W1: default
-W2: equivalent to the current -Wall
-W3: equivalent to the current -Wall -Wextra


  I like this suggestion a lot.


Me too!

I also like short switches, but gcc mostly favors long
hard-to-type not-necessarily-easy-to-remember switch
names.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 9:30 AM, Andrew Haley wrote:



Sorry for the confusion: I intended to write


I would also suggest that your competent programmer would know what
they don't know; when reading code they'd look it up, when writing
code they'd insert parentheses for clarity.


Using two different definitions of "competent programmer" without
clarification makes me an incompetent writer, I suppose.  :-)

Andrew.


The correct thing to write definitely does NOT depend on the
competence or otherwise of the writer. If putting in
parentheses adds to clarity, then everyone should do it
since you are writing code for other people to read,
not yourself.





Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 6:44 AM, Andrew Haley wrote:


I would also suggest that a competent programmer would know what they
don't know; when reading code they'd look it up, when writing code
they'd insert parentheses for clarity.


Yes, of course I 100% agree with that. But then by your definition
code that does not have the "parentheses for clarity" is written by
incompetent programmers, and it seems reasonable to have a warning
that warns them of this incompetence :-)


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 5:55 AM, Miles Bader wrote:


... and it's quite possible that such bugs resulting from adding
parentheses means that the programmer "fixing" the code didn't
actually know the right precedence!


or that the layout (which is what in practice we should rely on
to make things clear with or without the parentheses) was sloppy
or plain incorrect.


I think the relative precedence of * and + can be safely termed "very
well known", but in the case of && and ||, it's not so clear...


indeed


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 4:55 AM, Fabien Chêne wrote:


I've got a radically different experience here, real bugs were
introduced while trying to remove this warning, and as far as I can
tell, I've never found any bugs involving precedence of && and || --
in the code I'm working on -- whose precedence is really well known
by everyone.


You simply can't make a claim on behalf of everyone like this, and it's
very easy to prove you wrong: I personally know many competent
programmers who do NOT know this rule.


In real life, things are not as simple as (a && b) || (c && d); some
checks usually lie over more than five lines.  This warning applied to
such checks is really a pain to remove.


a) complex conditionals over five lines are a bit of a menace
anyway, but ones that rely on knowing this precedence rule are
a true menace if you ask me.

b) it should be trivial to remove this warning, as it is a simple
automatic refactoring that should be easily done with a tool (most
certainly the automatic refactoring available in GPS for GNAT would
take care of this, if it needed to, which it does not, since in Ada
parentheses are required in such cases; the designers of Ada most
certainly disagreed with the view that everyone knows this precedence
rule).


We should definitely have an option to remove this very warning,
without getting rid of the whole set of useful warnings embedded in
-Wparentheses.


Yes, that seems a perfectly reasonable proposition. In GNAT there is
a very general mechanism to suppress any specific warning (pragma
Warnings (Off, string), where string matches the text of the message
you want to suppress), as well as a long list of specific warning
switches, similar to what we have in GNU C.






Re: RFC: -Wall by default

2012-04-11 Thread Robert Dewar

O

This one is an interesting case, since there are strong arguments on
both sides.

I enabled the C++ warning about the precedence of && and || (it's been
in C for many years).  It found real bugs in real code, bugs that had
existed for years.


I think for ordinary programmers, the fact that AND binds more tightly
than OR is not well known. After all it makes no intrinsic sense (the
connection via boolean logic with * and + is obscure to say the least).

I am in favor of enabling this warning.

P.S. I like Ada's viewpoint here of requiring parenthesization in
this case.


Ian




Re: RFC: -Wall by default

2012-04-09 Thread Robert Dewar

On 4/9/2012 1:36 PM, Jonathan Wakely wrote:


Maybe -Wstandard isn't the best name though, as "standard" usually
means something quite specific for compilers, and the warning switch
wouldn't have anything to do with standards conformance.


-Wdefault

might be better


Re: RFC: -Wall by default

2012-04-09 Thread Robert Dewar

On 4/9/2012 1:29 PM, Eric Botcazou wrote:

That would be my preferred solution -- by far.  But, my understanding
is that that would provoke a riot so I am willing to compromise by
introducing a new warning switch (even if I dislike that thought.)
Hopefully, as it is going to be the default, most people would not have
to learn yet another GCC switch.


Why introduce a new switch then?  Just select a few -W switches and
enable them by default, keeping in mind that -w will disable them in
any case.


I think the idea is just to have an easy way to describe the relevant
set of warnings, and a specific way (-Wno-standard) to go back to the
status quo.



Re: RFC: -Wall by default

2012-04-09 Thread Robert Dewar

On 4/9/2012 1:29 PM, Gabriel Dos Reis wrote:


We are in agreement.  I was just explaining to Gerald that his proposal
would have been my first choice, but I am compromising by moving to
your suggestion.  My complaint is the introduction of a new switch
just to accommodate warnings that should not have been in -Wall.  But,
I can live with that.


Well, if the set of options is chosen right, -Wstandard is not a switch
that will need to be used much, and equally -Wno-standard will not
often be used; so yes, it is an extra switch, but not one that has to
be remembered.








Gerald







Re: RFC: -Wall by default

2012-04-09 Thread Robert Dewar

On 4/9/2012 1:08 PM, Gabriel Dos Reis wrote:

On Mon, Apr 9, 2012 at 11:29 AM, Gerald Pfeifer  wrote:

On Sun, 8 Apr 2012, Robert Dewar wrote:

Do you really want me to file hundreds of bug reports that are for
cases of uninitialized variables well known to everyone, and well
understood by everyone, and not easy to fix (or would have been
fixed long ago)?


Perhaps we should move this class of warning from -Wall to -Wextra?

(I'd prefer that over introducing yet another option -Wstandard.)


That would be my preferred solution -- by far.  But, my understanding
is that that would provoke a riot so I am willing to compromise by introducing
a new warning switch (even if I dislike that thought.)
Hopefully, as it is going to be the default, most people would not have
to learn yet another GCC switch.


I would not like to see -Wall lose warnings that it has now, and I
think others would find that a problem.  -Wextra may be too much for
that same group of people.

We have certainly found it useful to have three general categories of
warnings in GNAT

a) the warnings that are on by default
b) the warnings that are turned on by -gnatwa (similar to -Wall)
c) all warnings (turned on by -gnatw.e)





Gerald




Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 4:59 PM, Gabriel Dos Reis wrote:


no, -Wstandard wasn't in my original proposal.  It is the name suggested
by Miles for the list I gave Arnaud upon request.


I know that, I can read :-)

I am just saying I think this issue still needs discussion (and you
were complaining about continuing "arguing", to me btw discussion
is American for argument :-))


Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 4:26 PM, Gabriel Dos Reis wrote:

On Sun, Apr 8, 2012 at 3:13 PM, Robert Dewar  wrote:

On 4/8/2012 4:02 PM, Jonathan Wakely wrote:



But I'd be just as happy with a -Wstandard (by any name) enabled by
default as I would be with -Wall on by default. Only enabling warnings
with very little chance of false positives would avoid most of the
negative consequences.



Yes, I think that is the case! That's certainly the philosophy we
follow in GNAT.


and I think that is all this is about.  I am puzzled we are still arguing...


We are discussing. And note that the idea of -Wstandard was certainly
not in your original proposal (note the [by now confusing] subject
of this thread!)


Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 4:25 PM, Gabriel Dos Reis wrote:

On Sun, Apr 8, 2012 at 2:54 PM, Robert Dewar  wrote:

On 4/8/2012 3:37 PM, Jonathan Wakely wrote:


Again, that also applies when people use -Wall today: a false positive
is unwanted even if you use -Wall, and those false positives are bugs
and so having them in bugzilla is good.



Do you really want me to file hundreds of bug reports that are for
cases of uninitialized variables well known to everyone,


Yes, unless they are duplicates.


I think you know these *ARE* duplicates because everyone using
-Wall with gcc encounters them frequently!



  and well
understood by everyone, and not easy to fix (or would have been
fixed long ago)?




Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 4:23 PM, Gabriel Dos Reis wrote:


I think I agree with this.  I suspect the only difference might be that
I do not believe the fix is necessarily to turn them off.


Well there are three possibilities:

a) fix the false positives, at the possible expense of introducing
new false negatives, but most of these warnings are very far from
sound anyway (they do not guarantee that code not raising the warning
is free from the problem involved).

b) remove from -Wstandard

c) leave in -Wstandard and live with the false positives

I am saying I prefer these alternatives in the order given above.
I suspect you agree with this ordering?

I use -Wstandard here just as a label for whatever gets turned
on by default if a change is made. Whether the new switch with
this name is introduced is an orthogonal issue.



  (certainly not an attitude that is
taken with -Wall, if I am wrong, I have hundreds of bugs to
report :-)) Yes, occasionally you get a case that you end up
considering SO obscure that you violate this rule, but it is
rare.


-Wall, despite the name, does not turn on all warnings.


Yes, I know, but what's that got to do with the comment above?



Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 4:02 PM, Jonathan Wakely wrote:

No, because those are already in bugzilla, and there's a whole wiki
page about improving that particular warning.


Yes, I know, and that page is to me good justification for NOT including
this warning in the set that is on by default.


But I'd be just as happy with a -Wstandard (by any name) enabled by
default as I would be with -Wall on by default. Only enabling warnings
with very little chance of false positives would avoid most of the
negative consequences.


Yes, I think that is the case! That's certainly the philosophy we
follow in GNAT.

One debatable issue is the following kind of warnings:


 1. procedure k is
 2.x : integer;
   |
>>> warning: variable "x" is assigned but never read

 3. begin
 4.x := 2;
   |
>>> warning: useless assignment to "x", value never referenced

 5. end;


These (not on by default in GNAT by the way) are examples of warnings
that most certainly are NOT false positives, but they are examples of
warnings about perfectly valid code.

That's quite different from a warning like:


 1. function l (m : integer) return Boolean is
 2. begin
 3.if m > 10 then
   |
>>> warning: "return" statement missing following this statement
>>> warning: Program_Error may be raised at run time

 4.   return False;
 5.end if;
 6. end;


Where you definitely have a real bug in the code, and the code is
not in any reasonable sense valid (yes, the RM does not make this
code illegal, but that's just because it would be too much effort).

An interesting third category is:


 1. procedure Norm is
 2. begin
 3.pragma Dummy_Body;
  |
>>> warning: unrecognized pragma "Dummy_Body"

 4.null;
 5. end;


Here the standard mandates ignoring unrecognized pragmas, so the
compiler is doing the right thing, and in one sense the above is
a false positive, since there is nothing wrong. However, in this
case we have the following (highly peculiar) statement in the RM


13  The implementation shall give a warning message for an unrecognized pragma
name.


(Why highly peculiar?  Because in a formal definition of this
kind the notion of "warning message" is totally undefined and
pretty much undefinable.)



Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 3:37 PM, Jonathan Wakely wrote:


Again, that also applies when people use -Wall today: a false positive
is unwanted even if you use -Wall, and those false positives are bugs
and so having them in bugzilla is good.


Do you really want me to file hundreds of bug reports that are for
cases of uninitialized variables well known to everyone, and well
understood by everyone, and not easy to fix (or would have been
fixed long ago)?


Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 3:33 PM, Gabriel Dos Reis wrote:

On Sun, Apr 8, 2012 at 1:51 PM, Robert Dewar  wrote:

On 4/8/2012 1:56 PM, Jonathan Wakely wrote:


  The people who don't want -Wall (or
-Wstandard) enabled are likely to be the ones who know how to use
-Wno-all or whatever to get what they want.



I see no evidence that supports that guess. On the contrary, I
would guess that if -Wall is set by default,


so your evidence to the contrary is a guess ;-p


Yes, of course, though it is based to some extent on our experience
with warnings that are enabled by default in GNAT: we often
get newbie questions that complain about these warnings.  It is
somewhat inevitable that if you have people who do not know the
language, they will find some quite legitimate warnings puzzling,
especially if they are false positives (we really try VERY hard
to avoid false positives in the default set of warnings).  To me
the trouble with -Wall is that it generates lots of false positives.

Now a -Wstandard that is crafted with a different design goal than
-Wall (avoid false positives at all costs) would be quite a different
matter, and that is why I have supported this approach if anything
at all is done.

Basically in GNAT we regard it as a bug to work on if a default
warning is a false positive (certainly not an attitude that is
taken with -Wall, if I am wrong, I have hundreds of bugs to
report :-)) Yes, occasionally you get a case that you end up
considering SO obscure that you violate this rule, but it is
rare.


Re: RFC: -Wall by default

2012-04-08 Thread Robert Dewar

On 4/8/2012 1:56 PM, Jonathan Wakely wrote:

 The people who don't want -Wall (or
-Wstandard) enabled are likely to be the ones who know how to use
-Wno-all or whatever to get what they want.


I see no evidence that supports that guess. On the contrary, I
would guess that if -Wall is set by default, you will get lots
of (probably invalid) complaints of the sort "why is gcc complaining
about perfectly correct code", and of course in some cases those will
be false positives, so they will be valid complaints.



Re: GNU Tools Cauldron 2012 - Hotels and registered presentations

2012-04-08 Thread Robert Dewar

Hello Diego,

I am all set with my plans for Prague, but I have to
leave on a flight at 2pm on Wednesday. I hope my
presentation can be scheduled consistently with these
travel plans?

Robert Dewar


Re: Switch statement case range

2012-04-08 Thread Robert Dewar

On 4/8/2012 11:59 AM, Rick Hodgin wrote:

What are the possibilities of adding a GCC extension to allow:

switch (foo) {
case 1:
case 2:
case 3 to 8:
case 9:
default:
}

in C/C++ case statements?

Best regards,
Rick C. Hodgin


I think there is very little enthusiasm these days for adding
non-standard extensions of this type.


Re: RFC: -Wall by default

2012-04-07 Thread Robert Dewar

On 4/7/2012 6:57 PM, Miles Bader wrote:

Dave Korn  writes:

   IMHO we should move the -Wunused ones into -Wextra if we're going to turn on
-Wall by default.  The rest seem pretty reasonable defaults to me.


How about instead adding new "-Wstandard", which will be on by default,
and keeping -Wall / -Wextra the same (so as to not _remove_ warnings for
people that currently use -Wall).


I think that's a good idea, then if -Wstandard generates complaints that
can be fixed by rethinking inclusion of some options, that's easily
fixed.


-miles





Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar

On 4/5/2012 4:24 PM, Russ Allbery wrote:

Gabriel Dos Reis  writes:


If it is the non-expert that would be caught in code so non-obvious that
-Wuninitialized would trip into false positives, then it is highly
likely that the code might in fact contain an error.


I wish this were the case, but alas I continue to see fairly trivial false
positives from -Wuninitialized.  Usually cases where the initialization
and the use are both protected by equivalent conditionals at different
places in the function.


Yes, and often it is not so easy for the compiler to see that the
conditionals are always the same.


Personally, as a matter of *style*, I eliminate such cases either by
initializing the variable or restructuring the function.  But this is very
much a question of style, not of correctness.


Indeed, and for me, when you are forced to do an initialization like
this, it is mandatory to comment why you are initializing it, otherwise
it obscures the code ("why is this being initialized, where is that
value used?") and that ends up junky IMO. The Ada front end 
unfortunately has quite a few such commented junk initializations.






Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar

On 4/5/2012 8:59 AM, Michael Veksler wrote:


They use an IDE, which is either Code::Blocks or Dev-C++, which run on
Windows, but these IDEs don't turn -Wall on by default.  As for the
advice to use -Wall, there is so much to advise and so little time,
and the sheer mass of information confuses students.  I'd have GCC
emit more warnings by default rather than explain what -Wall is (and
have half of them forget that by the time they get to the computer).


I would focus on the IDE here; it is an obvious defect for an IDE not
to be able to control the default switches IMO.



Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar



It's on my large TODO list, somewhere at the bottom, to propose
to make -O1 stop after early optimizations and drop right to
expansion from there.  That would turn optimization expectations
upside-down of course, but early optimizations should be mostly
reducing code size (and thus increase compile speed) with
no fancy optimization that inhibit debugging (SRA, IPA-SRA,
switch conversion and function splitting are an exception,
but all but SRA are not enabled at -O1).  So we'd move to
compile-time and debuggability for -O1 (I'd expect compile time
that should be better or at least not a lot slower than -O0).


I am all in favor of such work, but I would approach it in two
steps. First make it a separate -O level, then depending on
how successful this is in practice, propose making -O1 mean
this new level.

If you do both steps at once, you get opposition on the basis
of change-is-bad, rather than to the substance of the new
level of optimization.


Richard.




Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar

On 4/5/2012 8:28 AM, Michael Veksler wrote:


It is not that they can't remember.  I am a TA at a moderately basic
programming course, and students submit home assignments with horrible
errors.  These errors, such as free(*str) or *str=malloc(n), are
easily caught by -Wall.  I have to remember to advise them to use
-Wall and to fix all the warnings, which I sometimes forget to do.


Wouldn't it be better in a "moderately basic programming course" to
provide standard canned scripts that set things up nicely for students
including the switches they need?  Indeed, for such a course, wouldn't
it be better to use an appropriate IDE, so they could concentrate on
the task at hand rather than fiddling with commands?  Yes, I think it
is very
important for students to learn what is going on, but you can't do
everything at once in a basic course.

And even in the context you give, surely it is not too much to expect
a TA to remember important advice like this?


Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar

On 4/5/2012 8:06 AM, Vincent Lefevre wrote:

On 2012-04-05 06:26:43 -0400, Robert Dewar wrote:

Well a lot of users have been burned by using optimization
options, either becausae of compiler bugs, or because of bugs
in their own code triggered by optimization. So the requirement
of not using any optimization options is not that uncommon.


But no-optimizations (-O0) should not necessarily be the default
for these reasons.


I think it is a problem that even at -O1 the debugger is
seriously limited, especially for an inexperienced user.

What is missing to me is a reasonable cleanup of the code that
would remove some of the junk at -O0 but not impact debugging.
In fact a reasonable criterion would be all the optimization
that is cheap to do and does not affect debugging.

Then I would make THAT the default (or simply redefine -O0
to be this level; I see no real advantage in -O0 as it is now).




Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar

On 4/5/2012 2:39 AM, Arnaud Charlet wrote:

Can someone summarize what the most useful warnings people are expecting
that -Wall would bring?

I suspect not all of -Wall would actually be welcome/a good idea by default,
and we might be looking for a better compromise where most warnings are
enabled by default, but not "all".

In particular, I'm not convinced that -Wuninitialized should be enabled
by default, precisely because this warning does generate a good bunch
of false positives.

So to me it's not black or white, and considering -Wall as a single entity
is not the right way to address these user complaints IMO.


This seems a good direction for the discussion to me, the issue
in practice revolves around

a) false positives

b) warnings that are not false positives, but that are
incomprehensible to nonexpert users

A set of warnings that for the most part avoids these two
problems is precisely what can be reasonably on by default.

There is a third category

c) warnings about things that are not errors but seem like
sloppy or unnecessary code (e.g. unused variables).

Category c) is trickier.

Generally the philosophy in GNAT is to enable by default
all warnings that avoid a) b) or c) and correspond to
definite likely errors.


Arno




Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar

On 4/5/2012 12:23 AM, Gabriel Dos Reis wrote:


-Wall is roughly equivalent to -gnatwa in the GNAT front end,
and this is definitely NOT on by default. If you run GNAT in
default mode, there are virtually no false positives, since
the only warnings on by default are the kind of warnings that
say "if you execute this statement, your program will go wrong"


like calling a function with non-void return type whose definition
fails to return a value.


Right, BTW in Ada a failure to provide a return value is detected at
run time and raises Program_Error. This is a clear case where a
warning is always desirable (basically this would be an error in
the language, except that to make it an error would require going
into the whole issue of defining possible threads of control, and
that's too much formal effort for too little gain at the level of
the language standard). So in GNAT, this is a warning that is on
by default. Like all warnings it can be suppressed, either by
suppressing all warnings (-gnatws) or by providing a Warnings
Off pragma that suppresses this particular warning.

Note that the ONE and only case where this warning is a false
positive is the ACATS test that makes sure you raise an
exception (in practice we suppress all warnings for ACATS
tests anyway, since they are deliberately full of dubious
coding practices!)

I wonder if there is a better forum for discussing whether
-Wall should be the default than this one. After all we always
emphasize that this list is for gcc developers, and this
particular issue is one better discussed by gcc users. Yes
I know there are gcc users on this list too (I am one!) but
still we don't exactly get representative user input on this
list!


Re: RFC: -Wall by default

2012-04-05 Thread Robert Dewar

On 4/5/2012 12:17 AM, Miles Bader wrote:

Robert Dewar  writes:

We have run into people running benchmarks where they were
specifically prohibited from using other than the default
options, and gcc fared badly in such comparisons.


Yeah, there was the silly "benchmark" at phoronix where they came to
the conclusion that tcc was a better compiler than gcc because it
generated faster programs when run without any options...

[*] Phoronix is well known for completely clueless benchmarking
practices, but ... unfortunately some people actually seem to pay
attention to what they say.


Well a lot of users have been burned by using optimization
options, either because of compiler bugs, or because of bugs
in their own code triggered by optimization. So the requirement
of not using any optimization options is not that uncommon.


-miles





Re: RFC: -Wall by default

2012-04-04 Thread Robert Dewar

On 4/4/2012 6:42 PM, Gabriel Dos Reis wrote:

On Wed, Apr 4, 2012 at 4:21 PM, Robert Dewar  wrote:

On 4/4/2012 2:34 PM, Dominique Dhumieres wrote:


IMO only the warnings in C that are likely errors should be the default as
it is in gfortran (don't ask for examples of such warnings for C, I am
quasi-illiterate).



That's also the default philosophy in GNAT,


In which case you should NOT be objecting to the proposal :-)


-Wall is roughly equivalent to -gnatwa in the GNAT front end,
and this is definitely NOT on by default. If you run GNAT in
default mode, there are virtually no false positives, since
the only warnings on by default are the kind of warnings that
say "if you execute this statement, your program will go wrong"



  there never should be false
positives at all in the default mode IMO (well hardly ever :-)




Dominique

PS -Wall is a simple enough option to be remembered by all users who need
it (if they don't use it, they don't want it).







Re: RFC: -Wall by default

2012-04-04 Thread Robert Dewar

On 4/4/2012 7:03 PM, Gabriel Dos Reis wrote:


Again, this proposal does not come out of a whim.


But it does seem to come out of a few anecdotal requests
for a change, and you always have to be careful in considering
such input, because of course people who agree with the status
quo do not write in complaining. I see no evidence that a
majority of users are in favor of this change.

By the way, to me a much more significant issue is the default
optimization level. Gcc code quality is plain horrible at -O0,
often MUCH worse than competitive compilers with default
optimization (most compilers do much more than -O0 by default).

We have run into people running benchmarks where they were
specifically prohibited from using other than the default
options, and gcc fared badly in such comparisons.

So we have wondered from time to time whether -O1 should
be the default, but the debugger is not well behaved at
-O1, and it's too much of a change I am afraid.


Re: RFC: -Wall by default

2012-04-04 Thread Robert Dewar

On 4/4/2012 2:34 PM, Dominique Dhumieres wrote:


IMO only the warnings in C that are likely errors should be the default as
it is in gfortran (don't ask for examples of such warnings for C, I am
quasi-illiterate).


That's also the default philosophy in GNAT, there never should be false
positives at all in the default mode IMO (well hardly ever :-)



Dominique

PS -Wall is a simple enough option to be remembered by all users who need
it (if they don't use it, they don't want it).




Re: RFC: -Wall by default

2012-04-04 Thread Robert Dewar

On 4/4/2012 2:02 PM, Gabriel Dos Reis wrote:


The interesting thing about -Wall is that it is pretty safe, for the most part,
in terms of false positives.


And, for the record, I find lots of false positives; the front end of
GNAT has a lot of junk initializations marked "keep back end quiet".


-- Gaby




Re: RFC: -Wall by default

2012-04-04 Thread Robert Dewar



Sometimes, we have to be brave to challenge tradition.  The world around
us is moving and we definitely want GCC to remain competitive.  It is
hard to define what "it's told" means without tripping over.

The interesting thing about -Wall is that it is pretty safe, for the most part,
in terms of false positives.


Well I find it too big a change to make, if people want warnings, it 
really is not that hard to ask for them!


-- Gaby




Re: GCC 5 & modularity

2012-03-21 Thread Robert Dewar

On 3/21/2012 11:35 AM, Basile Starynkevitch wrote:


I would be happy to help, but please understand that my understanding of GCC
is restricted to gengtype, ggc, and some parts of the middle-end. I know
nothing about the vast rest of the GCC compiler.


Perhaps suggestions about improvements in the modularity of
gcc would be better left up to those who DO have a global
understanding of the existing structure of gcc.


Re: GCC 5 & modularity

2012-03-21 Thread Robert Dewar

Very well said.  Discussing modules also makes no sense.  Figure out
the present state.


these kinds of meta discussions are very rarely of value; this
one is no exception IMO


Richard.


--
  P.





Re: GCC 5 & modularity

2012-03-18 Thread Robert Dewar

On 3/18/2012 12:56 PM, Basile Starynkevitch wrote:


* you can name and count the modules of a software


Well in a hierarchical system this is not so clear, since modules may
exist at different levels of abstraction. For instance in a compiler,
at one level of abstraction the front end is a module; at another
level of abstraction, the semantic analysis for chapter 7 constructs
in the RM could be considered a module.


* given a source line, or function, you can decide at a glance to which one 
module it
belongs


This seems totally bogus, if you have something like

n++;

you can't tell what module that belongs to, and if your idea is that
all variable names should be long enough to make the module they
belong to immediately obvious, I would regard that as plain horrible
and highly undesirable.


* the interface between modules is well documented


Sure that's apple pie and motherhood, so it says nothing


I'm sorry to say that, but current GCC (ie 4.7 or today's trunk) is *not* 
modular.


Modularity is not a binary quality, so this is not a helpful statement


Don't
feel injured by that fact. Indeed, GCC is a little less messy than it was a few 
years
ago, but being less messy is not being modular IMHO. And something cannot be
"half-modular".


Absolutely it can, parts of the system can be arranged nicely into 
modules, and parts of the system may not be.



So I would be delighted if GCC was made of modules. But I have no idea of how 
that can be
done.


Then your comments are not at all helpful, since they just
reflect vague goals which everyone agrees on.


I do believe that identifiers in GCC should be organized in such a way that the 
module
they belong to is visible at once. I think that a prefix (à la GTK) or a C++ 
namespace
should be great. In particular, this means that most GCC identifiers should 
change
(which means that any such evolution is not syntactically gradual; it has to be 
made by
huge, but "easy", patches).


I am of the opinion that this would severely damage readability; it's
the same sort of thing that leads people in Ada to avoid use clauses
completely.

Since in any decent IDE it's just a single click to find out where a
variable is declared, it's just noise to include this information in
every variable name. Of course global variables with wide visibility
should have appropriate names, but the idea that all identifiers
should be prefixed is horrible IMO



Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Robert Dewar

On 2/4/2012 9:57 AM, Andreas Schwab wrote:

How can the sine function know which of the millions of numbers
represented by 0x1.0f0cf064dd591p+73 are meant?  Applying the sine to
this interval covers the whole result domain of the function.


The idea that an IEEE number necessarily represents an interval
is peculiar. IEEE represents certain numbers exactly. There is
no concept of representation of intervals, unless you care to
somehow superimpose it. IEEE arithmetic is not about producing
some vague ill-defined approximation of real arithmetic, even if
programmers may think of it that way, it is about implementing
completely well defined operations in a specified manner.

Yes, the programmer may regard fpt as some vague approximation
of real arithmetic, but the programmer may also be expecting
exact IEEE results, and not think of there being any vague
approximation, just well defined rounding.

The sine function gets a number as input, and it is supposed
to produce the sine of that number, end of story, where do you
find it written that the sine function is somehow supposed to
treat the input as an interval?

In IEEE arithmetic, the result of all operations is well
defined and gives exact results (they may not correspond
to the same results as mathematical real arithmetic, but
IEEE does not implement mathematical real arithmetic, it
implements well defined IEEE operations, which precisely
define the output for a given operation given the inputs.)

When you write a program in an environment which provides
IEEE semantics, all operations are exact, in the sense that
they result in well defined results. There is no sense in
which, e.g., the addition operator says "each of my operands
represents a range of possible numbers, therefore the output
can be anywhere from X to Y". Instead, it takes the exact
representations passed as inputs, and produces a unique,
well defined rounded result.

The sine function should attempt to do the same thing, take
an exact representation passed as input, and return the
correctly rounded well-defined result.

This is an ideal of course, in practice it may be too much
work (and not very useful) for the sine function to fulfill
this expectation for the entire input range, as the example
which started this thread shows, but that should be the goal.



Andreas.





Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Robert Dewar

On 2/4/2012 9:09 AM, Andreas Schwab wrote:

Robert Dewar  writes:


But if you write a literal that can be represented exactly, then it is
perfectly reasonable to expect trig functions to give the proper
result, which is unambiguous in this case.


How do you know that the number is exact?


Sorry, what are you asking? Are you asking how do I know 1e22 is
exact? That's an odd question, anyone who knows the IEEE format
knows this.

Are you asking how a programmer in general knows this? Well
that's a question. As Vincent points out, if we write

  x = some-fpt-constant;

it's hard to distinguish between

a) I know this can be represented exactly and I expect
the compiler to represent it exactly.

b) I know it cannot be represented exactly and I expect
the compiler to choose the nearest machine number using
round to nearest.

c) I don't have the slightest idea if this can be
represented exactly or not, but I expect the compiler
to come up with some close approximation.

If you don't have additional information, you really
can't distinguish these three cases. In the case where
we apply cos or sin to 1e22, we don't know if we have
case a) or case c). If we have case a), then it is
reasonable to expect sin (1e22) to be exactly the
right value for the sin function applied to this
exact number.

If you have case c), then taking the sin is a bit
senseless, since the level of approximation implied
by such a large number means that the sin function
is essentially undefined in practice, since as someone
pointed out, it could reasonably range over a huge range
of values.


Andreas.





Re: weird optimization in sin+cos, x86 backend

2012-02-04 Thread Robert Dewar

On 2/4/2012 7:00 AM, Andreas Schwab wrote:

Vincent Lefevre  writes:


Wrong. 53 bits of precision. And 10^22 is the last power of 10
exactly representable in double precision (FYI, this example has
been chosen because of this property).


But it is indistinguishable from 10^22+pi.  So both -0.8522008497671888
and 0.8522008497671888 are correct results, or anything inbetween.


I don't see that. 10**22 is a well defined, exactly represented value,
whose sin/cos values are exactly well defined; the fact that
10**22+pi cannot be represented exactly does not change that.

I agree that if you write a literal for a value which cannot be
represented exactly, there may be some ambiguity, as Vincent
suggests (you can't tell if it is just a request for a close
value, or very specifically a request for the correctly rounded
machine number). But if you write a literal that can be represented
exactly, then it is perfectly reasonable to expect trig functions
to give the proper result, which is unambiguous in this case.


Andreas.





Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar

On 2/3/2012 4:32 PM, Vincent Lefevre wrote:


Yes, I do! The floating-point representation of this number



This fact is not even necessarily correct because you don't know the
intent of the programmer. In the program,



   double a = 4.47460300787e+182;

could mean two things:

1. A number which approximates the decimal number 4.47460300787e+182,
in which case I agree with you. Note that even though it is an
approximation, the approximated value a is the same on two different
IEEE 754 platforms, so that one can expect that sin(a) gives two
values that are close to each other on these two different platforms.

2. A number exactly equal to the rounding (to nearest) of the decimal
number 4.47460300787e+182 in double precision. Imagine that you have
a binary64 (double precision) number, convert it to decimal with
sufficient precision in order to be able to convert it back to the
original binary64 number. This decimal string could have been the
result of such a conversion. IEEE 754 has been designed to be able
to do that. This choice has also been followed by some XML spec on
schemas, i.e. if you write 4.47460300787e+182, this really means a
binary64 number, not the decimal number 4.47460300787e+182 (even
though an hex format would be less ambiguous without context, the
decimal format also allows the human to have an idea about the
number).


Sure, in general that might be possible, but it seemed unlikely in
this case, and indeed what was really going on was essentially
random numbers chosen by a test program generating automatic
tests.

Really that's in neither category, though I suppose you could argue
that it is closer to 2, i.e. that the intent of the automatically
generated test program is to get (and test) this rounding.

But in any case, it seems better for him to apply his suggestion
of sticking within the pi/4 makes a difference range :-)


No, thanks to correct rounding (provided by CRlibm), all machines with
the same inputs were giving the same results, even though the results
were meaningless.


All machines that implement IEEE arithmetic :-) As we know only too well
from the universe of machines on which we implement GNAT, this is not
all machines :-)






Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar

On 2/3/2012 1:12 PM, Konstantin Vladimirov wrote:

Hi,

I agree that this case has no practical value. It was autogenerated
among thousands of other tests and showed really strange results, so
I decided to ask. I thought this value fits the double precision range
and, according to the C standard, all double-precision arithmetic must
be available for it.


Yes, indeed and it was available. There was nothing "really strange"
about the results. The only thing strange was your expectations here :-)


Thanks everybody for the explanation. I will constrain trig function
arguments according to the "x is separable from x+pi/4" rule. It seems
everything works inside this range.


Yes, it is a good idea to only generate useful tests when you
are autogenerating, otherwise you will get garbage in garbage out :-)


---
With best regards, Konstantin

On Fri, Feb 3, 2012 at 7:13 PM, Robert Dewar  wrote:

On 2/3/2012 10:01 AM, Michael Matz wrote:


No normal math library supports such an extreme range, even basic
identities (like cos^2+sin^2=1) aren't retained with such inputs.



I agree: the program is complete nonsense. It would be useful to know
what the intent was.




Ciao,
Michael.







Re: weird optimization in sin+cos, x86 backend

2012-02-03 Thread Robert Dewar

On 2/3/2012 10:55 AM, Vincent Lefevre wrote:

On 2012-02-03 10:33:58 -0500, Robert Dewar wrote:

On 2/3/2012 10:28 AM, Vincent Lefevre wrote:

If the user requested such a computation, there should at least be
some intent. Unless an option like -ffast-math is given, the result
should be accurate.


What is the basis for that claim? To me it seems useless to expect
anything from such absurd arguments. Can you cite a requirement to
the contrary (other than your (to me) unrealistic expectations)?
In particular, such large numbers are of course represented
imprecisely.


Actually you don't know.


Yes, I do! The floating-point representation of this number
does NOT represent the number you wrote, but a slightly
different number, whose cos/sin values will be wildly
different from the cos/sin values of the number you wrote,
so what's the point of trying to get that value exact, when
it is not the value you are looking for anyway.


Of course, the value probably comes from
somewhere, where it is imprecise. But there are codes that assume
that their input values should be regarded as exact or they will
no longer work. Reasons can be that algorithms are designed in such
a way and/or that consistency is important. A particular field is
computational geometry. For instance, you have a point and a line
given by their coordinates, which are in general imprecise.
Nevertheless, one generally wants to consider that the point is
always seen as being on one fixed side of the line (or exactly on
the line). If some parts of the program, because they do not compute
with high precision enough, behave as if the point were on some side
and other parts behave as if the point were on the other side, this
can yield important problems.


But if you write arbitrary floating-point constants, then of
course they are not represented exactly in general.


Another property that one may want is "correct rounding", mainly
for reproducibility. For instance, this was needed by the LHC@home
project of CERN (to check results performed on different machines,
IIRC), even though the results were complete chaos.


Well of course you will get different results for this on different
machines, regardless of "correct rounding", whatever that means!





