RE: [boost] Re: Date iterators in Boost Date-Time

2003-08-18 Thread Guillaume Melquiond
In reply to "Paul A. Bristow" <[EMAIL PROTECTED]>:

> But as Michael Caine said "Not a lot of people know that" - so I trust
> you will explain what it does too for the benefit of us mere non-mathematical
> mortals!
> 
> Paul

I'm not sure I understand. Do you want me to explain what a convex hull is, or
what the function of the date-time library is supposed to do? I suppose it's
the former, since the latter is what started this subthread.

A connected set is a set in which each pair of points is joined by a path
itself included in the set (every point is reachable from every other point). A
convex set is a connected set whose paths can be taken to be linear (every point
can be reached from every other point by following a segment). The convex hull
of a set is the smallest convex superset of it. For example, given three points
in the plane, the convex hull is the filled triangle defined by these points.

In a 1-dimensional space, connected and convex sets coincide: they are segments
(or half-lines, lines, or the empty set). The dates manipulated by the date-time
library live in a 1-dimensional space (the real line) and periods are segments
(non-empty bounded convex sets). So when you have two periods, the smallest
period enclosing both of them is also their convex hull. Hence the name I
suggested.
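
In code, the hull of two periods in one dimension reduces to taking the extreme
endpoints. A minimal sketch, with a hypothetical period type rather than the
actual date-time interface:

#include <algorithm>

// Hypothetical 1-D period [begin, end]; the real date-time types differ.
struct period { int begin, end; };

// Convex hull of two periods: the smallest period enclosing both.
period hull(period a, period b) {
    period r = { std::min(a.begin, b.begin), std::max(a.end, b.end) };
    return r;
}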

I hope it makes sense.

Regards,

Guillaume


Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Beman Dawes wrote:

> So where does that leave us? There must be some reason Gennadiy's code is
> producing results one or two epsilons greater than expected.

It's just because the test cases are wrong. For example, at line 157, there
is:

tmp = 11; tmp /= 10;
BOOST_CHECK_CLOSE_SHOULD_PASS_N( (tmp*tmp-tmp), 11./100, 1+3 );

The test will pass only if the result is at most 2 ulps away. Is it
possible?

Let t be tmp and e1, e2 be rounding error terms (each smaller than ulp/2).

rounded(rounded(t*t) - t)
  = ((t*t) * (1+e1) - t) * (1+e2)
  = t*t - t + t*t * (e1+e2+e1*e2) - t*e2

So how much is the relative error? For example, if e1=ulp/2 and e2=0, the
absolute error reaches t*t*ulp/2 = 0.605*ulp. So the relative error is
0.605*ulp / (11/100) = 5.5*ulp. The result may be 6 ulps away! (and maybe
more)
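
A quick self-contained check of this analysis (my own, not part of the test
suite), measuring the distance in the same "relative error in ulps" sense as
above:

#include <cstdio>
#include <limits>

int main() {
    double tmp = 11; tmp /= 10;     // rounded 1.1
    double lhs = tmp * tmp - tmp;   // two more roundings
    double rhs = 11. / 100;         // rounded 0.11
    // relative error measured in units of machine epsilon
    double ulps = (lhs - rhs) / (rhs * std::numeric_limits<double>::epsilon());
    std::printf("%g ulps apart\n", ulps);
    return 0;
}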

Thinking that rounding errors simply add up is a misunderstanding of
floating-point arithmetic (if it were that easy, interval arithmetic
wouldn't be useful).

Guillaume



Re: [boost] Re: Re: is_nan

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Fernando Cacciola wrote:

> Thanks to Gabriel we may have an is_nan() right now.
> Is there anything else that the interval library uses which might be better
> packed as a compiler-platform specific routine?

All the hardware rounding mode selection stuff. It's equivalent to the
<fenv.h> C header file. In the interval library, it's handled by at least
9 files (all the boost/numeric/interval/detail/*_rounding_control.hpp
headers, for example) and new files may have to be added each time support
for a new compiler-platform combination is needed.

Guillaume



Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Beman Dawes wrote:

> fpt_abs may do a unary "-" operation. I'm assuming, perhaps incorrectly,
> that is essentially subtraction and subject to a possible rounding error.

I don't think there is a system where the opposite of a representable
number would not be a representable number. So there should be no rounding
error when changing signs. At least not with IEEE-754 floating point
numbers.

Moreover, 0-x may lead to results different from -x. So changing sign is
essentially "not" a subtraction :-).

Guillaume

PS: for those who wonder how 0-x can be different from -x, think about
exceptional cases like x=0 or x=NaN.
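
A small illustration of that PS (std::signbit is C99/C++11, so it postdates
this thread, but the behavior it shows is plain IEEE-754):

#include <cmath>
#include <cstdio>

int main() {
    double x = 0.0;
    // 0-(+0.0) is +0.0 in round-to-nearest, but -(+0.0) is -0.0.
    std::printf("%d %d\n", (int)std::signbit(0.0 - x), (int)std::signbit(-x));
    return 0;   // prints "0 1"
}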



Re: [boost] test_fp_comparisons and rounding errors

2003-07-05 Thread Guillaume Melquiond
On Sat, 5 Jul 2003, Beman Dawes wrote:

> test_fp_comparisons has been failing for a long time. The issue has to do
> with how much tolerance to allow for rounding errors.
>
> The close_at_tolerance algorithm calculates tolerance as follows:
>
> n*std::numeric_limits<FPT>::epsilon()/2
>
> where n is the number of possible floating-point rounding errors.
>
> The particular test case that is failing calls the comparison algorithm
> with a rounding error argument of 4. That allows for up to 2 epsilons
> tolerance.
>
> But if you step through the code, the actual error test values computed (d1
> and d2, in the code) is 3 epsilons for float, and 4 epsilons for double and
> long double.
>
> What is happening here? It seems to me that the error checking code itself
> that computes the values to be checked (d1 and d2) introduces one to four
> possible additional rounding errors. (The error checking code always does
> one subtraction, possibly one subtraction in an abs, possibly another
> subtraction in an abs, and possibly one division). That would account for
> the observed behavior.

Sorry, I don't see all the subtractions you are speaking about; I only
see these three floating-point operations in close_at_tolerance:

  FPT diff = detail::fpt_abs( left - right );
  FPT d1   = detail::safe_fpt_division( diff, detail::fpt_abs( right ) );
  FPT d2   = detail::safe_fpt_division( diff, detail::fpt_abs( left ) );

We can suppose the user doesn't want to compare numbers with a relative
error bigger than 1/2 (they would no longer be "close" numbers). Consequently,
the subtraction should not produce any rounding error, thanks to Sterbenz's
lemma (if y/2 <= x <= 2y, then x - y is computed exactly). However, the
divisions will produce rounding errors.

Consequently, the relative errors d1 and d2 are each off by at most ulp/2.

> Is this analysis correct? I know almost nothing about floating-point
> arithmetic, so it is probably flawed. But it looks to me as if the correct
> tolerance formula is  (n+m)*std::numeric_limits<FPT>::epsilon()/2, where m is
> 1,2,3, or 4 depending on the exact logic path through

m should then be 1, unless I am mistaken.

> close_at_tolerance::operator(). It would also be easy to change the code to
> eliminate some of the operations which add additional possible rounding
> errors.

If the number of ulps is a power of 2 (and consequently tolerance is
2**(-k)), this code doesn't produce any rounding error:

  FPT diff = abs(a - b);
  bool c1 = diff <= tolerance * abs(a);
  bool c2 = diff <= tolerance * abs(b);

(if a and b are not too small)

> If the above is all wrong, perhaps someone can offer the correct analysis
> for why the tests fail.
>
> --Beman

There are also some strange things.

For example, what is the meaning of
  BOOST_CHECK_CLOSE_SHOULD_PASS_N
   (1, 1+std::numeric_limits<FPT>::epsilon() / 2, 1); ?
Indeed, 1+epsilon/2 is equal to 1 (by definition of epsilon), so this test
is pointless, isn't it?

Regards,

Guillaume

PS: I find the logging of test_fp_comparisons really ugly. I hope it is
not intentional.



Re: [boost] is_nan

2003-07-04 Thread Guillaume Melquiond
On Fri, 4 Jul 2003, jvd wrote:

> [snip]
>
> > > does not work correctly on some machines.
> >
> > Could you be more specific. On which machines for instance ?
>
> Me, myself personally tested on Intel Celeron 733, OS WinXP.
> Compiler used: gcc 3.2 mingw port for windows.
>
> Also reported not to work on Sun machine although on some other computers
> this works.

Strange, I just tried it on an x86 (linux, cygwin, gcc 2.95, 3.2, 3.3) and
on a sparc (gcc 2.95), and it works perfectly. The test program was:

int main() { double a = 0. / 0.; return a != a; }

> That mean sometimes this work and sometimes not :)
> Depends on processor perhaps, because I've read asm output (I'm just a crap
> when it comes to x86 asm reading but...):
>
> // g++ -save-temps -O0 (to avoid overoptimizations when variable comparison
> to itself is just replaced with some boolean constant)
>
> There we have  is_nan for floats -
>
>  .section .text$_ZN4math4geom6is_nanIfEEbRKT_,"x"
>  .linkonce discard
>  .align 2
> .globl __ZN4math4geom6is_nanIfEEbRKT_
>  .def __ZN4math4geom6is_nanIfEEbRKT_; .scl 2; .type 32; .endef
> __ZN4math4geom6is_nanIfEEbRKT_:
> LFB29:
>   // function prologue
>  pushl %ebp
> LCFI225:
>  movl %esp, %ebp
> LCFI226:
>  movl 8(%ebp), %eax
>  movl 8(%ebp), %edx
>  // operations on stack, sweet
>
>
>  flds (%eax)  //  load the same
>  flds (%edx)  // variable to  2 copr. registers
>  fxch %st(1)
>  fucompp  // comparison (x != x), obviously
>  fnstsw %ax
>  andb $69, %ah
>  xorb $64, %ah
>  setne %al
>  andl $255, %eax  // && std::numeric_limits<float>::has_quiet_NaN
>
>  popl %ebp // function epilogue
> ret
>
>
> Seems like this is not the compiler's fault. Correct me if I'm wrong. And as me

No, the code is correct. When fucom (unordered compare) encounters quiet
NaN arguments, it sets the condition flags C0 (0x01), C2 (0x04) and C3 (0x40)
to 1. The status word is then put in %ax. The condition flags are
extracted ($69 = 0x45). C3 is then inverted. But C0 and C2 are still
non-zero. The function will answer true (according to the Intel instruction
set reference, document 24547107).
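
The same bit logic in C++, for readers who don't want to decode the assembly
(a sketch of what the masked status word means, not code from any library):

// After fucompp + fnstsw %ax, %ah holds C0 (0x01), C2 (0x04), C3 (0x40).
bool x86_not_equal(unsigned ah) {
    // andb $69, %ah keeps C0|C2|C3 (69 == 0x45); xorb $64, %ah inverts C3.
    return ((ah & 0x45) ^ 0x40) != 0;
    // equal:     C3=1, C2=0, C0=0 -> 0x40 ^ 0x40 == 0 -> false
    // unordered: C3=1, C2=1, C0=1 -> 0x45 ^ 0x40 != 0 -> true (NaN case)
}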

Maybe you just found yourself a buggy processor :-)

> uses intervals lib and is_nan f-tion, specifically; I'm really concerned
> about such problems. If a workaround is needed and it's not so simple, perhaps
> it could be separated from the particular library and moved to
> numeric_utility.hpp or something?
>
> Respect,
> Justinas V.D.

Regards,

Guillaume



Re: [boost] is_nan

2003-07-04 Thread Guillaume Melquiond
On 4 Jul 2003, Gabriel Dos Reis wrote:

> "Toon Knapen" <[EMAIL PROTECTED]> writes:
>
> | > seems like this code
> | >
> | > template< typename T >
> | > bool is_nan( const T& v )
> | > {
> | > return std::numeric_limits<T>::has_quiet_NaN && (v != v);
> | > }
> | >
> | > does not work correctly on some machines.
> |
> | Could you be more specific. On which machines for instance ?
>
> If v is a signalling NaN, and you're for example using the SPARC
> architecture for example, you might get a trap.
>
> -- Gaby

Just to avoid any kind of misunderstanding: the interval library doesn't use
signaling NaNs.

Regards,

Guillaume



Re: [boost] Re: is_nan

2003-07-04 Thread Guillaume Melquiond
On Fri, 4 Jul 2003, Fernando Cacciola wrote:

> Gabriel Dos Reis <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> > "jvd" <[EMAIL PROTECTED]> writes:
> >
> > | Dear boosters,
> > |
> > | seems like this code
> > |
> > | template< typename T >
> > | bool is_nan( const T& v )
> > | {
> > | return std::numeric_limits<T>::has_quiet_NaN && (v != v);
> > | }
> > |
> > | does not work correctly on some machines.
> >
> > Yes.  It is an incorrect (unfortunately popular) implementation.
> >
> Right. We should say that more often. It is incorrect however popular.

Yes, it is incorrect for C++. But it's something we can hope to see one
day. For example, in the LIA-1 annex I about C language bindings, it is
written that != is a binding for the IEEE-754 ?<> operator (unordered
compare). In the C9X annex F.8.3 about relational operators, it is written
that the optimization "x != x -> false" is not allowed since "The
statement x != x is true if x is a NaN". And so on.

> Most compilers provide a non standard extension for this purpose.
> For instance, Borland uses _isnan.
> In general, these extensions are found in <float.h>.

In fact, since it is not specified by the C++ standard, isnan comes from
the C headers and is supposed to be found in <math.h>.

> The best approach, IMO, is to have a boost::is_nan() with compiler specific
> implementations.

Yes, and there were also discussions on this mailing-list about a dedicated
header for these functions. But unless somebody finds the time to tackle this
whole problem...
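
Such a boost::is_nan could dispatch on the compiler, along these lines (a
hedged sketch: the dispatch and the macros tested are illustrative, not the
eventual Boost code):

#if defined(_MSC_VER) || defined(__BORLANDC__)
# include <float.h>   // _isnan
#else
# include <math.h>    // C99 isnan, where the platform provides it
#endif

namespace boost {

inline bool is_nan(double v)
{
#if defined(_MSC_VER) || defined(__BORLANDC__)
    return _isnan(v) != 0;
#else
    return isnan(v) != 0;
#endif
}

} // namespace boost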

Regards,

Guillaume



RE: [boost] Re: Math constants - nearest values

2003-06-25 Thread Guillaume Melquiond
On Sun, 22 Jun 2003, Paul A Bristow wrote:

> |  Consequently, more than one constant out of 8192 may suffer
> |  from this problem. So it is rare, but it is not impossible.
> |  It's why I was suggesting that a library should provide a
> |  mean to know if a number representing a constant is the
> |  nearest or not.
>
> One constant out of 8192 seems a rather small risk.

> Can't we just check that all the constants we offer are in fact the
> nearest?

Yes.

> Since very few contain many zeros, I think I am prepared to wager a few
> beers on this one!

> So does this mean that the presentation function will use the 'exactly
> representable' decimal digits appropriate for the floating point format
> (choice of 5) and for the FP type float, double, long double to give to
> the compiler to use?

Sorry, I'm not sure I clearly understand what you mean. What I would do is
something like

float  the_constant_f = ... /* a 24 binary digits representation */;
double the_constant_d = ... /* a 53 binary digits representation */;
long double the_constant_ld =
#ifdef LONG_DOUBLE_IS_80_BITS
  ... /* a 64 binary digits representation */;
#elif defined(LONG_DOUBLE_IS_128_BITS)
  ... /* a 102 binary digits representation */;
#else /* we don't know this floating-point format */
  ... /* a 40 decimal digits approximation */;
  #define the_constant_LONG_DOUBLE_MAY_NOT_BE_ACCURATE
#endif
/* And then there would be all the wrappers... */

The binary representations will be exact values in order to avoid compiler
rounding. So all possible floating-point formats should be thought of.
However, if one of them is missing, we fall back on a common 40 decimal
digits constant. Since we can't be sure there won't be any rounding
problem, we flag the result as being "maybe inaccurate". Does it make
sense?

> Paul
>
> PS How do we check? Does it depend on the FP format?

It depends on the source FP format and on the destination FP format. We
check it by looking whether the rounded-down version of the most precise value
is equal to the less precise version of the constant.

But it doesn't matter anymore if we explicitly give the values of the
constant for all the formats and not only for the most precise format.
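
Concretely, using the_constant_d and the_constant_ld from the snippet above,
the check could be as simple as this sketch (the runtime conversion rounds in
the platform's current mode, and the rare double-rounding case discussed in
the other subthread still has to be kept in mind):

bool double_is_nearest =
    (static_cast<double>(the_constant_ld) == the_constant_d);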

Guillaume



Re: [boost] Re: Advanced match constants scheme

2003-06-22 Thread Guillaume Melquiond
On Sun, 22 Jun 2003, Gennaro Prota wrote:

> On Sun, 22 Jun 2003 00:50:32 +0200 (CEST), Guillaume Melquiond
> wrote:
>
> >We are sure c_l is the nearest 'long double'. Now we want the nearest
> >'double'. Can we simply do:
> >
> >double c_d() { return c_l(); }
> >
> >No, we can't.
>
> We already agreed that a different definition must be provided for
> each type (float, double, long double, and possibly UDTs in some of
> the proposed approaches). Is this the only objection? I haven't
> analyzed the rest of the post because this could be a crucial
> misunderstanding (I've read everything, only a little more
> absent-mindedly).
>
> Genny.

Yes, that's what I meant. A different value should be provided for each type
(and for 'long double', it will have to be machine-dependent: 80, 64+64 or
128 bits) and it must be exactly representable for the type (so, if the
decimal representation of a constant doesn't at least end with a 5, there is
an error). I thought I had to make this clear, since it's not the case in
the "reviewed" library and it will be tedious to implement.

Regards,

Guillaume



Re: [boost] Re: Advanced match constants scheme

2003-06-21 Thread Guillaume Melquiond
On Sat, 21 Jun 2003, Gennaro Prota wrote:

> On Fri, 20 Jun 2003 22:04:53 +0200 (CEST), Guillaume Melquiond
> wrote:
>
> [...]
> >I know this part of the standard. But it doesn't apply in the situation I
> >was describing. I was describing the case of a constant whose decimal (and
> >consequently binary) representation is not finite. It can be an irrational
> >number like pi; but it can also simply be a rational like 1/3.
>
> I was just trying to start from something sure, such as the standard
> requirements. I had some difficulties in understanding your example.
>
> The way I read the paragraph quoted above, "the scaled value" is the
> value intended in the mathematical sense, not its truncated internal
> representation. So the compiler must behave *as if* it considered all
> the digits you provide and choose the nearest element (smaller or
> larger - BTW C99 is different in this regard). In your example:
>
> the constant is 1.00050001236454786005785305678
> you write it with 7 seven digits: 1.000500
> the floating-point format only uses 4 digits
>
> you wouldn't write 1.000500 but, for instance, 1.00050001 and the
> hypothetical base-10 implementation should then consider all the
> digits even if it can store only 4 of them. Thus if it chooses the
> *larger* value nearest to that it chooses 1.001. Of course if it
> chooses the smaller..., but that's another story.

Yes you're right, but you have lost track of the reason why I was giving
this example. I was speaking about the conversion from a constant in 'long
double' to 'double'. In your case, you can always add numbers to correct
the problem. But in the situation I was describing, you can't: the 'long
double' format is fixed once and for all; you can't suddenly add digits if
you need them.

So, even if you have the 'long double' representation of a constant, you
may still need the 'double' representation of this constant if you want
the 'double' version to be the nearest number.

> Just to understand each other: suppose I write
>
> double d =
> 2348542582773833227889480596789337027375682548908319870707290971532209025114608443463698998384768703031934976.0;
> // 2**360
>
> The value is 2**360. On my implementation, where DBL_MAX_EXP is 1024
> and FLT_RADIX is 2, isn't the compiler required to accept and
> represent it exactly?

Yes it is.

> >When manipulating such a number, you can only give a finite number of
> >decimal digit. And so the phenomenon of double rounding I was describing
> >will occur since you first do a rounding to have a finite number of
> >digits and then the compiler do another rounding (which is described by
> >the part of the standard you are quoting) to fit the constant in
> >the floating-point format.
>
> I'm not very expert in this area. Can you give a real example
> (constant given in decimal and internal representation non-decimal)?

Okay, here is a "real" example. Let's suppose you have an x86 processor:
'long double' mantissa is 64 bits wide, 'double' is 53 (52 + one
implicit). The constant is the fractional number:
  c = 73786976294838196225 / 55340232221128654848 (it's 4/3 - 3413/2^64)

Since you don't want the compiler to do the rounding (because of the
standard we can't be sure of the rounding direction), you give the exact
representation of the 'long double' nearest number.

long double c_l() {
  return 1.3332593184650249895639717578887939453125L;
}

We are sure c_l is the nearest 'long double'. Now we want the nearest
'double'. Can we simply do:

double c_d() { return c_l(); }

No, we can't. The constant was specially crafted to make this conversion
fail (when rounded to nearest). And it wasn't that difficult: it was just a
matter of a four-decimal-digit number (3413) to make it fail.

In fact, it is enough that 13 bits have a particular value.
Consequently, more than one constant out of 8192 may suffer from this
problem. So it is rare, but it is not impossible. That's why I was
suggesting that a library should provide a means to know whether a number
representing a constant is the nearest one or not. Is it a bit clearer?

And for people still wondering why I didn't use this kind of code (please
note that the value of the constant has changed, this time it's the
nearest 53 decimal digits to the constant):

#define c 1.3331483142325986820016699615128648777803
long double c_l() { return c; }
double c_d() { return c; }

It's because the standard doesn't guarantee the direction of rounding. So
not only the 'double' value could be incorrect, but the 'long double'
value could also be 

Re: [boost] Re: Advanced match constants scheme

2003-06-20 Thread Guillaume Melquiond
On Fri, 20 Jun 2003, Gennaro Prota wrote:

> >> |  [*] It is not even true. Due to "double rounding" troubles,
> >> |  using a higher precision can lead to a value that is not the
> >> |  nearest number.
> >>
> >> Is this true even when you have a few more digits than necessary?
> >> Kahan's article suggested to me that adding two guard decimal digits
> >> avoids this problem.  This why 40 was chosen.
> >
> >I don't know if we are speaking about the same thing.
>
> I don't know either. What I know is the way floating literals should
> work:
>
>   A floating literal consists of an integer part, a decimal point,
>   a fraction part, an e or E, an optionally signed integer exponent,
>   and an optional type suffix. [...]
>   If the scaled value is in the range of representable values for its
>   type, the result is the scaled value if representable, else the
>   larger or smaller representable value nearest the scaled value,
>   chosen in an implementation-defined manner.

I know this part of the standard. But it doesn't apply in the situation I
was describing. I was describing the case of a constant whose decimal (and
consequently binary) representation is not finite. It can be an irrational
number like pi; but it can also simply be a rational like 1/3.

When manipulating such a number, you can only give a finite number of
decimal digits. And so the phenomenon of double rounding I was describing
will occur, since you first do a rounding to get a finite number of
digits, and then the compiler does another rounding (the one described by
the part of the standard you are quoting) to fit the constant into
the floating-point format.

Just look one more time at the example I was giving in the previous mail.

> Of course "the nearest" means nearest to what you've actually written.
> Also, AFAICS, there's no requirement that any representable value can
> be written as a (decimal) string literal. And, theoretically, the

I have never seen any computer with unrepresentable values. It would require
manipulating numbers in a radix that is not of the form 2^p*5^q (many
computers use radix 2, and some of them use radix 10 or 16; no computer,
IMO, uses radix 3).

> "chosen in an implementation-defined manner" above could simply mean
> "randomly" as long as the fact is documented.

The fact that it does it randomly is another problem. Even if it were not
random but perfectly known (for example round-to-nearest-even, as in the
IEEE-754 standard), it wouldn't change anything: it would still be a
second rounding. As I said, it is more of an arithmetic problem than a
compilation problem.

> Now, I don't even get you when you say "more digits than necessary".

What I wanted to say is that writing too many decimal digits of a number
doesn't improve the precision of the constant. It can degrade it due to
the double rounding. In conclusion, when you have a constant, it is better
to give an exact representation of the nearest floating-point number
rather than writing it with 40 decimal digits. By doing that, the compiler
cannot do the second rounding: there is only one rounding (the one you
did) and you are safe.

> One thing is the number of digits you provide in the literal, one
> other thing is what can effectively be stored in an object. I think
> you all know that something as simple as
>
>  float x = 1.2f;
>  assert(x == 1.2);
>
> fails on most machines.

Yes, but it's not what I was talking about. I hope it's a bit clearer
now.

Guillaume



RE: [boost] Advanced match constants scheme

2003-06-20 Thread Guillaume Melquiond
On Fri, 20 Jun 2003, Paul A Bristow wrote:

[snip]

> |  [*] It is not even true. Due to "double rounding" troubles,
> |  using a higher precision can lead to a value that is not the
> |  nearest number.
>
> Is this true even when you have a few more digits than necessary?
> Kahan's article suggested to me that adding two guard decimal digits
> avoids this problem.  This why 40 was chosen.

I don't know if we are speaking about the same thing. I am not sure I would
be clear if I tried to explain it directly, so I will just give an example.
The numbers are supposed to be stored in decimal in order to clarify the
reasoning (it also works with binary constants; just replace the non-zero
digits with 1s).

  the constant is 1.00050001236454786005785305678
  you write it with 7 digits: 1.000500
  the floating-point format only uses 4 digits
  so the compiler rounds the number to: 1.000 (round-to-nearest-even)
  but the nearest value was: 1.001 (since the constant is > 1.0005)

I hope it was clear enough. If you use more digits than necessary, the
value you finally obtain may not be the nearest one. And it doesn't have
anything to do with the compiler, with the number of digits used, with the
radix, etc.

With the most common constants and formats, I don't think the problem
arises. But it is still possible: a string of zeros at the wrong place is
enough (and when the only digits are 0 and 1, it is not that uncommon to
get such a string).

> Consistency is also of practical importance - in practice, don't all
> compilers read decimal digit strings the same way and will end up with
> the same internal representation (for the same floating point format),
> and thus calculations will be as portable as is possible?  This is
> what causes most trouble in practice - one gets a slightly different
> result and wastes much time puzzling why.

This problem doesn't depend on the compiler. If all the compilers read
digit strings the same way and apply the same kind of rounding, they will
all fail the same way. It is an arithmetic problem, not a compilation
problem.

> |  So maybe the interface should provide four
> |  values for each constant at a given
> |  precision: an approximation, the nearest value, a lower
> |  bound, and an upper bound.
>
> Possible, but yet more complexity?

Yes, but also more correct. Most people will rely on the approximation
(it's the 40-digit values you are providing) to do their computations.
But there may be people who expect to have the nearest value. They will
get it if the library is able to provide it, and will get a compilation
error otherwise. There is absolutely no surprise this way.

Guillaume



Re: [boost] Advanced match constants scheme

2003-06-20 Thread Guillaume Melquiond
On Thu, 19 Jun 2003, Augustus Saunders wrote:

> >PS I'd like to hear more views on this -
> >previous review comments were quite different,
> >being very cautious about an 'advanced' scheme like this.

I didn't react to this review at first because I was a bit disappointed by
the content of the library. It was more like some questions about the best
way to represent constants in a C++ library. And since I already had given
my thoughts about that, I didn't feel the need to speak about it again.

> Disclaimer: I am neither a mathematician nor a scientist (I don't
> even play one on TV).  I do find the prospect of writing natural,
> efficient, and precise code for solving various equations a
> worthwhile goal.  So, since you asked for comments, here's my
> non-expert thoughts.
>
> As I understand it, the original proposal's goal was to provide
> conveniently accessible mathematical constants with precision greater
> than current hardware floating point units without any unwanted
> overhead and no conversion surprises.  Additionally, it needed to
> work with the interval library easily.  To work around some
> compilers' failure to remove unused constants or poor optimization,
> we wound up discussing function call and macro interfaces.  Nobody,
> however, is thrilled with polluting the global namespace, so unless
> Paul Mensonides convinces the world that macro namespaces are a good
> thing, some of us need convincing that macros are really the way to
> go.

I am not really interested in macros. I would prefer for the library to
only provide one kind of interface. There could then be other headers on
top of it to provide other interfaces to access the constants.

The standard interface should provide a way to access a constant at a
given precision, and an enclosing interval of it. For example, this kind of
scheme would be enough for me: "constant<float>::lower()". I'm not
suggesting that such a notation should be adopted; it's just a way to show
what I consider important in a constant.

If a particular precision is not available, the library should be able to
infer it from the value of the constant at other precisions. For
example, if the only available precisions are "float" and "long double"
for a particular architecture and/or constant, and if the user needs
"double", the library should be able to do such conversions:

  constant<double>::value() <-> constant<long double>::value()
  constant<double>::lower() <-> constant<float>::lower()

Please note that for the value of a constant, a higher precision constant
can be used instead [*]; but for the lower and upper bound, it must be a
lower precision constant. So it is a bit more complicated than just
providing 40-digit constants.
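
A sketch of the kind of interface this describes (all names hypothetical):

template <class T>
struct constant_pi {
    static T value();  // approximation, ideally the nearest T
    static T lower();  // a T known to be <= the exact constant
    static T upper();  // a T known to be >= the exact constant
};

// e.g. an enclosure directly usable by the interval library:
// boost::numeric::interval<double> pi(constant_pi<double>::lower(),
//                                     constant_pi<double>::upper());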

It is the reason why I was rooting for a library specialized in constants.
It would provide an interface able to hide the conversion problems. The
library would have to know the underlying format of floating-point numbers
since the precision of the formats is not fixed (there are 80-bit and
128-bit long doubles, for example).

The Interval library defines three constants: pi, 2*pi and pi/2. They are
needed in order to compute interval trigonometric functions. At the time
we designed the library, it was not easy task to correctly define these
constants. Here is the example of one of the 91 lines of the header that
defines them:

static const double pi_d_l = (3373259426.0 + 273688.0 / (1<<21))
  / (1<<30);
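
Every operation in that formula is exact: the two divisions are by powers of
two, and the addition fits within the 53-bit double mantissa, so the compiler
has no rounding decision left to make. A quick sanity check of mine:

#include <cstdio>

int main() {
    double pi_d_l = (3373259426.0 + 273688.0 / (1 << 21)) / (1 << 30);
    std::printf("%.17g\n", pi_d_l);  // ~3.1415926535897931
    return 0;
}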

Using such a formula was (in our opinion) necessary in order for the
compiler to correctly deal with these constants. I would be happy to
remove such a header and use another library instead.

> In the course of discussion, a more ambitious plan was proposed.
> Instead of just providing a big list of constants, IIUC it was
> suggested that an expression template library be used to allow common
> constant combinations like 2*pi or pi/2 to be expressed with normal
> operators.  This seems good, it provides a natural syntax and reduces
> namespace clutter and is easier to remember.  However, since the idea
> was to use a special math program to generate high precision
> constants, I'm not sure whether an ETL can eliminate the need to
> compute things like 2*pi with the third party program.  So I'd like
> to know:
>
> does
>
> 1) 2*pi ==> BOOST_2_PI
>
> where BOOST_2_PI is a constant already defined, or does
>
> 2) 2*pi ==> BOOST_PP_MULT( 2, BOOST_PI )
>
> using high precision preprocessor math (or something) to sidestep the
> need for defining BOOST_2_PI in the first place?
>
> If this was implemented the first way, then I would see any
> "advanced" scheme as being a layer on top of the actual constant
> library, to give it a more convenient interface.  The second way
> might actually impact what constants get defined in the first place,
> in which case we should talk it out enough to know what constants
> should be defined.  But I'm not sure the possibility of an advanced
> scheme should prevent us from defining the basic constants--an
> expression framework could be

[boost] Re: [test] first revision to 1.30.0

2003-06-10 Thread Guillaume Melquiond
On Tue, 10 Jun 2003, Gennadiy Rozental wrote:

> > Are the directories "libs/test/build/msvc??_proj" really necessary? Or is
> > it just a CVS mistake?
>
> Necessary for what? Each contains project files for the respective version of
> msvc. I use them. You could use them too.
>
> Gennadiy.

No I don't use MSVC (I don't even use Windows). But that's absolutely not
the point of my previous mail.

What I meant is that the files contain data specific to your own
programming environment (there are absolute filesystem paths in the 7.1
project files, for example). That's the reason why I was suggesting it
may be a mistake with CVS. If there is no mistake and the files are
really usable by other people who have MSVC, that's fine.

Regards,

Guillaume



Re: [boost] [test] first revision to 1.30.0

2003-06-10 Thread Guillaume Melquiond
On Tue, 10 Jun 2003, Gennadiy Rozental wrote:

[snip]

> Update is available in cvs. Let me know about any issues.

Are the directories "libs/test/build/msvc??_proj" really necessary? Or is
it just a CVS mistake?

Regards,

Guillaume



Re: [boost] Boost header and library dependency tables

2003-06-08 Thread Guillaume Melquiond
On Sun, 8 Jun 2003, John Maddock wrote:

[snip]

> > Finally, what are "library dependencies"? Sorry if it's a dumb question.
> > But by looking at the results, I don't get the meaning of it.
>
> It's everything that's needed by the complete library - by it's test and
> example programs etc as well as the headers - for most libraries this means
> that the Boost.test source will be listed as a dependency for example.

Yes, it's what I thought at first. But the results are strange: I'm really
surprised the MultiArray library relies on the Interval library headers.

Regards,

Guillaume



Re: [boost] Boost header and library dependency tables

2003-06-07 Thread Guillaume Melquiond
On Sat, 7 Jun 2003, John Maddock wrote:

> A while ago Beman produced header dependency tables, unfortunately these
> began to get rather complicated and so were dropped, I've placed some
> alternative tables here:
>
> Boost header dependencies:
> http://www.regex.fsnet.co.uk/header_dependencies.html
> Boost library dependencies:
> http://www.regex.fsnet.co.uk/library_dependencies.html
>
> Whether these are actually any better than what we had before is
> questionable, but the information might prove useful to some.
>
> BTW these tables are produced using bcp in list mode (see separate thread).
>
> Regards,
>
> John.

I think it's a good idea. But I have a few comments.

First, it only handles headers that are directly under 'boost/'. However,
some people have tried not to pollute the root directory and have put
their libraries in sub-directories: for example, the Graph library, uBlas,
the Interval library, the mathematical libraries, and so on. Your
script unfortunately misses them.

Second, the links at the top of the pages don't work with all browsers
since the anchors are defined in a non-standard way. It shouldn't be
<a name=#anchor> but <a name=anchor> (no sharp sign). With some quotes,
it would be even better, but it's another problem.

Finally, what are "library dependencies"? Sorry if it's a dumb question.
But by looking at the results, I don't get the meaning of it.

Regards,

Guillaume



Re: [boost] Should regression summary consider both pass and warnas passing?

2003-05-26 Thread Guillaume Melquiond
On Sun, 25 May 2003, Beman Dawes wrote:

> I think that Greg Comeau has a good point in his email below - reporting
> separate pass / warn / fail statistics in the regression summary can be
> misleading to naive readers.

(What are naive readers trying to do with Boost? :-) )

I think this column is important and should not be removed. I don't think
compiler developers put warnings in just to annoy the user. There is
always a meaning to them. Sometimes it's just noise. But at other
times it means the compiler hasn't really understood what the user wanted
and so will blindly do something that may be wrong.

Consequently, some developers always disallow warnings when compiling
production code (for example with the option -Werror for GCC). So, in their
opinion, a warning is no different from an error, since the
compilation will fail. And when they come to the regression page in order
to know whether their particular compiler/platform is supported by Boost, they
want to see this column.

That was for the user. Now for me as a Boost developer, I will just give
an example: if this column were not present, I wouldn't have sent a patch
to remove 50 warnings on the Intel compiler for Linux. It's because I could
compare the number of warnings for GCC, ICC on Windows and ICC on Linux
that I saw there was a problem.

> On the other hand, we certainly want to continue to report warnings in the
> tables themselves.
>
> So it seems to me that in the summary we should lump "pass" and "warn" from
> the tables together into a single "pass" category in the summary.
>
> Opinions?
>
> --Beman

So if I had to take a decision (but it's not the case), I would leave the
warning column as it is. On the particular problem of the number of
warnings with the Comeau compiler, a better bet would be to send a patch
that removes all the spurious trailing semi-colons in the Date-time library.
It would remove a lot of warnings (~20) for this compiler, and it
would no longer be necessary to remove the warning column.

Regards,

Guillaume

PS: Speaking of regression tests, would it be easy to add the kind
of failure to the tables? For example, indicate "Fail(R)" in place of
"Fail" when the error occurred at run time.



Re: [boost] New configuration macro BOOST_INTEL

2003-05-23 Thread Guillaume Melquiond
On Fri, 23 May 2003, John Maddock wrote:

> > I didn't apply the patches for type_traits and regex (is there a way to
> > know if Boost cvs contains the current version of a library or if all the
> > changes will be destroyed the next time the maintainer commits a new
> > version?). They would benefit from this new macro. As Pavel Vozenilek
> > suggested it in a recent mail, current_function.hpp would also benefit
> > from this macro.
>
> As far type traits and regex are concerned go ahead.

Done, thanks.

> Oh, and can you add the macro to config_info.cpp as well,

It was already done. Maybe you meant another file?

> Many thanks,
>
> John

Regards,

Guillaume

PS: Your web editor heavily modifies files; it's a bit hard to track
changes in the CVS (config.htm: +1180 -1451 lines).



Re: [boost] New configuration macro BOOST_INTEL

2003-05-23 Thread Guillaume Melquiond
Since nobody complained, I have added this new configuration macro.

I didn't apply the patches for type_traits and regex (is there a way to
know if Boost cvs contains the current version of a library or if all the
changes will be destroyed the next time the maintainer commits a new
version?). They would benefit from this new macro. As Pavel Vozenilek
suggested it in a recent mail, current_function.hpp would also benefit
from this macro.

For all other cases, it requires a bit of work. The goal may be to remove
all references to __ICC and __ICL in Boost code. For example, in some
situations, __ICL is tested although it should be _MSC_VER: the
workarounds are needed not because the compiler is Intel CC, but because
it is running in MSVC emulation mode. So I hope all library maintainers
will now be careful before directly testing for __ICC or __ICL (is that too
much to hope?)
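
For reference, the proposed macro could look like this (an illustrative
sketch, not the committed code):

// Define BOOST_INTEL whenever any of the Intel compiler macros is present.
#if defined(__INTEL_COMPILER)
#  define BOOST_INTEL __INTEL_COMPILER
#elif defined(__ICL)
#  define BOOST_INTEL __ICL   // Intel C++ on Windows
#elif defined(__ICC)
#  define BOOST_INTEL __ICC   // Intel C++ on Linux
#elif defined(__ECC)
#  define BOOST_INTEL __ECC   // Intel C++ on IA-64
#endif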

Regards,

Guillaume



[boost] Patch for type_traits

2003-05-18 Thread Guillaume Melquiond
Hi,

This small patch removes all the warnings the type_traits regression suite
is currently encountering with the Intel compiler on Linux. And thanks to this
patch, the number of warnings for the whole Boost regression suite on this
particular compiler drops from 50 to 1.

Index: libs/type_traits/test/test.hpp
===
RCS file: /cvsroot/boost/boost/libs/type_traits/test/test.hpp,v
retrieving revision 1.4
diff -u -r1.4 test.hpp
--- libs/type_traits/test/test.hpp  1 Feb 2003 12:12:08 -   1.4
+++ libs/type_traits/test/test.hpp  18 May 2003 07:58:19 -
@@ -126,7 +126,7 @@
 # ifdef BOOST_MSVC
 #  pragma warning(push)
 #  pragma warning(disable: 4181)
-# elif defined(__ICL)
+# elif defined(__ICL) || defined(__ICC)
 #  pragma warning(push)
 #  pragma warning(disable: 21)
 # endif
@@ -141,7 +141,7 @@
 typedef const r_type cr_type;
 # ifdef BOOST_MSVC
 #  pragma warning(pop)
-# elif defined(__ICL)
+# elif defined(__ICL) || defined(__ICC)
 #  pragma warning(pop)
 #  pragma warning(disable: 985) // identifier truncated in debug information
 # endif

The patch is trivial: the original file removes the warnings on Windows;
this patch also removes them on Linux. I will apply the patch if nobody
complains.

But, more importantly, I want to use this mail to repeat my proposal of a
BOOST_INTEL macro to encompass the __INTEL_COMPILER, __ICL, __ICC and __ECC
macros (thanks to Richard Hadsell for pointing out the last one). I am not
asking for somebody else to do it; I just want a bit of approval before
preparing the patches, since it is not a small job to add a new config macro.

Regards,

Guillaume



Re: [boost] Configuration for Intel compiler

2003-05-14 Thread Guillaume Melquiond
On Wed, 14 May 2003, Peter Dimov wrote:

> Guillaume Melquiond wrote:

> > Your patch does not fix the problem at all. In my opinion, it can even
> > break some working configurations.  I would rather use this
> > conditional expression:
> >
> > #  if !(defined(_GLOBAL_USING) && (_GLOBAL_USING+0 > 0 || _CPPLIB_VER
> > == 310)) && !defined(_STD)
> >
> > since the test _GLOBAL_USING+0 > 0 is false although we want it true
> > with Dinkumware 3.10 (_CPPLIB_VER == 310 and maybe other versions but
> > I don't know them).
>
> Testing _CPPLIB_VER is incorrect. AFAICS _GLOBAL_USING means "import
> identifiers from the global namespace into std:: via using declarations."
>
> _GLOBAL_USING can be 0 when the C library headers put identifiers in std::
> and ::, as is probably the case for Intel/Linux; no further "postprocessing"
> is necessary.

I am not sure I understand what you mean by "postprocessing". There is
actually a lot of code in the <cmath> file of the Dinkumware library that is
used after the inclusion of math.h in order to put all the relevant C
functions in the std:: namespace. So yes, there is some postprocessing.
Moreover, some of the functions are only available in std::, since they are
directly defined by the library.

> Under Windows, the C library headers do not put identifiers in std::, and
> _GLOBAL_USING=1 can be used as a test whether the identifiers have been
> imported into std:: by the <cxxx> headers. Note that this does not depend on
> _CPPLIB_VER.

It's quite problematic that the meaning of _GLOBAL_USING depends on the
system.

> I think that the right thing to do here is to specifically test for the
> problematic configuration (i.e. __ICC && __linux) to avoid breaking
> Dinkumware 3.10 on other platforms.

I think it's a bit too much: a user may want to use the Dinkumware
library on Linux without the Intel compiler. But I won't complain too much: if
it's enough to fix the particular platform I use, I will be happy :-).

Regards,

Guillaume



Re: [boost] Configuration for Intel compiler

2003-05-14 Thread Guillaume Melquiond
On Wed, 14 May 2003, John Maddock wrote:

> > Your patch does not fix the problem at all.
>
> Ah, I see I got the Intel version check backwards, fixed (hopefully!)

Yes, this time the conditional is correct. Unfortunately, this patch is
still not good: __ICL is not defined so it doesn't work. My version of the
compiler (the standard version available on Intel's website) does not
define __ICL, but only __ICC and __INTEL_COMPILER. So the patch is still
not enough.

As a matter of fact, what is the meaning of ICL? For ICC, it's easy: it's
the acronym of Intel C Compiler. But for ICL, I don't know. By grepping
the Boost source tree, I also saw a lot of places where only __ICL
is tested and not __ICC. If they are supposed to have the same meaning,
maybe all the occurrences of __ICC and __ICL should be replaced by a common
macro: BOOST_INTEL_CXX_VERSION (or maybe a shorter version like
BOOST_INTEL).

> > In my opinion, it can even
> > break some working configurations.  I would rather use this conditional
> > expression:
> >
> > #  if !(defined(_GLOBAL_USING) && (_GLOBAL_USING+0 > 0 || _CPPLIB_VER ==
> > 310)) && !defined(_STD)
>
> I'm not convinced that _GLOBAL_USING is used consistently between Dinkumware
> libraries (windows vs linux etc), so I'd rather not rely on _GLOBAL_USING at
> all - hence my attempt to check the Intel version number.

I was hoping the Dinkumware library was the same on all platforms (for
a given version number), since the one available for Linux tests for the
presence of MSVC and other typical Microsoft macros.

> > since the test _GLOBAL_USING+0 > 0 is false although we want it true with
> > Dinkumware 3.10 (_CPPLIB_VER == 310 and maybe other versions but I don't
> > know them). If there is a way to test the macro _GLOBAL_USING is defined
> > but doesn't have a value, it would be even better: it would work with all
> > the versions of the library that assume "#define _GLOBAL_USING" to be
> > equivalent to "#define _GLOBAL_USING 1".
>
> I don't think it's possible to check for that as far as I know :-(

Yes, it's also what I thought. Too bad.

Regards,

Guillaume



Re: [boost] Configuration for Intel compiler

2003-05-11 Thread Guillaume Melquiond
On Sun, 11 May 2003, Guillaume Melquiond wrote:

> The default configuration defines BOOST_NO_STDC_NAMESPACE for this
> compiler. So the library expects to find standard C math functions (the
> ones in ) in the global namespace. Unfortunately, they are where
> they are supposed to be: in the std namespace. So here is my question: is
> this macro really necessary for this compiler?

Thanks to a suggestion from Gennaro Prota, I did take a look to the
library part of the configuration. The Linux version of the Intel compiler
is shipped with the Dinkumware library (_CPPLIB_VER=310).

One of the first things the configuration header does is:

  #if !(defined(_GLOBAL_USING) && (_GLOBAL_USING+0 > 0)) && !defined(_STD)
  #   define BOOST_NO_STDC_NAMESPACE
  #endif

However, during compilation, _GLOBAL_USING is defined but without value
and _STD is not defined, so BOOST_NO_STDC_NAMESPACE is set.

I tried to uncover the meaning of these macros, but the only explanation I
found was this sentence of Peter Dimov in another mail: "Dinkumware puts
the names in std:: when _GLOBAL_USING is #defined to 1 in <yvals.h>."

But it doesn't seem to be true anymore since no value is given to
_GLOBAL_USING. And the comment for _GLOBAL_USING in yvals.h says: "*.h in
global namespace, c* imports to std". So it is enough for _GLOBAL_USING to
be defined for the standard C functions to be imported in namespace std.

Consequently I added a test for _CPPLIB_VER==310 (I don't know the
situation for the other versions of the library) in the previous
conditional expression. But it was not enough, BOOST_NO_STDC_NAMESPACE was
still defined. This time, it's in config/platform/linux.hpp:

  // Intel on linux doesn't have swprintf in std::
  #ifdef  __ICC
  #  define BOOST_NO_STDC_NAMESPACE
  #endif

Since there already exists a macro BOOST_NO_SWPRINTF (which is correctly
set), this portion of code is not needed, is it?

In conclusion, stdlib/dinkumware.hpp should take into account the new
versions (which versions of the library other than 3.10 are affected?).
And the lack of swprintf in std:: is not enough of a reason to define
BOOST_NO_STDC_NAMESPACE in platform/linux.hpp since swprintf is already
known to be non-conforming thanks to BOOST_NO_SWPRINTF. Comments?

Regards,

Guillaume



[boost] Configuration for Intel compiler

2003-05-11 Thread Guillaume Melquiond
Hi,

I found a bug in the interval library. But when I corrected it, I stumbled
over another problem: this bug was ironically what allowed the library to
be correctly compiled with my version of the compiler (Intel compiler 7.1
for Linux). When I removed it, the library no longer worked...

The default configuration defines BOOST_NO_STDC_NAMESPACE for this
compiler. So the library expects to find standard C math functions (the
ones in ) in the global namespace. Unfortunately, they are where
they are supposed to be: in the std namespace. So here is my question: is
this macro really necessary for this compiler?

Just to be sure, I ran config_info to get the default configuration
options:

BOOST_DEDUCED_TYPENAME  =typename
BOOST_HAS_CLOCK_GETTIME  [no value]
BOOST_HAS_DIRENT_H   [no value]
BOOST_HAS_GETTIMEOFDAY   [no value]
BOOST_HAS_LONG_LONG  [no value]
BOOST_HAS_NANOSLEEP  [no value]
BOOST_HAS_NL_TYPES_H [no value]
BOOST_HAS_NRVO   [no value]
BOOST_HAS_PTHREADS   [no value]
BOOST_HAS_SCHED_YIELD[no value]
BOOST_HAS_SIGACTION  [no value]
BOOST_HAS_UNISTD_H   [no value]
BOOST_MSVC6_MEMBER_TEMPLATES [no value]
BOOST_NESTED_TEMPLATE   =template
BOOST_NO_HASH[no value]
BOOST_NO_MS_INT64_NUMERIC_LIMITS [no value]
 *  BOOST_NO_SLIST   [no value]
 *  BOOST_NO_STDC_NAMESPACE  [no value]
BOOST_NO_SWPRINTF[no value]
BOOST_STD_EXTENSION_NAMESPACE   =std
BOOST_UNREACHABLE_RETURN(0)  [no value]

and then I ran it one more time with the user.hpp given by the configure
script to get the local options:

BOOST_DEDUCED_TYPENAME  =typename
BOOST_HAS_CLOCK_GETTIME  [no value]
BOOST_HAS_DIRENT_H   [no value]
BOOST_HAS_GETTIMEOFDAY   [no value]
BOOST_HAS_LONG_LONG  [no value]
 *  BOOST_HAS_MACRO_USE_FACET[no value]
BOOST_HAS_NANOSLEEP  [no value]
BOOST_HAS_NL_TYPES_H [no value]
BOOST_HAS_NRVO   [no value]
BOOST_HAS_PTHREADS   [no value]
BOOST_HAS_SCHED_YIELD[no value]
BOOST_HAS_SIGACTION  [no value]
 *  BOOST_HAS_STDINT_H   [no value]
 *  BOOST_HAS_SLIST  [no value]
BOOST_HAS_UNISTD_H   [no value]
BOOST_MSVC6_MEMBER_TEMPLATES [no value]
BOOST_NESTED_TEMPLATE   =template
BOOST_NO_HASH[no value]
BOOST_NO_MS_INT64_NUMERIC_LIMITS [no value]
BOOST_NO_SWPRINTF[no value]
BOOST_STD_EXTENSION_NAMESPACE   =std
BOOST_UNREACHABLE_RETURN(0)  [no value]

So there are quite a lot of differences between what Boost expects from
the compiler and what the compiler actually provides (the stars indicate
the differences between the two configurations).

I can contribute patches to fix Boost configuration. But before that, I
would like to know if other people have also run the configure script with
the Intel compiler. In particular, I am interested in knowing whether the
Windows version of the compiler behaves like the Linux version. It would
also be interesting to know if the 7.0 version (and not only the 7.1)
benefits from these changes.

Regards,

Guillaume



[boost] Little bug in unit_test_result.cpp

2003-04-24 Thread Guillaume Melquiond
Hi,

I'm not really interested in this file, but since gcc complained about an
uninitialized variable, I fixed it. That is the first patch. So please
apply it (and change the name of the enum item beforehand, if necessary).

However, by looking at the code, I found a good example of data
duplication; the purpose of the second patch is to clean it up a bit.

Regards,

Guillaume
Index: boost/test/detail/unit_test_parameters.hpp
===
RCS file: /cvsroot/boost/boost/boost/test/detail/unit_test_parameters.hpp,v
retrieving revision 1.7
diff -u -r1.7 unit_test_parameters.hpp
--- boost/test/detail/unit_test_parameters.hpp  13 Feb 2003 08:07:20 -  1.7
+++ boost/test/detail/unit_test_parameters.hpp  24 Apr 2003 14:44:14 -
@@ -37,7 +37,7 @@
 c_string_literal const LOG_FORMAT= "BOOST_TEST_LOG_FORMAT"; // --log_format
 c_string_literal const OUTPUT_FORMAT = "BOOST_TEST_OUTPUT_FORMAT";  // --output_format
 
-enum report_level { CONFIRMATION_REPORT, SHORT_REPORT, DETAILED_REPORT, NO_REPORT };
+enum report_level { CONFIRMATION_REPORT, SHORT_REPORT, DETAILED_REPORT, NO_REPORT, BAD_REPORT };
 c_string_literal const report_level_names[] = { "confirm"  , "short" , "detailed" , "no"  };
 
 enum output_format { HRF /* human readable format */, XML /* XML */ };
Index: libs/test/src/unit_test_result.cpp
===
RCS file: /cvsroot/boost/boost/libs/test/src/unit_test_result.cpp,v
retrieving revision 1.15
diff -u -r1.15 unit_test_result.cpp
--- libs/test/src/unit_test_result.cpp  15 Feb 2003 21:55:32 -  1.15
+++ libs/test/src/unit_test_result.cpp  24 Apr 2003 14:44:14 -
@@ -490,7 +490,7 @@
 
 static int const map_size = sizeof(name_value_map)/sizeof(my_pair);
 
-report_level rl;
+report_level rl = BAD_REPORT;
 if( reportlevel.empty() )
 rl = CONFIRMATION_REPORT;
 else {
--- libs/test/src/unit_test_result.cpp.old  2003-04-24 16:45:52.0 +0200
+++ libs/test/src/unit_test_result.cpp  2003-04-24 16:50:33.0 +0200
@@ -476,27 +476,15 @@
 void
 unit_test_result::report( std::string const& reportlevel, std::ostream& where_to_ )
 {
-struct my_pair {
-c_string_literallevel_name;
-report_levellevel_value;
-};
-
-static const my_pair name_value_map[] = {
-{ "confirm" , CONFIRMATION_REPORT },
-{ "short"   , SHORT_REPORT },
-{ "detailed", DETAILED_REPORT },
-{ "no"  , NO_REPORT },
-};
-
-static int const map_size = sizeof(name_value_map)/sizeof(my_pair);
+static int const map_size = sizeof(report_level_names)/sizeof(c_string_literal);
 
 report_level rl = BAD_REPORT;
 if( reportlevel.empty() )
 rl = CONFIRMATION_REPORT;
 else {
-for( int i =0; i < map_size; i++ ) {
-if( reportlevel == name_value_map[i].level_name ) {
-rl = name_value_map[i].level_value;
+for( int i = 0; i < map_size; i++ ) {
+if( reportlevel == report_level_names[i] ) {
+rl = (report_level)i;
 break;
 }
 }


Re: [boost] Fix for some Interval library tests

2003-02-08 Thread Guillaume Melquiond
On Fri, 7 Feb 2003, Dave Gomboc wrote:

> > I suggest adding another boost defect: BOOST_BROKEN_ADL (or similar)
>
> How about BOOST_LIBRARY_IMPL_VULNERABLE_TO_ADL?  It's not that the
> compiler's ADL implementation is broken, it's that the library
> implementation isn't protected against ADL lookups where it needs to be.
>
> Dave

Sorry, but what is ADL? (I tried Google on this one, but since there is a
C++ variant called ADL, there was a lot of noise.) I hope I don't
misunderstand your sentence: it seems it's not the compiler which is
broken but the library. So could you explain a bit more? We have tried to
make the library compliant and I don't want to leave such a fault in it.
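
ADL stands for argument-dependent lookup (Koenig lookup): unqualified
function calls also search the namespaces of their argument types. A minimal
illustration, unrelated to the interval code itself:

namespace lib {
    struct X {};
    void f(X) {}
}

int main() {
    lib::X x;
    f(x);   // found via ADL in namespace lib, without lib:: qualification
    return 0;
}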

Speaking of interval library tests, does anybody know why the Windows
version of the Intel compiler fails all the tests although the Linux version
has no problem? I'm not speaking about the missing rounding control (it's
probably just a change of macro definition) but about the other failure
(with equal), for example in interval/add. I see it gives the same error
as vc7; is there a compatibility mode between the two compilers that makes
them fail at the same time?

Regards,

Guillaume
(still trying to understand why the interval library has flooded the
regression log for OpenBSD...)





[boost] Some comments about the regression tests

2003-02-05 Thread Guillaume Melquiond
Hi,

I tried to use the regression tests with the interval library, and it
worked: I just ran run_tests.sh on a Linux computer with gcc 3.2.2 and
Intel CC 7.0 and looked at the results. So, if nobody objects or does it
before me, I will modify status/Jamfile so that it automatically handles
the interval library.

However, something bothers me. In the big array with all the tests and
compilers (cs-something.html), library names are wrong. For example, all
the tests for ublas and interval are mixed under the same library called
numeric. Is it possible for the regression tools to pick up the name
defined in the jamfile (``test-suite "interval" : [ run...'')? Or to use a
longer name like numeric/ublas and numeric/interval for example?

Last point: is there something wrong with the Linux computer used for the
regression tests on http://boost.sourceforge.net/regression-logs/ ? With
gcc 3.2, it fails 179 tests. When I gave it a try for the interval
library, I got a value roughly the same as for Windows and
OpenBSD (only 13 tests failed, but it was Sunday).

Regards,

Guillaume




[boost] Interval library merge

2003-01-20 Thread Guillaume Melquiond
Hi,

I think the Interval Arithmetic library is ready to be merged from the
boost-sandbox cvs into the main boost cvs. So cvs write rights will be
needed; but before that, something must be decided: where to put the
library?

This question was already discussed on this mailing-list some time ago,
but no clear answer was given at that time. The library directory is
currently directly under boost/. To avoid cluttering the root, it would
probably be better to put it somewhere else; for example, boost/math or
boost/numeric. Unfortunately, the library has good reasons to be put
in each of these directories. So I suggest it is put in boost/numeric
(heads or tails).

Subsidiary question: should the namespace tree follow the directory tree?
I think it should; but since I will need a few hours to correct the whole
source and documentation (it isn't as easy as changing the #include at the
top of the files), I prefer to ask beforehand.

Regards,

Guillaume




[boost] A little problem with unit-test

2003-01-13 Thread Guillaume Melquiond
Hi,

I'm quite annoyed with 'unit-test' in a Jamfile. I don't know whether it's my
fault or not, but I hope somebody can help me with this problem.
'unit-test' doesn't seem to work anymore. Indeed, some time ago, when I
was launching 'bjam ...', test programs were compiled, linked, chmod'd and
finally run.

Now test programs are still compiled, linked and chmod'd, but they are not
run anymore. So the whole test-suite of the interval lib has lost its
meaning. Did I overlook something? Or maybe I should use something else?

My CVS is up to date. I did rebuild bjam. And, of course, test programs
don't fail to compile (it goes till the chmod).

Thanks,

Guillaume




[boost] Typo in intel-linux-tools.jam

2002-12-18 Thread Guillaume Melquiond
Hi,

Here is a small patch for a typo in tools/build/intel-linux-tools.jam

At the beginning of the same file, the default version of ICC is set to
5.0; but the 6.0 version has been available for a long time, and now even
ICC 7.0 is available as a non-commercial version on intel.com. Maybe it's
time to change the default version of the Intel compiler in Boost?

Guillaume



--- tools/build/intel-linux-tools.jam 17 Dec 2002 12:57:50 - 1.14
+++ tools/build/intel-linux-tools.jam 18 Dec 2002 08:55:55 -
@@ -83,7 +83,7 @@
 DLLVERSION = $(DLLVERSION[1]) ;
 DLLVERSION ?= $(BOOST_VERSION) ;

-flags inttel-linux TARGET_TYPE  ;
+flags intel-linux TARGET_TYPE  ;

  Cc 
