LLVM supports generating NaNs with specified payloads in class APFloat:
```c++
// From llvm::APFloat (factory functions for NaN values):

/// getQNaN - Factory for QNaN values.
static APFloat getQNaN(const fltSemantics &Sem, bool Negative = false,
                       const APInt *payload = 0);

/// getSNaN - Factory for SNaN values.
static APFloat getSNaN(const fltSemantics &Sem, bool Negative = false,
                       const APInt *payload = 0);
```
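
A minimal usage sketch, assuming a recent LLVM where the double-precision semantics come from APFloat::IEEEdouble() (older releases expose a static IEEEdouble data member instead); the helper name and the idea of a caller-chosen tag are mine, for illustration:

```c++
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/APInt.h"
#include <cstdint>

using namespace llvm;

// Illustrative helper: build a quiet NaN in IEEE double precision whose
// payload carries a caller-chosen tag. The low bits of the APInt end up
// in the significand of the constructed NaN.
APFloat makeTaggedQNaN(uint64_t Tag) {
  APInt Payload(/*numBits=*/64, Tag);
  return APFloat::getQNaN(APFloat::IEEEdouble(), /*Negative=*/false, &Payload);
}
```
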
Propagation of a QNaN should just happen in any IEEE 754 conformant system. 
The handling of arith_op(QNaN1, QNaN2), if not overridden, may well fall 
through to the floating-point hardware, and those rules are not uniform across 
conformant systems. It may make sense to override (+) and the other arithmetic 
operators to ensure a preferred result from arith_op(QNaN1, QNaN2) during some 
phase of development; doing so everywhere, all the time, would be too much overhead.

On Wednesday, August 5, 2015 at 7:54:24 AM UTC-4, Sisyphuss wrote:
>
> I noticed this post just now. 
>
> I launched an issue on github about the use of NaN.
> https://github.com/JuliaLang/julia/issues/12446
>
> I'd like to learn more about it.
>
> On Monday, August 3, 2015 at 4:33:02 PM UTC+2, Stuart Brorson wrote:
>>
>> On Sun, 2 Aug 2015, Jeffrey Sarnoff wrote: 
>>
>> > Quiet NaNs (QNaNs) were introduced into the Floating Point Standard as a 
>> > tool for applied numerical work.  That's why there are so many of them 
>> > (Float64s have nearly 2^52 of them, Float32s have nearly 2^23, and 
>> > Float16s have nearly 2^10 QNaNs).  AFAIK Julia and most other languages 
>> > use one or two of each in most circumstances.  Half of the QNaNs are in 
>> > some sense positive and the other half negative (their sign bits can be 
>> > queried, even though they are not magnitudes).  While QNaNs are unordered 
>> > by definition, they each have an embedded *payload:* an integer value 
>> > that exists to be set with information of reflective value, and then to 
>> > carry it, propagating through the rest of the numerical work so it 
>> > becomes available for use by the designer or investigator. 
>>
>> A logical application for the many different quiet NaNs is to encode 
>> different types of meta-numeric value.  Of course, there is the basic 
>> NaN, for example 0/0 => NaN.  However, a NaN with a different payload 
>> might be used to signal NA (i.e. missing).  One can think of many 
>> other fault states which arise in numerical computing with real data, 
>> such as "value out of bounds", "invalid value", etc.  All these 
>> different states might be encoded using NaNs of different payloads. 
>>
>> The devil is in the details, however.  For example, the missing value 
>> NA propagates differently from the standard NaN.  Consider: 
>>
>> mean([1 2 NA 4 5]) => 3 
>> mean([1 2 NaN 4 5]) => NaN. 
>>
>> Therefore, the function "mean()" needs to know how to treat the NaN 
>> differently from the NA. 
>>
>> Moreover, I believe that to make use of the different NaN payloads, 
>> hardware makers would need to build knowledge of the different NaN 
>> types (and propagation rules) into their floating point ALUs.  Is this 
>> right? 
>>
>> One can implement this scheme in software, 
>> but the problem is that one needs to match the NaN payload in 
>> software, which degrades floating point performance in a big way. 
>> Therefore, standardization and hardware support are important. 
>>
>> My question:  Have any hardware makers ever looked into utilizing the 
>> different NaN payloads for the above scheme?  How about 
>> standardization bodies? 
>>
>> Stuart 
>>
>

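On the software question quoted above: matching payloads in user code is possible today, and the cost Stuart mentions is concrete, a mask and a compare per element. A self-contained C++ sketch of the idea follows; the tag value, helper names, and quiet-NaN bit pattern assumed here are illustrative (the layout is the common one on x86 and ARM), not any library's actual convention:

```c++
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

// Hypothetical Float64 tagging scheme, for illustration only: keep the quiet
// bit (bit 51) set and store a small tag in the remaining 51 payload bits.
constexpr uint64_t QNAN_BITS    = 0x7FF8000000000000ULL; // exponent all ones + quiet bit
constexpr uint64_t PAYLOAD_MASK = 0x0007FFFFFFFFFFFFULL; // low 51 bits
constexpr uint64_t NA_TAG       = 1;                     // made-up tag meaning "missing"

uint64_t bits_of(double x) { uint64_t b; std::memcpy(&b, &x, sizeof b); return b; }

double tagged_qnan(uint64_t tag) {
  uint64_t b = QNAN_BITS | (tag & PAYLOAD_MASK);
  double x;
  std::memcpy(&x, &b, sizeof x);
  return x;
}

// Only meaningful when x is a NaN.
uint64_t payload_of(double x) { return bits_of(x) & PAYLOAD_MASK; }

// A mean that treats NA-tagged NaNs as "skip me" but lets every other NaN
// propagate, mirroring the mean([1 2 NA 4 5]) vs mean([1 2 NaN 4 5]) contrast.
double mean_skipping_na(const std::vector<double>& v) {
  double sum = 0.0;
  std::size_t n = 0;
  for (double x : v) {
    if (std::isnan(x) && payload_of(x) == NA_TAG) continue; // missing: ignore
    sum += x;                                               // a plain NaN poisons the sum
    ++n;
  }
  return n ? sum / n : std::nan("");
}

int main() {
  double NA = tagged_qnan(NA_TAG);
  std::cout << mean_skipping_na({1, 2, NA, 4, 5}) << "\n";            // 3
  std::cout << mean_skipping_na({1, 2, std::nan(""), 4, 5}) << "\n";  // nan
  // std::nan("") is an ordinary quiet NaN whose payload is typically 0,
  // so it does not match NA_TAG.
}
```

Whether the hardware preserves such a payload through every operation is the separate, non-uniform part that the standardization question points at.
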