Staying with Float64, see if the runtime comes way down when you prescale
the data using prescale(x) = x * 2.0^64.

Guessing your values to be less than 10^15 in magnitude, and assuming the
worst case at the small end: so long as the base-10 exponent of your largest
data value stays below ~70, scaling by a constant is a good strategy. It
works because multiplying by a power of two is exact and only shifts values
up out of the subnormal range, and it is safe whenever the largest of the
data values is not itself large.
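
For concreteness, a minimal sketch of that prescaling (the data here is
made up for illustration; only prescale itself comes from the suggestion
above):

    # Multiplying by a power of two is exact in binary floating point: it
    # only shifts the exponent, so no precision is lost while nothing
    # overflows.  It lifts tiny magnitudes out of the subnormal range
    # (below ~2.2e-308), where hardware arithmetic can be much slower.
    prescale(x) = x * 2.0^64
    unscale(x)  = x * 2.0^-64      # exact inverse, also a power of two

    data = rand(10^6) * 1e-290     # made-up data near the underflow edge
    scaled = map(prescale, data)   # work in the scaled domain ...
    # ... run the time propagation on `scaled` ...
    result = map(unscale, scaled)  # ... undo the scaling at the end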

On Monday, July 13, 2015 at 12:04:32 PM UTC-4, Yichao Yu wrote:
>
> On Mon, Jul 13, 2015 at 11:39 AM, Jeffrey Sarnoff 
> <jeffrey...@gmail.com> wrote: 
>
> Thanks for sharing your view about denormal values. I hope what I said 
> didn't seem like I want to get rid of them completely (and if it did 
> sound like that, I didn't mean it...). I haven't read the more detailed 
> analyses of their impact, but I believe you that they are 
> important in general. 
>
> For my specific application, I'm doing time propagation on a 
> wavefunction (one that can decay). For my purposes, there are many other 
> sources of uncertainty and I'm mainly interested in how the majority 
> of the wavefunction behaves. Therefore I don't really care about the 
> actual value of anything smaller than 10^-10, but I do want it to 
> run fast. Since this is a linear problem, I can also scale the values 
> by a constant factor to make underflow less of a problem. 
>
> > I have not looked at the specifics of what is going on ... 
> > Dismissing denormals is particularly dicey when your functional data 
> > flow is generating many denormalized values. 
> > 
> > Do you know what is causing many values of very small magnitude to 
> > occur as you run this? 
> > 
> > Is the data holding them explicitly?  If so, and you have access to 
> > preprocess the data, and you are sure that the software cannot 
> > accumulate, reciprocate, exponentiate, etc. those values, clamp them 
> > to zero and then use the data. 
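
A sketch of that clamping pass (the cutoff borrows the 10^-10 tolerance
mentioned earlier in this thread; the function name is made up):

    # Flush anything whose magnitude is below the cutoff to zero up front,
    # so the computation never touches subnormal inputs.  The cutoff must
    # sit safely above the subnormal threshold (~2.2e-308) and safely
    # below the smallest magnitude the application cares about.
    clampsmall(x, cutoff=1e-10) = abs(x) < cutoff ? zero(x) : x

    data = [1.0, 3.0e-12, -5.0e-200, 0.5]
    data = map(clampsmall, data)   # -> [1.0, 0.0, 0.0, 0.5]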
> > 
> > Does the code operate as a denormalized value generator? If so, a small 
> > alteration to the order of operations may help. 
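
A toy illustration of that point (mine, not from the thread): a long
product of small decay factors underflows step by step, while reordering
the same computation into the log domain keeps every intermediate normal:

    # Naive accumulation of many small factors walks straight through the
    # subnormal range and finally underflows to exactly zero:
    factors = fill(1e-30, 20)
    p = prod(factors)          # intermediates go subnormal, result is 0.0

    # The log-domain ordering generates no subnormal intermediates at all;
    # exponentiate or rescale only at the very end:
    logp = sum(log, factors)   # == 20 * log(1e-30), an ordinary Float64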
> > 
> > 
> > 
> > On Monday, July 13, 2015 at 9:45:59 AM UTC-4, Jeffrey Sarnoff wrote: 
> >> 
> >> Cleve Moler's discussion is not quite as "contextually invariant" as 
> >> William Kahan's and James Demmel's.  In fact, the numerical analysis 
> >> community has made an overwhelmingly strong case that, roughly 
> >> speaking, one is substantively better off letting denormalized 
> >> floating point values be used whenever they arise than saving those 
> >> extra cycles at the cost of abruptly shoving those values to zero and 
> >> losing the smoothness of gradual underflow.  And this holds widely 
> >> for floating-point-centered applications and libraries. 
> >> 
> >> If the world were remade with each sunrise by fixed-bitwidth floating 
> >> point computations, supporting denormals would be like making house 
> >> calls with a few numerical vaccines to everyone who relies on those 
> >> computations to inform expectations about non-trivial work with 
> >> fixed-bitwidth floating point types.  It does not wipe out all forms 
> >> of numerical untowardness, and some will find the vaccines more 
> >> prophylactic than others; still, the analogy holds. 
> >> 
> >> We vaccinate many babies against measles even though some of them 
> >> would never have been exposed to that disease .. and for those who 
> >> forgot why, not long ago the news was about a Disney vacation disease 
> >> nexus and how far it spread -- then California changed its law to 
> >> make it more difficult to opt out of childhood vaccination. 
> >> Having denormals there when the values they cover arise brings a 
> >> benefit that parallels the good in that law change. 
> >> The larger social environment gets better by growing stronger, and 
> >> that can happen because something that had been bringing weakness 
> >> (disease, or bad consequences from subtle numerical misadventures) 
> >> no longer operates. 
> >> 
> >> There is another way denormals have been shown to matter -- but the 
> >> argument above ought to help you feel at ease with deciding not to 
> >> move your work from Float64 to Float32 just to avoid values that 
> >> hover around the smaller magnitudes realizable with Float64.  That 
> >> sounds like a headache, and it would not change the theory in a way 
> >> that makes things work (or work at all).  Recasting the approach to 
> >> the solving or transforming at hand to use integer values would move 
> >> the work away from any cost and benefit that accompany denormals. 
> >> Other than that, thank your favorite floating point microarchitect 
> >> for giving you greater throughput with denormals than everyone had a 
> >> few design cycles ago. 
> >> 
> >> I would like their presence without measurable cost .. just not 
> >> enough to dislike their availability. 
> >> 
> >> On Monday, July 13, 2015 at 8:02:13 AM UTC-4, Yichao Yu wrote: 
> >>> 
> >>> > As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. 
> >>> > However, I couldn't get it working without #11604[2]. Inline 
> >>> > assembly in llvmcall is working on LLVM 3.6 though[3], in case 
> >>> > it's useful for others. 
> >>> > 
> >>> 
> >>> And for future reference I found #789, which is not documented 
> >>> anywhere AFAICT.... (will probably file a doc issue...) 
> >>> It also supports runtime detection of CPU features, so it should be 
> >>> much more portable. 
> >>> 
> >>> [1] https://github.com/JuliaLang/julia/pull/789 
> >>> 
> >>> > 
> >>> > [1] https://gist.github.com/simonbyrne/9c1e4704be46b66b1485 
> >>> > [2] https://github.com/JuliaLang/julia/pull/11604 
> >>> > [3] https://github.com/yuyichao/explore/blob/a47cef8c84ad3f43b18e0fd797dca9debccdd250/julia/array_prop/array_prop.jl#L3 
>  
> >>> > 
>
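
(If the flush-to-zero support referenced above is what Julia exposes as
set_zero_subnormals / get_zero_subnormals, usage looks roughly like the
sketch below; that mapping to #789 is my assumption, not something stated
in the thread.)

    # Enable flush-to-zero for a performance-critical section, then
    # restore the previous mode.  The setting changes the current thread's
    # floating point environment, not just one array or function.
    old = get_zero_subnormals()
    set_zero_subnormals(true)        # treat subnormal values as zero
    try
        # ... run the denormal-heavy propagation here ...
    finally
        set_zero_subnormals(old)     # back to IEEE gradual underflow
    end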
