Matt,

Your second simulation confirms what I said:

The standard deviation in thickness from point to point in a stack of N tapes generally increases as the square root of N (typical statistical behavior).

Now follow that through, using, for example, Grant Bunker's formula for the distortion caused by a Gaussian distribution:

(mu x)_eff = mu x_0 - (mu sigma)^2 / 2

where sigma is the standard deviation of the thickness.

So if sigma goes as the square root of N and x_0 goes as N, the correction term (mu sigma)^2/2 grows as N, exactly as mu x_0 does. The fractional attenuation of the measured absorption therefore stays constant, and the shape of the measured spectrum stays constant. There is thus no reduction in the distortion of the spectrum from measuring additional layers.
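
To make the scaling explicit, here is a minimal sketch (mu = 1 and the single-tape values x1 and sigma1 are purely illustrative): with x_0 = N*x1 and sigma = sqrt(N)*sigma1, the fractional correction (mu sigma)^2 / (2 mu x_0) reduces to mu*sigma1^2 / (2*x1), with no N dependence.

   # a sketch of the scaling in Bunker's formula; mu, x1, and sigma1 are
   # illustrative single-tape values, not measured numbers
   from math import sqrt
   mu, x1, sigma1 = 1.0, 0.9, 0.3
   for N in (1, 5, 25):
       x0    = N * x1                             # total thickness grows as N
       sigma = sqrt(N) * sigma1                   # spread grows as sqrt(N)
       print(N, (mu * sigma)**2 / (2 * mu * x0))  # same fractional correction each time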

Your pinholes simulation, on the other hand, is not the scenario I was describing. I agree it is better to have more thin layers rather than fewer thick layers. My question was whether it is better to have many thin layers compared to fewer thin layers. For the "brush sample on tape" method of sample preparation, that is closer to the question we face when we prepare a sample. Our choice is not to spread a given amount of sample over more tapes, because we're already spreading as thin as we can. Our choice is whether to use more tapes of the same thickness.

We don't have to rerun your simulation to see the effect of using more tapes of the same thickness. All that happens is that the average thickness and the standard deviation in your tables get multiplied by the number of layers.
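
For example, a quick sketch of that rescaling, using the (mean, std) values from the 10% pinhole table in your message below, reproduces the first table that follows:

   # a sketch of the rescaling, using the (mean, std) values from the 10%
   # pinhole table in Matt's message below; both get multiplied by N_layers
   matt_10pct = [(1, 0.900, 0.300), (5, 0.900, 0.135), (25, 0.900, 0.060)]
   for nlayers, ave, std in matt_10pct:
       print('%5i  %6.3f  %6.3f' % (nlayers, nlayers * ave, nlayers * std))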

So now the results are:

For 10% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |  10.0      |    0.900      |    0.300          |
#     5    |  10.0      |    4.500      |    0.675          |
#    25    |  10.0      |   22.500      |    1.500          |

For 5% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |   5.0      |    0.950      |    0.218          |
#     5    |   5.0      |    4.750      |    0.485          |
#    25    |   5.0      |   23.750      |    1.100          |

For 1% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |   1.0      |    0.990      |    0.099          |
#     5    |   1.0      |    4.950      |    0.225          |
#    25    |   1.0      |   24.750      |    0.500          |

As before, the standard deviation increases as the square root of N. Using a cumulant expansion (admittedly slightly funky for such a broad distribution) necessarily yields the same result as the Gaussian case: the shape of the measured spectrum is independent of the number of layers used! And as it turns out, an exact calculation (i.e. not using a cumulant expansion) yields the same independence.
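
Here is a sketch of that exact calculation (assuming uncorrelated pinholes with fraction f per layer, and illustrative values of f, mu, and the layer thickness t): the beam-averaged transmission through N layers factors into per-layer averages, so the measured (mu x)_eff is exactly N times the single-layer value, and the distortion relative to the absorption of the average thickness is the same for every N.

   # sketch of the exact result, assuming uncorrelated pinholes: the beam
   # average <exp(-mu*x_total)> factors into per-layer averages
   from math import exp, log
   f, mu, t = 0.10, 1.0, 1.0    # illustrative pinhole fraction, mu, layer thickness
   for N in (1, 5, 25):
       mux_eff  = -N * log((1 - f) * exp(-mu * t) + f)  # measured absorption
       mux_true = N * (1 - f) * mu * t                  # absorption of the average thickness
       print(N, mux_eff / mux_true)                     # identical ratio for every N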

So Lu and Stern got it right. But the idea that we can mitigate pinholes by adding more layers is wrong.

--Scott Calvin
Faculty at Sarah Lawrence College
Currently on sabbatical at Stanford Synchrotron Radiation Laboratory



On Nov 24, 2010, at 6:05 AM, Matt Newville wrote:

Scott,

> OK, I've got it straight now. The answer is yes, the distortion from
> nonuniformity is as bad for four strips stacked as for the single strip.

I don't think that's correct.

> This is surprising to me, but the mathematics is fairly clear. Stacking
> multiple layers of tape rather than using one thin layer improves the
> signal to noise ratio, but does nothing for uniformity. So there's
> nothing wrong with the arguments in Lu and Stern, Scarrow, etc.--it's
> the notion I had that we use multiple layers of tape to improve
> uniformity that's mistaken.

Stacking multiple layers does improve sample uniformity.

Below is a simple simulation of a sample of unity thickness with
randomly placed pinholes.  First this makes a sample that is 1 layer
of N cells, with each cell either having thickness of 1 or 0.  Then it
makes a sample of the same size and total thickness, but made of 5
independent layers, with each layer having the same fraction of
randomly placed pinholes, so that total thickness for each cell could
be 1, 0.8, 0.6, 0.4, 0.2, or 0.  Then it makes a sample with 25
layers.

The simulation below is in Python. I hope the code is straightforward
enough that anyone interested can follow. The way in which pinholes are
randomly selected by the code may not be obvious, so I'll say here that
the "numpy.random.shuffle" function is like shuffling a deck of cards,
and works on its array argument in-place.
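
(The script itself is not included here; the sketch below is a
reconstruction from the description above. Only make_layer, ncells,
ph_frac, and format are names taken from the snippets quoted further
down; the cell count and the exact format string are assumptions.)

   # a sketch reconstructed from the description above; make_layer, ncells,
   # ph_frac, and format match the snippets quoted below, the rest is illustrative
   import numpy

   def make_layer(ncells, ph_frac):
       "one layer of ncells cells, each of thickness 1 or 0, with ph_frac pinholes"
       layer = numpy.ones(ncells)
       layer[:int(ph_frac * ncells)] = 0.0   # mark the pinholes first,
       numpy.random.shuffle(layer)           # then shuffle them in place
       return layer

   ncells = 100000
   format = '# %5i    |  %4.1f      |    %.3f      |    %.3f          |'
   for ph_frac in (0.10, 0.05, 0.01):
       print('# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |')
       for nlayers in (1, 5, 25):
           # nlayers layers, each 1/nlayers thick, so the total thickness is 1
           sample = numpy.zeros(ncells)
           for i in range(nlayers):
               sample = sample + make_layer(ncells, ph_frac) / nlayers
           print(format % (nlayers, 100 * ph_frac, sample.mean(), sample.std()))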

For 10% pinholes, the results are:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |  10.0      |    0.900      |    0.300          |
#     5    |  10.0      |    0.900      |    0.135          |
#    25    |  10.0      |    0.900      |    0.060          |

For 5% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |   5.0      |    0.950      |    0.218          |
#     5    |   5.0      |    0.950      |    0.097          |
#    25    |   5.0      |    0.950      |    0.044          |

For 1% pinholes:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |   1.0      |    0.990      |    0.099          |
#     5    |   1.0      |    0.990      |    0.045          |
#    25    |   1.0      |    0.990      |    0.020          |

Multiple layers of smaller particles give a more uniform thickness
than fewer layers of larger particles. The standard deviation of the
thickness goes as 1/sqrt(N_layers). In addition, one can see that 5
layers of 5% pinholes is about as uniform as 1 layer with 1% pinholes.
Does any of this seem surprising or incorrect to you?
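
As a quick analytic cross-check (assuming independent, binomially
distributed pinholes): N layers of thickness 1/N with pinhole fraction
f give a per-cell thickness standard deviation of sqrt(f*(1-f)/N),
which matches the simulated numbers above to within rounding:

   # analytic cross-check, assuming binomial pinhole statistics per layer
   from math import sqrt
   for f in (0.10, 0.05, 0.01):
       for nlayers in (1, 5, 25):
           print('f=%.2f  N=%2i  std=%.3f' % (f, nlayers, sqrt(f * (1 - f) / nlayers)))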

Now let's try your case, comparing 1 layer of thickness 0.4 with 4
layers, each of thickness 0.4, with 1% pinholes.  In the code below,
the simulation would look like
   # one layer of thickness=0.4
   sample = 0.4 * make_layer(ncells, ph_frac)
   print format % (1, 100*ph_frac, sample.mean(), sample.std())

   # four layers of thickness=0.4
   layer1 = 0.4 * make_layer(ncells, ph_frac)
   layer2 = 0.4 * make_layer(ncells, ph_frac)
   layer3 = 0.4 * make_layer(ncells, ph_frac)
   layer4 = 0.4 * make_layer(ncells, ph_frac)
   sample = layer1 + layer2 + layer3 + layer4
   print format % (4, 100*ph_frac, sample.mean(), sample.std())

and the results are:
# N_layers | % Pinholes | Ave Thickness | Thickness Std Dev |
#     1    |   1.0      |    0.396      |    0.040          |
#     4    |   1.0      |    1.584      |    0.080          |

The sample with 4 layers had its average thickness increase by a
factor of 4, while the standard deviation of that thickness only
doubled.  The sample is twice as uniform.
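
That factor of two is just what adding independent layers predicts: the
variances add, so the standard deviation of the stack grows as
sqrt(n_layers) while the mean grows as n_layers. A quick sketch of the
arithmetic (values chosen to match the table above):

   # sketch of the scaling argument, assuming independent layers
   from math import sqrt
   t, f, n = 0.4, 0.01, 4                  # layer thickness, pinhole fraction, layers
   std1 = t * sqrt(f * (1 - f))            # single layer std:  ~0.040
   print(n * t * (1 - f), sqrt(n) * std1)  # stack of 4: mean ~1.584, std ~0.080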

OK, that's a simple model, and it treats thickness only.  Lu and Stern
did a more complete analysis and made actual measurements of the effect
of thickness on XAFS amplitudes.  They *showed* that many thin layers
are better than fewer thick layers.

Perhaps I am not understanding the points you're trying to make, but I
think I am not the only one confused by what you are saying.

--Matt


