On Fri, 22 Jun 2018 23:11:47 +0000, "Poul-Henning Kamp" <[email protected]> wrote:
> --------
> In message <[email protected]>,
> Florian Teply writes:
>
> >Let me see if I understand that correctly:
> >Assuming no adjustments have been made to the instrument in between,
> >with calibration history I could work out the actual drift rate of
> >the instrument. Of course, the more datapoints I have, the more
> >accurate that estimation might be. Then I could use that to project
> >into the future to see when it will likely drift out of spec.
>
> Provided it has a uniform low-ish drift rate.
>
> That is probably something you will only see on the high-end kit.
>
> Low range kit will probably be dominated by all other sources of
> noise.
>
Sure, if the error is dancing all over the place, the simple term
"drift" is misleading, as it implies a mainly linear behaviour.

> >And, additionally, given that I worked out the drift, I could even
> >try and post-process the data taken with that instrument and
> >correct for the drift we just established, if this extra precision
> >actually has some value to someone. After all, in the end it's
> >just the removal of some systematic error I just happen to know
> >after the analysis.
>
> If you do that, your uncertainty calculations just got a fair bit
> more complicated, because now you also have to factor in the
> uncertainty of the drift rate.
>
> But yes, that is basically how all cal-labs without Josephson
> Junctions estimate their Volt and Resistance.
>
Agreed, that's another can of worms one probably doesn't want to
open unless necessary. As far as I understand it by now, this
approach would be a possible solution if a) sufficient data exists
to support it (including a reasonable model of the drift) AND b) the
manufacturer specs are not sufficient for the task at hand (for the
unit to be calibrated and/or the unit taken as reference).

> >But would the evaluation of drift rate still be possible if
> >adjustments have taken place?
>
> No.
>
> Until you have solid evidence to the contrary, you have to assume
> that adjustments changed the drift rate.
>
That's my gut feeling as well. At least my guts are well
calibrated ;-)

> One interesting idea in this space is to maintain per instrument
> Kalman filters on the calibration results.
>
> The predictions+uncertainty you get out will be way better than the
> formal uncertainty calculation, because the Kalman filter does not
> factor in risks (ie: things that _could_ happen) until they
> actually _do_ happen, whereas the manufacturers' specs have an
> allowance for anything they could imagine or have heard about
> (jumps, thermals, air pressure, etc.)
>
> The main trick is that if you ever see your formal uncertainty dip
> below the Kalman filter, you know something is seriously wrong,
> and in the meantime the filter *probably* tells you what the
> situation is much more precisely than the formal numbers.
>
Granted, I have only just read up a bit on Kalman filters, and I'm
surely far from understanding the major part of it. But doesn't that
need a reasonable a priori model of what the uncertainty is composed
of? Or at least a reasonable starting point for the covariance
matrix? That sounds to me like a chicken-and-egg problem: using some
procedure to get an idea of something else that's needed as its
input... I'll definitely have to wrap my head around it, as it does
indeed sound like a promising technique. So lots more to read up
on...
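Just to check my own understanding, here is a minimal sketch (in
Python, with made-up noise figures and a hypothetical calibration
history) of what such a per-instrument filter might look like. The
state is [offset, drift rate], so it also yields the drift-rate
estimate from the start of this mail, with an uncertainty attached,
and a naive projection of when the spec limit would be reached:

import numpy as np

# Minimal per-instrument Kalman filter sketch. State: [offset in
# ppm, drift rate in ppm/day]; observations: calibration results
# (measured offset against the reference, in ppm). All noise figures
# are made up for illustration; in practice they would have to come
# from the instrument's actual history.

def kalman_step(x, P, z, dt, q_drift=1e-9, r_cal=0.25):
    """One predict/update cycle: dt days since the last calibration,
    z the new calibration result. q_drift is the assumed process
    noise on the drift rate (variance per day), r_cal the assumed
    variance of a single calibration point."""
    # Predict: offset grows linearly with the current drift estimate.
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    Q = np.array([[0.0, 0.0],
                  [0.0, q_drift * dt]])   # drift rate random walk
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: we observe the offset only.
    H = np.array([[1.0, 0.0]])
    y = z - (H @ x)[0]                    # innovation
    S = (H @ P @ H.T)[0, 0] + r_cal      # innovation variance
    K = (P @ H.T / S).ravel()            # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H[0])) @ P
    return x, P

# Hypothetical history: (days since first cal, measured offset, ppm)
history = [(0, 0.0), (180, 1.1), (365, 2.0), (540, 3.2)]

x = np.array([history[0][1], 0.0])  # start: first result, zero drift
P = np.diag([1.0, 1e-3])            # rough initial covariance (guess)
t_prev = history[0][0]
for t, z in history[1:]:
    x, P = kalman_step(x, P, z, dt=float(t - t_prev))
    t_prev = t

print("offset: %6.2f ppm (1-sigma %.2f ppm)" % (x[0], P[0, 0] ** 0.5))
print("drift : %8.4f ppm/day (1-sigma %.4f)" % (x[1], P[1, 1] ** 0.5))

# Naive out-of-spec projection from the current estimate; it ignores
# the growth of the uncertainty, so take it with a grain of salt.
spec_limit_ppm = 8.0                # hypothetical spec limit
if x[1] > 0:
    days = (spec_limit_ppm - x[0]) / x[1]
    print("spec limit reached in roughly %.0f days" % days)

If I got that right, the interesting part is the covariance P: it
grows between calibrations and shrinks with every new point, which
would be exactly the "predictions+uncertainty" you describe.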
> >> The biggest advantage to inhouse calibration is that you can do
> >> it much more often, and therefore don't need to do it as
> >> precisely as the cal-lab, because the sticker only needs a date
> >> some months ahead.
> >>
> >One more question to that, as it's not entirely clear to me what
> >exactly you mean here:
> >Do I take it correctly that in case I would be willing to re-cal
> >in, say, three months instead of next year, the instruments used
> >for calibration do not necessarily need to be as precise as if I
> >needed calibration good for one year? Or did you mean that I could
> >afford coming closer to the manufacturer spec limit? Or something
> >else altogether?
>
> All of the above, but you need to do the math to show that it is
> ok.
>
> If you stick to the manufacturer's instructions, they did the math,
> and you won't need to.
>
> If you invent your own schedule, you need to do the math to find
> out the consequences for your uncertainty.
>
As I take it, doing calibration more often than the recommended
calibration period is safe as long as I don't want to claim less
uncertainty than what the spec sheet for the instrument gives. Then
again, there's no real need to re-cal more often than that either.
But I'd still be interested in how one actually comes up with
numbers, that is, how to do the uncertainty analysis for different
situations, and how far one could get without detailed knowledge of
the equipment... (I've put a small sketch of what I mean towards the
end of this mail.) Again, more to find and digest.

> Linear scaling is a good first approximation for drift, but not
> for other sources of noise or failure.
>
Failure I wouldn't even include here, unless it's parametric
failure, which would then be the result of some drift process.

> >I guess a good calibrator like a Fluke 5730A might do the trick as
> >well for the mentioned measurement range, if low currents don't
> >matter too much. And it might be easier to get nowadays, as even
> >well-known distributors don't quote a 3458A anymore. Might try to
> >get a quote for a Fluke 8508 and a Keithley 2002 as well...
>
> If you are in the EU, I think you need to buy the 3458A directly
> from Keysight for RoHS reasons.
>
Apparently, datatec in Germany also still has a few left in stock,
priced at 8500 Euros. That is actually cheaper than I had expected.
A single calibration run for our modular DC instruments would
already nearly buy one, and one 3458A would nearly be sufficient to
calibrate the stuff (just missing a few resistors, which at the
required 0.1% accuracy aren't that hard to come by...). I should
discuss that with my boss...

> I don't think I'm qualified to recommend specific equipment.
>
> I only mentioned the 3458A because it is generally seen as the
> "gold standard", and it is a damn good instrument in my own
> personal experience.
>
That's my interpretation as well. I simply looked for anything with
a specified uncertainty below 10 ppm, and the three units mentioned
were all I could find from the major test equipment manufacturers.
And there's probably a reason why even Keithley lists a 3458A as
equipment needed for calibration of some of their high-end meters.
As it stands, the most likely solution hardware-wise in my case is
repurposing a 3458A we already have as a calibration reference. No
need to buy more fancy equipment, just some money for regular
calibration of the beast. Even though it often is easier to come up
with 100k Euros for new equipment than with 10k for regular
maintenance of what we already have :-(
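And here is the back-of-the-envelope sketch promised above for the
"do the math" part, under two assumptions of mine: that the spec
splits into a time-independent floor plus a drift allowance that
scales linearly with the re-cal interval, and that everything
combines in quadrature. All numbers are made up:

import math

# Hypothetical split of a 1-year spec, plus the reference's own
# uncertainty; all figures invented for illustration.
u_floor_ppm = 2.0       # time-independent part of the spec
u_drift_1y_ppm = 6.0    # drift allowance over one year
u_reference_ppm = 1.0   # uncertainty of the in-house reference

def projected_uncertainty_ppm(interval_days):
    """Uncertainty claimable at the end of a re-cal interval,
    assuming linear drift scaling and quadrature combination."""
    u_drift = u_drift_1y_ppm * interval_days / 365.0
    return math.sqrt(u_floor_ppm ** 2 + u_drift ** 2
                     + u_reference_ppm ** 2)

for days in (90, 180, 365):
    print("%3d days: %.2f ppm" % (days, projected_uncertainty_ppm(days)))

With figures like these, a three-month interval either buys a
smaller claimed uncertainty or leaves headroom for a less precise
reference, which I suppose is the "all of the above" in your answer.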
All the best,
Florian
