I wish you'd made this response to the list. Here's another thought, to address your overall problem:
* Find out what flavor of splines FEMM uses (there are subtleties).
* Figure out what Scilab calls them (naturally, everyone uses different
  names -- particularly when there are multiple countries and/or
  disciplines involved).
* Get your bad-data-point problem beaten to the ground.
* Use Scilab to find the minimum number of points, and their locations,
  which, when smoothed with the right kind of splines, gives an acceptably
  close match to your B-H curve. Determining what's "acceptably close" is
  your job, of course.

You may find that minimizing the number of points helps the speed, you may
not -- it may just be that FEMM's numerical solver doesn't like a large
second derivative in the B-H data, in which case you'll just need to smile
and accept long solver times.

On Mon, 2016-04-04 at 08:43 -0800, scilab.20.browse...@xoxy.net wrote:
> Tim,
>
> (sorry for the earlier "Tom"; finger trouble, not name-recognition
> problems :)
>
> > If your FEM software can't handle non-monotonic data, can it handle
> > noisy data?
>
> That's a good (and open) question.
>
> The software is FEMM (www.femm.info), which is widely held in high esteem.
>
> When entering B-H curve values into the material definitions for those
> with non-linear magnetic properties, the program simply won't allow you to
> enter non-monotonic data; hence my need to remove these discontinuities
> before I can progress my project.
>
> Once given data it will accept, it then manipulates the data to allow it
> to ... "do its thang". It's easier (and far more accurate) for me to quote
> David Meeker here than try to paraphrase him:
>
> "Since FEMM interpolates between your B-H points using cubic splines, it
> is possible to get a bad curve if you haven’t entered an adequate number
> of points. “Weird” B-H curves can result if you have entered too few
> points around relatively sudden changes in the B-H curve. FEMM “takes care
> of” bad B-H data (i.e.
> B-H data that would result in a cubic spline fit that is not
> single-valued) by repeatedly smoothing the B-H data using a three-point
> moving average filter until a fit is obtained that is single-valued. This
> approach is robust in the sense that it always yields a single-valued
> curve, but the result might be a poor match to the original B-H data. It
> may also be important to note that FEMM extrapolates linearly off the end
> of your B-H curve if the program encounters flux density/field intensity
> levels that are out of the range of the values that you have entered. This
> extrapolation may make the material look more permeable than it “really”
> is at high flux densities. You have to be careful to enter enough B-H
> values to get an accurate solution in highly saturated structures, so that
> the program is interpolating between your entered data points rather than
> extrapolating. Also in the nonlinear parameters box is a parameter,
> φ_hmax. For nonlinear problems, the hysteresis lag is assumed to be
> proportional to the effective permeability. At the highest effective
> permeability, the hysteresis angle is assumed to reach its maximal value
> of φ_hmax. This idea can be represented by the formula:
>
>     φ_h(B) = ( µ_eff(B) / µ_eff,max ) φ_hmax"
>
> So the first goal is to get data that FEMM will accept. After that there
> are other criteria that may require the raw data to be further
> manipulated.
>
> I've demonstrated that 'inconveniently real-world' data can interact
> badly, in (at least) two different ways, with FEMM's manipulation and use
> of that data in its tensor equations:
>
> 1) Unless the final points approach a slope of 0 very closely, it can
> extrapolate the data to produce ludicrously high B values. (E.g. the
> 'best' I achieved was 67.7 T!)
> 2) Unless the data values work with FEMM's interpolations rather than
> against them, it can result in extraordinarily long processing times as
> the Newton-Raphson iterations oscillate and converge very slowly.
>
> The former is fairly easy to deal with by simply appending a few
> artificial values to the dataset that ensure a very-near-0 slope.
>
> The latter is far harder to get a handle on. One of the reasons I'm being
> quite adamant about making as little change to the real data as possible
> is to give me a chance to determine the absolute minimum requirements for
> ensuring accurate simulations with minimal run times.
>
> The typical approach to the latter problem seems to be to approximate the
> data using some nicely behaved mathematical approximation (1-exp(),
> tanh(), or similar), and feed those values in as the material definition.
>
> The problem with that is that it throws away much of the subtlety of the
> data, which for silicon-steel laminates doesn't matter, but for
> amorphous/nano-crystalline metals can mask important parts of their
> desirable behaviours.
>
> Indeed, my entire foray into learning Scilab is to try to establish a good
> method for pre-processing these kinds of curves so that it strikes the
> balance between horrendous run times and using artificial substitutes that
> ignore those subtleties.
>
> And yes. You've certainly given me food for thought :)
>
> > I'm taking "smoothing" to mean "filtering" here.
> >
> > In general the filtering problem is to decide on the characteristics of
> > your desired signal, the characteristics of your noise, and then make
> > some mathematical transformation that minimizes the effects of anything
> > with noise-like characteristics while retaining as much of anything with
> > desired-signal-like characteristics.
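As an aside: the repeated three-point moving average that Meeker describes is easy to replicate yourself, so you can see in advance what FEMM's clean-up will do to your H data before you hand it over. A minimal sketch -- in Python with NumPy purely for illustration (Scilab has direct equivalents), with made-up H values, and not claiming to be FEMM's exact code:

```python
import numpy as np

def smooth_until_monotonic(h, max_passes=100):
    """Repeatedly apply a 3-point moving average (endpoints held fixed)
    until successive H values are strictly increasing.
    Illustrative sketch of the behaviour Meeker describes for FEMM;
    not FEMM's actual implementation."""
    h = np.asarray(h, dtype=float)
    for _ in range(max_passes):
        if np.all(np.diff(h) > 0):        # already single-valued/monotone
            return h
        # 3-point moving average on the interior points
        inner = (h[:-2] + h[1:-1] + h[2:]) / 3.0
        h = np.concatenate(([h[0]], inner, [h[-1]]))
    return h

# Hypothetical noisy, locally non-monotonic H samples:
h_raw = [0.0, 10.0, 9.5, 20.0, 35.0, 34.0, 60.0]
h_fix = smooth_until_monotonic(h_raw)     # strictly increasing afterwards
```

Running this on your real H column, and comparing `h_fix` against `h_raw`, tells you how badly FEMM's own clean-up would distort your curve -- which is exactly the "poor match to the original B-H data" risk Meeker warns about.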
> > This has two salient corollaries: first, you have to know what the
> > characteristics of your signal and noise are; and second, the more that
> > your signal resembles your noise, the less of it you will be able to
> > recover.
> >
> > You can spin off from there in a number of different directions, leading
> > you to a number of different filtering techniques.
> >
> > It appears that you're doing your measurements by varying some
> > controlled parameter along the vertical axis and measuring along the
> > horizontal axis. If this is the case, then I would start by trying
> > filtering techniques using the vertical axis as your "time" axis.
> >
> > There is no linear filtering technique (i.e., output sample = weighted
> > sum of a vector of input samples) that you can use that guarantees
> > monotonicity for all possible inputs. If you use linear filtering then
> > you need to verify monotonicity by experiment. If your FEM program is
> > messed up by noise as much as by non-monotonicity, then this is not a
> > bad thing -- it's just an indication that you're filtering heavily
> > enough.
> >
> > If the problem really, truly is monotonicity, then take the derivative
> > at each point, and snip out the points that go the wrong direction. I
> > suspect this won't lead to joy, but it'll meet your stated objectives.
> >
> > If the problem with the smoothing filters that you've tried is that they
> > work well in some spots and not in others, then consider using a filter
> > whose properties vary depending on where on the axis your output sample
> > is located. This isn't something that you'll find pre-packaged --
> > you'll have to whomp something up using more primitive functions of
> > Scilab.
> >
> > I could go on and on -- sorry for coming to a disorganized end here, but
> > I hope I'm giving you some food for thought.
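P.S. The "snip out the points that go the wrong direction" step I suggested above might look like the following. Again Python/NumPy for illustration only (the Scilab translation is mechanical), with hypothetical B-H values; it keeps a point only if both B and H strictly increase over the last point kept:

```python
import numpy as np

def drop_nonmonotonic(b, h):
    """Greedily keep only the (B, H) samples whose B and H both strictly
    increase over the last kept sample -- i.e. snip out the points where
    the derivative goes the wrong direction. Illustrative sketch only."""
    b = np.asarray(b, dtype=float)
    h = np.asarray(h, dtype=float)
    keep = [0]                             # always keep the first sample
    for i in range(1, len(h)):
        if h[i] > h[keep[-1]] and b[i] > b[keep[-1]]:
            keep.append(i)                 # monotone in both coordinates
    return b[keep], h[keep]

# Hypothetical noisy B-H samples (two back-steps in H):
b_raw = [0.0, 0.5, 1.0, 1.2, 1.4, 1.6]
h_raw = [0.0, 100.0, 90.0, 250.0, 240.0, 800.0]
b2, h2 = drop_nonmonotonic(b_raw, h_raw)
# The (1.0, 90.0) and (1.4, 240.0) samples are snipped out.
```

As I said, discarding real samples this way may not lead to joy -- but the result is guaranteed monotonic, which is the hard constraint FEMM imposes.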
--
Tim Wescott
www.wescottdesign.com

Control & Communications systems, circuit & software design.
Phone: 503.631.7815 Cell: 503.349.8432

_______________________________________________
users mailing list
users@lists.scilab.org
http://lists.scilab.org/mailman/listinfo/users