Dear All,

    Thank you very much for giving this your consideration.

    Both the example below and a comment by Mr. Hauberg indicate that
the documentation should point out that a power series cannot be run
across an integrand's singularity; for the function below, the poles
at x = +/- i/5 limit the series' radius of convergence to 1/5.  The
example below is only one of infinitely many functions that the
proposed routine cannot be expected to handle properly.  This is
always the chance taken when dealing with data.
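
The convergence limit can be seen directly from partial sums of the
integrand's Taylor series; here is a minimal sketch (in Python with
NumPy rather than Octave, and the `partial_sum` helper is purely
illustrative):

```python
import numpy as np

# 1/(1 + 25 x^2) = sum_{k>=0} (-25 x^2)^k.  The poles at x = +/- i/5
# limit the radius of convergence to 1/5, even though the function
# itself is perfectly smooth on the whole real line.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)

def partial_sum(x, terms):
    """Sum the first `terms` terms of the Taylor series about 0."""
    k = np.arange(terms)
    return np.sum((-25.0 * x**2) ** k)

inside = abs(partial_sum(0.1, 60) - f(0.1))    # |x| < 1/5: error ~ 0
outside = abs(partial_sum(0.3, 60) - f(0.3))   # |x| > 1/5: blows up
```

Inside the radius the partial sums settle onto the function almost
immediately; outside it they grow without bound, no matter how many
terms are kept.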

    This piqued my interest in just how many data points it would
take to get five-place accuracy for various functions, for both the
proposed routine and the trapz routine cited below.  I found that for
the atan example below, the proposed took 23 points while trapz took
98; for exp(-x) on [0,1] it was 5 and 41; for exp(x) on [0,1] it was
5 and roughly 500; and for exp(-x^2) on [0,1] it was 7 and somewhere
between 100 and 200, respectively.  Also, as mentioned previously,
the trapz method is all but useless for simple powers of x.
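
For what it is worth, the trapz side of that count can be reproduced
along these lines (a sketch in Python with NumPy; the `points_needed`
helper and the 5e-6 relative tolerance are my assumptions, so the
counts will shift with the exact accuracy criterion used):

```python
import numpy as np

def trapz_equispaced(F, a, b):
    """Composite trapezoidal rule for equispaced samples F on [a, b]."""
    h = (b - a) / (len(F) - 1)
    return h * (F[0] / 2.0 + F[1:-1].sum() + F[-1] / 2.0)

def points_needed(f, a, b, exact, tol=5e-6):
    """Smallest number of equispaced samples at which the trapezoidal
    rule agrees with `exact` to within relative `tol`."""
    for n in range(2, 5001):
        x = np.linspace(a, b, n)
        if abs(trapz_equispaced(f(x), a, b) - exact) / abs(exact) < tol:
            return n
    return None  # not reached within 5000 samples

# integral of exp(-x) on [0, 1] is 1 - 1/e
n = points_needed(lambda x: np.exp(-x), 0.0, 1.0, 1.0 - np.exp(-1.0))
```
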
    I don't know how quad got into the discussion, since this is
about dealing with given data points, rather than functions for
which the evaluation points are adjustable.
    I was a bit surprised at how well this worked over the
singularity, but that has indeed been my experience with such things.

    If by higher dimensionality you mean two or three orthogonal
dimensions, that would be a very useful, but awfully time-consuming,
undertaking; I just do not have the time for that now.  If you mean
vectorized, so that several data sets could be done at the same time,
that can be done.  However, all the sets would have to have exactly
the same number of elements.  I had it that way once, but found I
never took advantage of it, and the feature was just overhead.  If
you think this might be useful, I could try to resurrect it.
    With respect to a standard name, it has been far too long since I
either took or taught this sort of thing, and a search of my numerical
analysis texts* came up empty.  This is part of why I wish to submit
this; I don't see this simple but useful tool in practice.
    In any case, "integ1es" sounds fine to me.

    Yes, I would be happy to have it in the public domain.  I will expand
the documentation, and resubmit.


    O.K., so the name of the character ø is "ø".  Who would have
thought?  Thank you for the link; I enjoyed it.  From my youth in a
partly Danish-American culture I knew the pronunciation, but, judging
from the website, I was pronouncing it wrong at the end of a word, if
my ears are right.




Thanks again,

dmelliott



* Survey of Numerical Analysis, John Todd, ed., McGraw-Hill;
  Handbook of Mathematical Functions, Abramowitz and Stegun, eds.,
     National Bureau of Standards;
  etc.





----- Original Message ----- 
From: Carlo de Falco
To: dmelliott
Cc: Søren Hauberg ; [email protected]
Sent: Tuesday, June 02, 2009 3:15 PM
Subject: Re: [OctDev] Developer Registration Request



On 2 Jun 2009, at 21:12, dmelliott wrote:

>
> Dear Søren,
>
>    More thoughts:
>
>
>    The idea of this routine is that it integrates existing data in
> its most usual form.  That is, it does not need a function to call,
> and, since, for better or for worse, most data comes on a domain of
> equally sized steps, that is what it uses.  This is an attempt to
> make it compatible with most experimental data and with the outputs
> of other routines, e.g. sales by week, Fourier-analysed amplitudes
> by frequency, actuarial data by age, etc.  The partial-interval
> integrals, including cumulative ones, can be achieved by looping or
> otherwise manipulating the endpoint values.
>
>    The best way to see its worth is to compare it to the available
> routine for existing data, by editing the "integrator_verifier" to
> use the "trapz" routine.  The latter is only first order, and even
> with as many as 500 points on [0,1] it yields a relative error of
> 2.01e-6 for the next higher order, the second.  For orders higher
> than second it is even more useless.  There probably should be a
> caution appended to the instructions about this.

Douglas,

It seems to me there is one detail you are missing here:
experimental data are not always polynomials; in fact, most often you
don't even have a way to tell whether they are smooth, if all you
have are equispaced samples.

Using a high-order quadrature rule only makes sense if you know a
priori that your data can be accurately approximated by a polynomial,
e.g. by Taylor expansion.  If you have no a priori knowledge about
the smoothness of your data, using a low-order quadrature rule is a
safer bet.  For example, consider this well-known case:

 >> x = linspace (-1, 1, 10);
 >> f = @(x) 1./(1+25.*x.^2);
 >> F = f(x);
 >> integrator (F, -1, 1)
ans =  0.47972
 >> sum ((F(2:end)+F(1:end-1)).*diff(x)/2)
ans =  0.54437

while the exact value of the integral is

 >> .4 * atan (5)
ans =  0.54936
 >> quad(f, -1, 1)
ans =  0.54936

so in this case even the simple trapezoidal rule beats your 10th-order
method, because the latter amounts to approximating f by a high-order,
and thus highly oscillating, polynomial.
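
The same comparison in Python/NumPy terms, for readers without Octave
at hand (integrating the degree-9 interpolating polynomial stands in
for a 10-point high-order rule, which is an assumption about what
`integrator` does internally):

```python
import numpy as np

# Runge's function: smooth on the real line, but its equispaced
# polynomial interpolant oscillates wildly near the interval ends.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
x = np.linspace(-1.0, 1.0, 10)
F = f(x)
h = x[1] - x[0]

exact = 0.4 * np.arctan(5.0)                       # ~0.54936

# Composite trapezoidal rule on the same 10 samples (~0.54437).
trap = h * (F[0] / 2.0 + F[1:-1].sum() + F[-1] / 2.0)

# Integrate the degree-9 polynomial through all 10 points; this mimics
# a 10-point Newton-Cotes rule and reproduces the ~0.47972 figure.
P = np.polyint(np.polyfit(x, F, 9))
high_order = np.polyval(P, 1.0) - np.polyval(P, -1.0)

# Here the low-order rule lands much closer to the exact value.
```
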

So I believe your function would actually be more useful if the order
of the quadrature rule were given as an input option rather than set
automatically from the size of the sample.
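
Such an interface might look roughly like this (a Python/NumPy sketch
of a hypothetical routine, not the one under discussion; only orders
1 and 2 are filled in):

```python
import numpy as np

def integrate_samples(F, a, b, order=1):
    """Integrate equispaced samples F (a 1-D NumPy array) over [a, b]
    with a composite rule of the requested order:
    1 = trapezoidal, 2 = Simpson.  Hypothetical interface."""
    n = len(F) - 1                     # number of subintervals
    h = (b - a) / n
    if order == 1:
        return h * (F[0] / 2.0 + F[1:-1].sum() + F[-1] / 2.0)
    if order == 2:
        if n % 2:
            raise ValueError("Simpson's rule needs an even number "
                             "of subintervals")
        return h / 3.0 * (F[0] + 4.0 * F[1:-1:2].sum()
                          + 2.0 * F[2:-2:2].sum() + F[-1])
    raise NotImplementedError("only orders 1 and 2 are sketched here")
```

Calling it as, say, `integrate_samples(np.exp(np.linspace(0, 1, 101)),
0, 1, order=2)` lets the caller choose the order explicitly instead of
having it inferred from the number of samples.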

HTH,
c.


_______________________________________________
Octave-dev mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/octave-dev
