Yes, Bo, that looks like a fair statement of the problem, in terms of
least-squares minimization.
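
Just to make it concrete for myself, here is a minimal brute-force
sketch along those lines (a plain scan over C, not a Kalman filter;
the names mean, rss and fitstep below are my own, not library verbs).
For any fixed C the least-squares A and B are just the two segment
means, so it's enough to evaluate your expression at every C and keep
the smallest residual sum of squares:

H=: (]#0:),-#1:          NB. Bo's step function
mean=: +/ % #

NB. rss: x is a candidate C, y is the series X.
NB. With C fixed, the least-squares A and B are the segment means.
rss=: 4 : 0
 a=. mean x {. y
 b=. a -~ mean x }. y
 +/ *: y - a + b * (# y) H x
)

NB. fitstep: try every C, return the best C;A;B
fitstep=: 3 : 0
 cs=. 1 + i. _1 + # y               NB. candidate change points
 c=. cs {~ (i. <./) cs rss"0 1 y    NB. C with the smallest residual
 c ; (mean c {. y) ; (mean c }. y) - mean c {. y
)

For example, with simulated data along the lines of

   X=: (0.2 * 0.5 -~ 200 ?@$ 0) + 5 + 2 * 200 H 120
   fitstep X

I'd expect it to come back with C at (or very near) 120, A close to 5
and B close to 2. It's just one pass per candidate C, nothing clever,
but it should give me a baseline to compare a Kalman (or CUSUM-style)
approach against.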


On Thu, Jun 21, 2012 at 4:46 PM, Bo Jacoby <bojac...@yahoo.dk> wrote:
> Consider the step functions
>
>    H=:(]#0:),-#1:
>    10 H 0
> 1 1 1 1 1 1 1 1 1 1
>    10 H 3
> 0 0 0 1 1 1 1 1 1 1
>    10 H 10
> 0 0 0 0 0 0 0 0 0 0
> The problem is to find constants A,B,C to minimize the expression
>
>    +/*:X-A+B*(#X)H C
> Right?
> -Bo
>
>
>
>>________________________________
>> From: Ian Clark <earthspo...@gmail.com>
>>To: Programming forum <programming@jsoftware.com>
>>Sent: Thursday, 21 June 2012 14:34
>>Subject: Re: [Jprogramming] Kalman filter in J?
>>
>>Thanks, Bo and Ric. Yes, I'd tried searching jwiki on "Kalman" and
>>Pieter's page was the only instance.
>>
>>I neglected to mention that in the most general case I can't really be
>>confident that X1 and X2 have the same distribution function, let
>>alone the same variance. But looking at it again, I see that under the
>>restrictions I've placed the problem simplifies immensely to fitting a
>>step-function H to: X=: K+G+H. If I can just do that, I'll be happy
>>for now.
>>
>>Repeated application of FFT should allow me to subtract the noise
>>spectrum F(G), or at least see a significant change in the overall
>>spectrum emerge after point T, and that might handle the more general
>>cases as well.
>>
>>Anyway it's simple-minded enough for me, and worth a try. FFT is,
>>after all, "fast" :-)
>>
>>But won't an even faster transform do the trick, such as (+/X)? On the
>>above model, X performs a drunkard's walk around a value M1 until some
>>point T, after which it walks around M2. Solution: simply estimate M1
>>and M2 on an ongoing basis.
>>
>>I get the feeling I ought to be searching on terms like "edge
>>detection", "step detection" and CUSUM.
>>
>>Anyway, there's enough here to try.
>>
>>On Thu, Jun 21, 2012 at 10:28 AM, Ric Sherlock <tikk...@gmail.com> wrote:
>>> Hi Ian,
>>>
>>> A quick search of the J wiki finds this:
>>> http://www.jsoftware.com/jwiki/Stories/PietdeJong
>>> Sounds like he might have what you're after?
>>>
>>> Cheers,
>>> Ric
>>>
>>> On Thu, Jun 21, 2012 at 4:33 PM, Ian Clark <earthspo...@gmail.com> wrote:
>>>> Can anyone help? Has anyone written a Kalman filter in J?
>>>>
>>>> I'm not a specialist in either statistics or control theory, so I'm
>>>> only guessing that a Kalman filter is what I need, though I do have a
>>>> passing acquaintance with the terms stochastic control and linear
>>>> quadratic Gaussian (LQG) control. I am aware that a "Kalman filter"
>>>> (like ANOVA) is more a topic than a black box.
>>>>
>>>> So let me explain what I want it for.
>>>>
>>>> I have a time series X which I am assuming can be modelled like this:
>>>>
>>>> X=: K + G + (X1,X2)
>>>>
>>>> where
>>>>
>>>> K is constant
>>>> G is Gaussian noise
>>>> X1 is a random variable with mean: M1 and variance: V1
>>>> X2 is a random variable with mean: M2 and variance: V2
>>>>
>>>> Typically X is a sequence of sensor readings, but may also be
>>>> measurements from a series of user trials conducted on a working
>>>> prototype, which suffers a design-change at a given point T.
>>>>
>>>> Simplifying assumptions (which unfortunately I may need to relax in due 
>>>> course):
>>>>
>>>> (a) X is not multivariate
>>>> (b) X1 and X2 are Gaussian
>>>> (c) V1=V2 (only the mean value changes, not the variance).
>>>>
>>>> The problem:
>>>>
>>>> 1. Estimate T=: 1+#X1 -- the point at which X1 gives way to X2.
>>>>
>>>> 2. Given T, estimate (M2-M1) -- the "underlying improvement", if any,
>>>> of the change to the prototype.
>>>>
>>>> 3. (subcase of 2.) Given T, test the null hypothesis: M1=M2, viz that
>>>> there has been no underlying improvement.
>>>>
>>>> 4. Estimate U=: #X2 -- the minimum number of samples needed after T in
>>>> order to achieve 1-3 above with 95% confidence.
>>>>
>>>> In other words, detect the signal-in-noise: M1-->M2, and do so in 
>>>> real-time.
>>>>
>>>> Because of 4 -- the need to estimate T and (M2-M1) on an ongoing
>>>> basis -- I can't do a randomised block design. I gather that a Kalman
>>>> filter, or some sort of adaptive filter, will handle this problem.
>>>>
>>>> But maybe something simpler will turn out good enough?
>>>>
>>>> Supposing I can get hold of a "black box" Kalman filter, I propose to
>>>> test it out on generated data and compare its performance to some
>>>> simple-minded approach, like estimating M1 / M2 from a simple moving
>>>> average of the last U samples, or applying the F-test to 2 sets of U
>>>> samples taken either side of T.
>>>>
>>>> But since the technique is intended to be published, or at least
>>>> critically scrutinised (and maybe incorporated in a software product),
>>>> I'd rather depend on a state-of-the-art packaged solution than
>>>> reinvent the wheel: a large and very well-turned wheel, it appears to me.
>>>>
>>>> Ian Clark
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
