Hi Filip,

On Wed, Nov 22, 2023 at 14:24 <filip.domi...@gmail.com> wrote:
>
> Convolution is often used for smoothing noisy data; a typical use will keep 
> the 'same' length of data and may look like this:
>
> >    convol = 2**-np.linspace(-2,2,100)**2;
> >    y2 = np.convolve(y,convol/np.sum(convol), mode='same') ## simple smoothing
> >    ax.plot(x, y2, label="simple smoothing", color='g')

First, it might be useful to write the convolution kernel
`convol` as a normal distribution in its full glory, so that its
standard deviation is explicit. Currently it's just "some" bell
curve which goes down to 0.0625 at its edges (in fact, 2**-u**2 is
a Gaussian with standard deviation 1/sqrt(2 ln 2) ≈ 0.85 in the
units of the linspace).

What you might want to achieve is to use a *different kernel* at the
edges. It seems you're trying to use, at the edges, a version of the
kernel normalised over only those positions which overlap with the
domain of `y`.

Before diving into this: it would possibly be safest to just discard
the positions in the support of `y` which suffer from the boundary
effect. In your example, that would mean cutting away, say, 20
positions at each side; the width of the cut can be chosen
systematically from the standard deviation of the kernel. This
produces reliable results without much headache or subsidiary
conditions, and it makes the results much easier to interpret, since
no extra maths has to be kept in mind.
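A minimal sketch of that trimming (the data `x`, `y` are made up here; the cut of 20 samples is the illustrative figure from above — in practice a few kernel standard deviations):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 500)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)   # noisy test data

convol = 2.0**-np.linspace(-2, 2, 100)**2
y2 = np.convolve(y, convol / convol.sum(), mode='same')

cut = 20                                 # positions spoiled by the boundary
x_ok, y_ok = x[cut:-cut], y2[cut:-cut]   # keep only the reliable interior
```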

> >    convol = 2**-np.linspace(-2,2,100)**2;
> >    norm = np.convolve(np.ones_like(y),convol/np.sum(convol), mode='same')
> >    y2 = np.convolve(y,convol/np.sum(convol), mode='same')/norm ## simple smoothing
> >    ax.plot(x, y2, label="approach 2", color='k')

`norm` holds the sums of the "truncated" Gaussians. Dividing by it
should be equivalent to using, at the edges, truncated kernels which
are normalised *on their truncated domain*. So it should implement
what I described above. This can be checked by applying the method to
an artificial test function, most easily a constant input: the result
should be exactly constant, also at the edges. I would be interested
whether this works; looking at your "real world" result I am not
entirely sure I am not mistaken at some point here.
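Such a check might look like this (a self-contained sketch of the constant-input test):

```python
import numpy as np

convol = 2.0**-np.linspace(-2, 2, 100)**2
y = np.ones(500)                         # constant test input

norm = np.convolve(np.ones_like(y), convol / np.sum(convol), mode='same')
y2 = np.convolve(y, convol / np.sum(convol), mode='same') / norm

print(np.allclose(y2, 1.0))              # True: edges stay exactly constant
```

Without the division by `norm`, the output would droop toward zero at the edges.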

> In my experimental work, I am missing this behaviour of np.convolve in a 
> single function. I suggest this option should be accessible numpy under the 
> mode="normalized" option. (Actually I believe this could have been set as 
> default, but this would break compatibility.)

I consider your proposed solution too special-purpose for numpy
itself. If needed, it can be implemented in a few lines in "numpy
user space".
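Such a user-space helper might look like this (the name `convolve_normalized` is my own invention, not an existing numpy API):

```python
import numpy as np

def convolve_normalized(y, kernel):
    """Like np.convolve(y, kernel, mode='same'), but each output position
    is divided by the kernel mass actually overlapping the data, so the
    edges are not biased toward zero."""
    k = np.asarray(kernel, dtype=float)
    k = k / k.sum()
    norm = np.convolve(np.ones(len(y)), k, mode='same')
    return np.convolve(y, k, mode='same') / norm
```

For example, `convolve_normalized(np.full(50, 3.0), np.ones(5))` returns an array that is 3.0 everywhere, including the edges.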

Best,
Friedrich
_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/