# this is after all summing (using multiplication by cos instead of addition):
(Pdb) p longvec[0]
8.797795339894686
# this is supposed to be the same, and isn't
(Pdb) p (inserting_spectrum @ extracting_ift)[0]
0.5488135039273254
0413
(Pdb) complex_shortspace_freqs = fftfreq(complex = True,
(Pdb) p inserting_spectrum
array([ 9.13008935-2.23523530e-15j, -1.13139897-1.72422981e-01j,
0.11932716-6.18590050e-01j, -0.47619445-1.42077247e+00j,
0.60106394-6.17441233e-01j, 0.4321097 -8.80843476e-01j,
0.9665+1.15314024e+00j, -0.4839133 -2.10053038e-01j,
i've drafted a different interpolator, like before. the test is not
passing. my usual approach has been to practice examining the
arithmetic by hand.
i have a weird inhibition against doing things that could appear
unreasonably laborious to others. could make an interesting (and quite
useful)
i'd like to change the last test so that it might pass, next.
with all this verbosity i might lose the file! attaching it
# AGPL-3 Karl Semich 2022
import numpy as np
Notes on wavelets
# I naively tried using other functions than sinusoids for a DFT matrix.
# Given complex multiplication performs a scale + rotation operation on the model samples,
#
I have fourier.py with tests for "extracting a repeating wave" that
are not passing simply because the wave is sampled very differently
from its model
0614
0615
maybe i'll switch back to vis.py; oh that's inhibited too
0616
i actually have a small EEG going. when i try to start working on this
i
I'm pretty inhibited atm and smaller task feels nice. I don't really
know. I'm sure I can return to this eventually.
I also have tasks that provide more return than this, but it's pretty
rare for me to stabilize work on a task, so I'm likely to consider
near this to support that concept.
i've removed the wavelet things
i can now make the assertions pass, in theory, if i produce longvec
based on a sinusoid signal. i have not done this.
this is the content i ended up remembering:
Notes on wavelets
# I naively tried using other functions than sinusoids for a DFT matrix.
#
- pull out the wavelet stuff, and keep everything sinusoids
- collect notes about why wavelets and step functions would need a
different matrix structure
the reason to not switch to this structure is slow progress. let's
keep moving with the fourier approach.
1024
With this approach, I'm thinking of how I noted the frequencies are
basically the same: one is just evaluated in more places.
So the situation where wavelet parameters are placed into a matrix
could likely be changed to meet the goal.
It's a little confusing in that complex numbers are
0951
I'm on 321 -> assert np.allclose(longvec, inserting_spectrum @ inserting_ift).
It looks like the two vectors are actually roughly matching:
(Pdb) p shortspace_freqs / longspace_freqs
array([nan, 23.09213165, 23.09213165, 23.09213165, 23.09213165,
23.09213165,
battery is at 45% !
0541
i fixed a mistake and two more assertions are passing.
it's paused on the failing one:
269 rfreqs15t = fftfreq(repetition_samples=15, complex=False)
270 rfreqs15f = fftfreq(15, complex=False)
271 irft_from_15 = create_freq2time(freqs=rfreqs15f)
---
what's feeling reasonable for me here is having the real case function
differently from the imaginary case, in that (for now) the user must
.view(float) the complex frequency data to process it.
the idea of this code would be to bundle the functionality into a
class, so it doesn't matter too
this is a 4x4 inverse dft matrix:
array([[ 0.25-0.j , 0.25-0.j , 0.25-0.j , 0.25-0.j ],
[ 0.25-0.j , 0. +0.25j, -0.25+0.j , -0. -0.25j],
[ 0.25-0.j , -0.25+0.j , 0.25-0.j , -0.25+0.j ],
[ 0.25-0.j , -0. -0.25j, -0.25+0.j , 0. +0.25j]])
each column is a
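that matrix can be reproduced directly from its definition; a minimal sketch, assuming the usual 1/N-normalized inverse DFT convention (the names here are illustrative, not from fourier.py):

```python
import numpy as np

# entry (j, k) of the inverse DFT matrix is exp(2j*pi*j*k/N) / N
N = 4
n = np.arange(N)
ift = np.exp(2j * np.pi * np.outer(n, n) / N) / N
print(ift.round(3))

# right-multiplying a spectrum by this matrix inverts np.fft.fft
x = np.random.random(N)
assert np.allclose(np.fft.fft(x) @ ift, x)
```

printing it gives the same 0.25-magnitude entries as above, up to the sign of the zero imaginary parts.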
I feel confident there is a resolution to the real-domain matrix situation.
I would like to add a complex-domain test to the assertions, to help
me comprehend that.
assert np.allclose(randvec2fft, np.linalg.solve(ift16.T, randvec))
import pdb; pdb.set_trace()
rfreqs15t = fftfreq(repetition_samples=15, complex=False)
[1729]
1731
In looking at the assertions, I notice I'm not testing against complex
frequency data. This is likely why it's not
1723
the sinusoid data actually has the same issue as the step function data.
the "real-domain" matrix produces output with some imaginary components added.
(Pdb) p freq_data @ create_freq2time(freqs=fftfreq(repetition_samples=4, complex=True))
array([0.04032738-2.47552542e-17j,
(Pdb) p create_freq2time(freqs=fftfreq(repetition_samples=4, complex=True)).round(3)
array([[ 0.25-0.j , 0.25-0.j , 0.25-0.j , 0.25-0.j ],
[ 0.25-0.j , 0. +0.25j, -0.25+0.j , -0. -0.25j],
[ 0.25-0.j , -0.25+0.j , 0.25-0.j , -0.25+0.j ],
[ 0.25-0.j , -0.
> {Boss steps into his office and calls up Rebel Worker 3.
>
> Boss: "Rebel Worker 3, Machine Learning Marketer has shown me how we
> can automate some of the mind control."
maybe Boss doesn't know about machine learning yet! maybe it is just
his shell corps
}
> 1134
> i ran this test and i did not find the balancing to happen; i may have
> made a mistake. it would be worthwhile checking a passing assert.
> (Pdb) p (np.concatenate([extended_freq_data[1:14],
> extended_freq_data[15:]]) @ complex_wavelet(COSINE,
>
I need to be able to think things I choose, to consider things I
choose, to try things I choose, to explore things I choose.
Without daydreams, our plans are simply the orders of others.
I need to be able to do things I choose, to be alive.
My last project regarding the shielding use was an exploration of
using a repetitive noise source as an emitter, to reduce radio
circuitry needed to test shielding efficiency.
I expect this work to not be useful for a shielded enclosure. I expect
it instead to be good practice, and possibly help me connect to more
useful behaviors that communities value more.
I imagine my past goals of making electromagnetic signals more clear
and transparent, when I work on it.
This thread represents an advancement of my "triangle" task, or an
attempt to advance it.
Rather than describing a triangle, I am developing utility and
understanding around basic Fourier transforms.
These are important to me for past struggles I have had trying to
produce cheap for anybody to
I am a human being who wanted to live without luxury to aid the world to
my fullest. I had a unique view of the world from refraining from
urges and observing normality with questioning, all my life.
2022-11-17
1038
1047
i'm poking at it, and running into a complexity trying to make an
effective freq2time matrix.
for a matrix to work on its own, it functions by multiplying the
positive frequencies. the larger matrix has parts that multiply the
negative.
the negative frequencies have data
0458 !
Where I left off yesterday, after giving complex phase to my square
wave, the matrix looks better, but there are still components that
aren't conjugates.
(Pdb) p inverse_mat[:,6]
array([-0.0625-0.0625j, 0.0625-0.0625j, -0.0625+0.0625j, -0.0625-0.0625j,
0.0625-0.0625j,
according to numpy my square wave matrix is singular.
something for me to think about!
i'm guessing a big issue is that my wavelet function doesn't have a
way to shift phase.
i don't really know much about wavelets; it's a word i remember from
being a teenager, learning about these things in
for generating a real-domain forward fourier matrix, i'm working on
massaging the inverse of the complex-domain inverse matrix. atm it
isn't working. it's 1228. i have another task building in me. i am
close to stabilising this!
1238
1241
1242
it's quite hard to stay here. i've changed some
(Pdb) p np.fft.fft(randvec2irfft).real
array([0.5488135 , 0.71518937, 0.60276338, 0.54488318, 0.4236548 ,
0.64589411, 0.43758721, 0.891773 , 0.96366276, 0.38344152,
0.79172504, 0.52889492, 0.56804456, 0.92559664, 0.07103606,
0.92559664, 0.56804456, 0.52889492, 0.79172504,
current failing file
# AGPL-3 Karl Semich 2022
import numpy as np
# TODO: shift max_freq up with min_freq, when max_freq not specified
def fftfreq(freq_count = None, sample_rate = None,
min_freq = None, max_freq = None,
dc_offset = True, complex = True, sample_time = None,
[insert short sequence of random and crazy interjections-grammatical]
confused around interjections-grammatical
0721 i have an appointment today. i'll focus on increasing my
likelihood of making it.
I'm thinking it could make sense to just do it complex-domain, and let
the matrices be mutated to real after the fact.
I'll look at what's needed for this one.
hmmm ... i could create the complex frequencies from the real ones
pretty easily. but why then would i make real-only frequencies?
174 rfreqs15t = fftfreq(repetition_samples=15, complex=False)
175 rfreqs15f = fftfreq(15, complex=False)
176 irft15 = create_freq2time(freqs=rfreqs15f)
177 rft15 = create_time2freq(15, freqs=rfreqs15t)
178 randvec2rtime15 = randvec[:15] @ irft15
179
blrgh!
I've gotten by that assertion. The resulting interface could be more
clear, but it works for now. I pass None as the freq_count and it
calculates it from repetition_samples to hold a single repetition.
0613
The next assertion appears to fail because the inversion of the 1-to-1
matrix is a pseudoinverse.
shortspace_freqs = fftfreq(len(randvec), complex = False, dc_offset = True)
longspace_freqs = fftfreq(len(randvec), complex = False, dc_offset = True,
                          repetition_samples = short_duration)
assert np.allclose(longspace_freqs,
                   shortspace_freqs * short_duration / len(randvec))
0455
i noticed mistakes in the calculation of max_freq when freq_count was
odd, and it now looks like this:
else:
    min_freq = freq_sample_rate / freq_count
if max_freq is None:
    #max_freq = freq_sample_rate / 2
    max_freq = freq_count * min_freq / 2
if
(Pdb) p longvec[:16]
array([0.5488135 , 0.5488135 , 0.5488135 , 0.5488135 , 0.5488135 ,
0.71518937, 0.71518937, 0.71518937, 0.71518937, 0.60276338,
0.60276338, 0.60276338, 0.60276338, 0.54488318, 0.54488318,
0.54488318])
(Pdb) p (inserting_spectrum @ inserting_ift)[:16]
i'm having a lot of difficulty continuing to poke at this! [or
anything kind of!]
i've mutated the file
0416
i've changed the calculation of the default max_freq so that it is
done as a ratio of min_freq. i think this makes more correct
interpretations of subsignals i.e. every frequency in the
NOTE: I recall I was thinking the max_freq should slide up to higher
than 0.5 when there is a min_freq higher than 1/n (and no max_freq
specified).
notes
- the test code indexes by floor to produce the signal
- the comparison code transforms the signal out of time sequence, then
back in at a
I'm trying out testing extracting a repeating signal from a larger one,
and there is a bug that looks debuggable!
I generated a repeating signal by indexing the random vector with the
floor of modular time, and added a custom wavelet parameter to the
fourier functions to model it, and passed a square
still using this list as my github
attached code is untested which means the bugs i always make are unaddressed
i tried to implement the interface properties i mentioned this morning
# AGPL-3 Karl Semich 2022
import numpy as np
def fftfreq(freq_count, sample_rate = None, min_freq = None,
Here's what I have right now for fftfreq. I'm excited to have factored
fftfreq out and added optional minimum and maximum frequency bounds.
The rest of fourier.py is 1 email back.
This interface does not facilitate the usecase of having minimum and
maximum frequency bounds and simply desiring as
oh :D I need to transpose the matrix when passing it to
np.linalg.solve , because np.linalg.solve does Ax = b right-to-left,
not xA = b left-to-right like I have been doing.
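a tiny self-contained check of that transpose detail (random data standing in for the fourier matrices):

```python
import numpy as np

np.random.seed(0)
A = np.random.random((4, 4))
x = np.random.random(4)
b = x @ A  # row-vector convention: x A = b

# np.linalg.solve solves A y = b (column vector on the right), so
# recovering x from the row-vector system needs A transposed:
#   x A = b   <=>   A.T x = b
x_recovered = np.linalg.solve(A.T, b)
assert np.allclose(x_recovered, x)
```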
0857 .
Now it finishes and produces the exact random data, and it's just like
the fourier.py test .
The original test I
the comparison is failing, [although the sample_idcs look right now,]
so given fourier.py passes its internal tests, the difference must lie
in how waveform is being sampled compared to the assumptions that
fourier.py is making
0843
i can go into both functions and again examine the first few
it's hard to look at all the parts of the test code before the matrix approach
maybe i can pull out just the test data
there were a lot of references to graphics too
...0827 i'm working in a new file
0833 i'm kind of funny. things are funny.
notes debugging new file
seeding random to 0, set
replying to this to find fourier.py easily.
On 11/13/22, Undescribed Horrific Abuse, One Victim & Survivor of Many
wrote:
> I was lucky and ran into these functions:
> - np.linalg.pinv
> - np.linalg.solve
> - np.linalg.lstsq
>
> The .solve and .lstsq functions are faster and more accurate than
Flower [being measured]: "Do not torture me! I have a family! This
hurts so much!"
Cyborg Torturer: "Torture is what is right. There is nothing I can do
about this. Scream louder, flower!"
Cyborg Torturer noted "3.4 centimeters" in their notebook, and took
out an unearthly digital camera to
Vivisected Cyborg Zombie Torturer roams the pretty field, brandishing
a ruler that looks like it may have come from a stylistic horror
movie.
Cyborg Torturer: "These flower dimensions will suffer."
Vivisected Cyborg Zombie Torturer shambles up to a flower. As they
move, their mechanized
given this project is [easier than flat_tree], i might try it a little
longer, unsure.
maybe i can patch fourier.py into that random data test, and then
maybe look at my fan noise!
I was lucky and ran into these functions:
- np.linalg.pinv
- np.linalg.solve
- np.linalg.lstsq
The .solve and .lstsq functions are faster and more accurate than the
.inv and .pinv functions.
The .pinv and .lstsq functions compute the minimal least squares
solutions to equations involving
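a hedged illustration of the difference, with random matrices rather than the fourier ones:

```python
import numpy as np

np.random.seed(0)

# square, invertible system: solve and inv agree; solve is preferred
# (no explicit inverse is formed)
A = np.random.random((4, 4))
b = np.random.random(4)
assert np.allclose(np.linalg.solve(A, b), np.linalg.inv(A) @ b)

# overdetermined system: lstsq and pinv both give the minimal
# least-squares solution
M = np.random.random((6, 4))
c = np.random.random(6)
x_lstsq, *_ = np.linalg.lstsq(M, c, rcond=None)
x_pinv = np.linalg.pinv(M) @ c
assert np.allclose(x_lstsq, x_pinv)
```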
but i need to get this far a lot! and get farther!
my longer term project was the flat_tree thing for random-write
append-only media.
now that i've successfully made some forensic math, maybe i can
preserve all the trash or whatever using that.
here's fourier.py for finding later.
next step
i'm so surprised that i got this far
it's so different to be in a state of mind of having accomplished
something, rather than working on it
it's pretty clear that it should be able to recover the random data.
if it doesn't, it would be due to a mistake, not due to a faulty idea.
it's not clear that it can be used to do useful
ok, to debug this, i gotta understand it again
i'll seed numpy's random number generator so the numbers are deterministic.
16 B def test():
17 -> np.random.seed(0)
18 randvec = np.random.random(16)
(Pdb) n
> /shared/src/scratch/fourier.py(18)test()
-> randvec =
ok, so let's try the matrix inverse approach to making a fourier
transform, first with a normal transform
there are n frequencies, and each of these is evaluated at n sample offsets
>>> freqs = np.fft.fftfreq(6)
>>> freqs
array([ 0.        ,  0.16666667,  0.33333333, -0.5       , -0.33333333,
       -0.16666667])
[it's a big accomplishment for me here to find a solution.
my reason for pursuing this is to work on my inhibition around
completing novel algorithms.
i hope to take this further, and also to write many other novel algorithms.
i'm stepping away for now to do other things.]
Further avenues on that approach could include an analysis of the
noise to account for it immediately, or an analysis of the recursive
solution, which would involve discerning solving for the measurement
correction given the error after combining with an imperfect
destructive wave.
Another
trying to return to work
{possible additional information is that cognitive and logical and
[likely?] decision-tree concepts have similar properties like the
distributive law and consideration as spaces.}
ok so this thing is harmonizing when aligned because of 240 deg/s,
which is -120 deg/s, for
okay I actually took some _off-list notes_ (omigod right) to try to
hold these different things in my mind at once
the 300 degree/sample signal is already destructively interfering with
itself, so unrotating it by 300 degree/sample removes the
interference, making it aligned and turning it into a
goal: construct two different signals, that have frequencies aligned
with the different frequencies in fftfreq(6) . then consider the
product of multiplying these by a further third frequency, as if in a
fourier transform, with 10% or such difference between the sampling
rate and the signals.
>>> wonky_points = signal(np.array([0,1,2,3])*1.1+1.1) * np.exp(np.array([0,1,2,3]) * 1.1 * 2j * np.pi * np.fft.fftfreq(4)[1])
what's going on here is the product of two complex sinusoids.
>>> abs(zero_based_wonky_points), np.angle(zero_based_wonky_points)*180//np.pi
(array([1., 1., 1.,
[some typing lost]
>>> signal(np.array([0,1,2,3])*1.1+1.1) * np.exp(np.array([0,1,2,3]) * 1.1 * 2j * np.pi * np.fft.fftfreq(4)[2])
array([-0.95105652-0.30901699j, -0.95105652-0.30901699j,
-0.95105652-0.30901699j, -0.95105652-0.30901699j])
There it works with 4 samples: a single phase
In the fourier transform, the theorised signals are sampled at their peaks.
The def signal(x) function above provides a signal that is +1 at the 0
sample, and -1 at the 1 sample. Multiplying this by the fourier
signal gives +1 and +1 at each sample:
>>> signal(np.array([0,1])) *
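the def signal(x) is elided in the quote; a sketch assuming it is the Nyquist-rate complex sinusoid exp(1j*pi*x), which is +1 at sample 0 and -1 at sample 1:

```python
import numpy as np

def signal(x):
    # assumed form: the Nyquist-rate complex sinusoid,
    # +1 at sample 0 and -1 at sample 1
    return np.exp(1j * np.pi * x)

n = np.array([0, 1])
# forward-transform factor for the Nyquist bin, exp(-2j*pi*n*f)
fourier = np.exp(-2j * np.pi * n * np.fft.fftfreq(2)[1])
product = signal(n) * fourier
assert np.allclose(product, [1, 1])
```

the two -1s cancel, so the product is +1 at both samples, which is the alignment described above.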
websearch result: "If inverse of the matrix is equal to its transpose,
then it is an orthogonal matrix"
0508
https://en.wikipedia.org/wiki/Orthogonal_matrix
> In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real
> square matrix whose columns and rows are orthonormal
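for the complex DFT matrix the corresponding property is unitarity (conjugate transpose equals inverse); a quick check with the symmetrically normalized matrix:

```python
import numpy as np

# the 1/sqrt(N)-normalized DFT matrix is unitary: F^H F = I
N = 6
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
assert np.allclose(F.conj().T @ F, np.eye(N))
```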
it's seemed good for me to be able to spend so much time pursuing
this, despite such
i'm just poking at the details of what a dft is more.
i found that this expression of moving v into frequency space and back
into time space:
((v * mat).sum(axis = 1) * mat.T).sum(axis=1) / 2
can be expressed
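that sum expression can be checked against plain matrix products; a sketch with a random mat standing in for the sinusoid matrix:

```python
import numpy as np

np.random.seed(0)
mat = np.random.random((5, 5))
v = np.random.random(5)

# broadcasting identity: (v * mat).sum(axis=1) == mat @ v
assert np.allclose((v * mat).sum(axis=1), mat @ v)

# so the round trip collapses to mat.T @ (mat @ v)
roundtrip = ((v * mat).sum(axis=1) * mat.T).sum(axis=1)
assert np.allclose(roundtrip, mat.T @ mat @ v)
```

the / 2 in the original would then be absorbed by whatever mat.T @ mat evaluates to for the particular sinusoid matrix.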
notes after a little sleep:
- you can see that this should work by imagining the waveform defined
in analytical frequency space, as a sum of sinusoids at precise
frequencies. [makes a clear and accurate fourier transform via many
different methods]. removes aliasing artefacts. [noise can then be
okay it is of course actually a different situation from that
i'm trying to extract only part of the data, which has gotten all mutated.
but now i remember, i found the problem was similar to a different
problem, of simply having extra frequencies.
so this solves the extra frequencies issue, i
so what is that like; what am i actually doing, and why doesn't it work?
i basically am producing fft output that is much larger than the input.
i'm probably multiplying by too few indices, and effectively producing
a rectangular matrix that is not symmetrical and doesn't produce the
why does the fft work?
i don't quite remember, not sure if i ever knew.
it clearly forms a linear recombination of the parts via sin().
each value is converted to an angle and scaled by many indices and frequencies
then what?
why is each operation a scaling by these things?
both the time
1606 i got irft to match numpy [i partly cheated and trimmed the
number of frequencies to match numpy's, skipping more thoroughly
understanding the fourier transform]
now to scale its frequencies to match the waveform's frequencies
freq2period : frequencies are fractions of the total number of
(Pdb) print(inspect.getsource(np.fft.irfft))
@array_function_dispatch(_fft_dispatcher)
def irfft(a, n=None, axis=-1, norm=None):
    a = asarray(a)
    if n is None:
        n = (a.shape[axis] - 1) * 2
    inv_norm = _get_backward_norm(n, norm)
    output = _raw_fft(a, n, axis, True, False,
i was working with micro_ft and micro_ift to make their output
identical to np.fft.fft when the np.fft.fftfreq frequencies were
passed in.
i'm thinking now it makes sense to make the highband behavior the same.
basically, it's the np.fft.fftfreq frequencies for the small waveform ...
i'm on the
it might help to think of the overlap
at some point wavidx == N, the recording indices restart
meanwhile, there are 4 frequencies, that are repeatedly and
continuously restarting.
at wavidx == N, we might want all those frequencies to align, so that
they restart and match the same points. but
maybe i can write formula for the two different sides, and their equivalence
indices * frequencies = indices * frequencies
the sampling indices are arange(N)
the recording indices are (arange(N) * (N * 2 - 1) / (N - 1)) % N
since everything is exp(2 pi i * index * freq), indices * frequencies
this is a weak point for me because it is analytical rather than rote,
so i can engage my triggers much more readily. it uses more storage of
imagined concepts and creative considering.
so what do i have here ...
micro_ift and micro_ft do something similar to a fourier transform in
a highpass way. they can reconstruct a signal that contains those
precise highpass components.
i haven't tried or considered what would make all the parts correct,
but they're not presently
um
so max_period is turned into k in the micro_ functions, and passed to
the exponent.
looks like it's the angle advancement per sample for each sinusoid, so
the frequency is proportional to its inverse.
meanwhile, sample_idcs i believe are frequencies, as a portion of the
total data length
ok, given that linspace, then it looks like
waveform_idx = recording_idx * (N * 2 - 1) / (N - 1)
so
recording_idx = waveform_idx * (N - 1) / (N * 2 - 1)
and the max_period is 4 * (4 - 1) / (4 * 2 - 1) = 12 / 7 = N * (N - 1)
/ (N * 2 - 1)
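those relations can be sanity-checked numerically (N = 4, as above; function names are just for the sketch):

```python
N = 4

def waveform_idx(recording_idx):
    return recording_idx * (N * 2 - 1) / (N - 1)

def recording_idx(waveform_idx):
    return waveform_idx * (N - 1) / (N * 2 - 1)

# the two maps are inverses of each other
assert abs(recording_idx(waveform_idx(2.0)) - 2.0) < 1e-12

# max_period as derived above: N * (N - 1) / (N * 2 - 1)
max_period = N * (N - 1) / (N * 2 - 1)
assert abs(max_period - 12 / 7) < 1e-12
```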
--
i tried using the above to
ok the idxs are wrong
recording idx 0 <==> waveform idx 0
recording idx 1 <==> waveform idx (1.0,3.0)
...
recording idx N <==> waveform idx 4+(2.0,5.0)
the wave is [0,1,2,3]
the recording would be [0,2,1,3] with indices slowly shifting and modulo so like
[0,2,5,7]
this seems to happen with
SO CLOSE
i am so close after years!
just gotta process this extremeness
ok u
so if a waveform is N long
and i want it twice sampled with the last sample offset by 1/2 in the
second sampling, and 1/4 in the first, (or 1 in the second, and 0.5 in
the first, in its own index scale)
uhh so
into index N-1, goes waveform index N-1
into index N/2, goes waveform index
linspace includes the final value. it needs a -1 in its stop parameter o_o
ok, half rate actually doesn't recover anything unless it's off by one sample ;)
# sample something made of sinusoids at funny offsets, using a
different approach for testing
def sample_sinusoids_funny(data, fractional_indices):
    angle_ratios = fractional_indices / data.shape[-1]
    return [
        ((np.exp(2j * sample_idx * np.pi * np.fft.fftfreq(len(data
*
basically i used the same fourier expression, but found it by trial
rather than referring to anything, and verified it in pdb:
>>> data
array([0.17457665, 0.27853706, 0.92643594, 0.9938617 ])
>>> [abs(((np.exp(2j * sample_idx * np.pi * np.fft.fftfreq(len(data * np.fft.fft(data)).sum()) /
nooo this doesn't work because each sinusoid has a different period!
def phase_shift_complex(data, angle_shift):
    mags, angles = np.abs(data), np.angle(data)
    return mags * np.exp(1j * (angles + angle_shift))
- i could take the fft of the signal. this turns it into sinusoids.
- then i could phase shift it? adjust all the angles? for each sample?
- and then do an ifft for each sample
that might work. i would shift it so that sample 0 is at the location
of each point i am sampling.
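a minimal sketch of that plan, a band-limited fractional shift by rotating each FFT bin's phase (the function name is made up):

```python
import numpy as np

def fractional_shift(data, shift):
    # delay a periodic signal by `shift` samples (fractional allowed)
    # by rotating each FFT bin's phase in proportion to its frequency
    freqs = np.fft.fftfreq(len(data))
    spectrum = np.fft.fft(data)
    return np.fft.ifft(spectrum * np.exp(-2j * np.pi * freqs * shift))

# a whole-sample shift matches np.roll for a band-limited signal
x = np.sin(2 * np.pi * np.arange(8) / 8)
assert np.allclose(fractional_shift(x, 1).real, np.roll(x, 1))
```

evaluating the shifted signal at sample 0 gives the interpolated value at each point being sampled.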
a slow way to
so i guess what i would do is actually sample it as if it is made of sinusoids
in order to test for errors in my functions, i would want to do that
with a different implementation than the sinusoid summing stuff i
already wrote >_>
but basically, when i talk about sampling error, and it being off by a
half waveform, it's technically not actually the concatenation of the
two halves separately sampled, unless it is made of summed square
waves, such that slowly shifting keeps the same value.
the math assumes it is made of
i'm thinking here that when i generate the data, i'm breaking the
assumption of value locality in time.
i'm passing random values that have no local correlation.
then, it's effectively using linear combinations of wavelets equal to
the entire signal, and trying to interpolate between these
the first pdb line in previous, i was processing the wrong data by
accident, and copied it by accident
i get the same output when i used 3 instead of 2.5
that's interesting !!!
i tried with a max_period of 2.5 and it reconstructed the 4-long
undersampled data with the two middle samples swapped:
(Pdb) p abs(micro_ift(micro_ft(poorlysampled, 2.5), 2.5, upsample=True))
array([0.40683351, 0.74521147, 0.00094595, 0.1613921 ])
(Pdb) superdata = np.random.random(4)
(Pdb)
the period in the recording will then be half the waveform data
length, and we can add 1 or 0.5 to that, one of those may work
a simple sampling error would be a period that is off by a half sample.
in that situation, the wave would look like one of twice the period
if the wave is at twice the sampling rate with data = np.random.random(8)
then when recorded at half its resolution it will look like