[LAD] Bugfix update of zita-bls1

2024-10-04 Thread Fons Adriaensen
Hello all,

Version 0.4.0 of zita-bls1 is available at



fixing a bug resulting from the update of the zita-convolver library
from major version 3 to 4.

This required a small change in zita-bls1, but that updated version
was not released...

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Fw: Re: that explains why FA is so angry...

2024-08-18 Thread Fons Adriaensen
- Forwarded message from Fons Adriaensen  -

Date: Sun, 18 Aug 2024 21:17:59 +0200
From: Fons Adriaensen 
To: Juan P C 

On Sun, Aug 18, 2024 at 03:03:17PM +, Juan P C wrote:
 
> i was having a private conversation about pulseaudio and he almost got a 
> brain aneurysm. LOL

Full story here:

<http://kokkinizita.linuxaudio.org/linuxaudio/downloads/JAPA_bug.txt>

Ciao,

-- 
FA


- End forwarded message -


[LAD] Re: Pipewire

2024-08-17 Thread Fons Adriaensen
On Sat, Aug 17, 2024 at 03:51:28PM -0400, Marc Lavallée wrote:
 
> What's a professional?

(IMHO)

Someone who is paid for his/her work, the purpose, and sometimes
the methods, of which are defined by someone else rather than
chosen by him/herself.

A professional will have extensive domain-specific knowledge,
may have to work to accepted 'industry standards' and will face
monetary and/or reputational consequences when he/she fails.

In the audio context this could be a sound engineer, or a scientist
or technician using audio equipment and software as an essential part
of his/her work, or someone very familiar with this type of work and
developing the tools used for it.

Professionals tend to use tools that are reliable and predictable.

The latter means that the tool does exactly what it is told to and
nothing else. And that is the reason why I wouldn't consider PW fit
for professional use, unless it can be configured once and for all
to not have a mind of its own.

When I'm recording a live performance, or my output goes to a 50 kW
PA system or a broadcast network, I do not want the system to be
'smart' and reconfigure things just because something new is plugged
in, or because some app thinks it is so important that it needs to
produce some noise to get my attention [1]. Or anything similar.


[1] As actually happened many years ago when some stupid
Gnome CD burner app damaged two very expensive speakers by
sending a 0 dB jingle to a system set up to produce 85 dB 
SPL at -18 dB working level. Since then I've banned all
'desktop' audio, and anything that autoconnects to Jack's
system:* ports.
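The level arithmetic behind that accident is worth spelling out. A
minimal sketch, using only the figures from the footnote above:

```python
# Playback chain aligned so that the nominal working level of -18 dBFS
# produces 85 dB SPL at the listening position (figures from the
# footnote above).
alignment_spl = 85.0    # dB SPL produced at the working level
working_level = -18.0   # dBFS, nominal working level
jingle_level = 0.0      # dBFS, a full-scale 'desktop' jingle

# A 0 dBFS signal sits 18 dB above the working level, so:
jingle_spl = alignment_spl + (jingle_level - working_level)
print(jingle_spl)  # 103.0 dB SPL -- enough to damage speakers (and ears)
```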

Ciao,

-- 
FA



[LAD] Re: Pipewire

2024-08-17 Thread Fons Adriaensen
On Fri, Aug 16, 2024 at 10:24:10PM +0200, Wim Taymans wrote:
 
> (sorry, I'm on holidays and not really much on my computer)

I don't want to spoil your holidays ! Anyway, I'll be on holiday
myself in a week, so all of this will have to wait until late
September.

But meanwhile I'm more confused than ever.

Your last remark:

> For linking ports, you can use any of the jack tools you probably
> already use (jack_matchmaker) or jack_connect.

seems to suggest that you think that I want to use PW's Jack emulation.
That is not the case. I want to run all native Jack apps in Jack2, and
use the 'Jack Tunnel' to allow all others (e.g. a video conference app)
to connect to Jack. Or to route Jack signals to a bluetooth device.

> Creating ports on a sink/source is part of the session manager policy.
> More precisely as part of the linking policy.

I really fail to understand this. If a sound card or app has N inputs
why should it ever have any other number of input ports ? 

Unless a 'port' can represent more than a single signal, e.g. a group
of signals that should always be used together. The simplest example
would be a stereo 'port' with L and R signals. But then I'd expect
such groups to be defined by the application. For example my Tetraproc
Jack app has 8 inputs, but logically these are 2 groups of 4: one
Ambisonic A-format input and one B-format input. It makes no sense at
all to group them otherwise, nor to swap or combine signals within
each group. So it's certainly not up to a session manager to define
this - it's fixed by the app's function.

Of course in the Jack world such groups don't exist - only the port
names can be used to 'suggest' them. So at least for the Jack Tunnel
the only interpretation that makes sense is that each port represents
a single audio signal. Is there any way to set the port names, or at
least to have generic ones (numbered, starting at 0 or 1 - both make
sense depending on the situation) ?

Another mystery is the difference between context.modules and
context.objects. I used to think that the modules section somehow
defines what sorts of objects are available (by loading the code 
required to create them), and the objects section actually creates
them. But that is obviously not true: the modules section contains
actual parameters for objects. And in my config I have the Jack
Tunnel in context.modules only and coppwr shows it is created and
even 'Running'.

> 1. Use the auto port config option on streams:
> https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/daemon/minimal.conf.in#L165
> or on the sink/source
> https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/daemon/minimal.conf.in#L306

But where should these things go in the config file ? I tried
a lot of possible variations in my config file, it didn't make
any difference. That's a general problem: which properties and
options are available, and where should they go ?

Finally, if that is the only solution, I could create my own
'session manager' that would just support what I need and 
probably could be relatively simple. If all it takes is handling
and sending POD messages it can probably be done in Python.

Ciao, enjoy the holidays !

-- 
FA




[LAD] Re: Pipewire

2024-08-16 Thread Fons Adriaensen
On Fri, Aug 16, 2024 at 10:23:51AM -0400, Marc Lavallée wrote:

> It looks like wireplumber is an integral part of pipewire.

I don't think that is actually the case and still hope that
*someone* can point out what I should do in order to make
it work as desired.

> (rant alert)
> 
> This system is adding to the complexity of the Linux audio stack. Even the
> syntax of config files is different.

It is certainly complex, and whatever documentation there
is isn't very useful.

Meanwhile I've noticed another problem with the Jack Tunnel:
the port names on the Jack side are apparently taken from
the 'audio.position' property. Only a limited set of names
(the consumer surround channels) seems to be accepted;
everything else results in 'PW:playback_UNK' and all
following ports fail to be created.

Assuming that all audio must be 'speaker signals' doesn't
make any sense - they could be anything. Maybe there is
a property to set these names, but only $DEITY knows.
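If such a property exists, one would expect it to live in the tunnel
module's stream properties. Purely as a hypothetical sketch - the
placement of `audio.position` inside `source.props` here is an
assumption based on this thread, not on verified documentation:

```
context.modules = [
    { name = libpipewire-module-jack-tunnel
      args = {
          source.props = {
              # Only the consumer surround labels (FL, FR, ...) seem
              # to be accepted; anything else yields 'playback_UNK'.
              audio.position = [ FL FR ]
          }
      }
    }
]
```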

And why the ports only appear when wireplumber is started
(but remain when it is terminated) is still a mystery.

-- 
FA



[LAD] Re: Pipewire

2024-08-16 Thread Fons Adriaensen
On Fri, Aug 16, 2024 at 10:03:48AM +0200, Fons Adriaensen wrote:
 
> As reported before, when I start pipewire with the modified config
> I get the Jack Source and Jack Sink modules, both in state 'Running'.
> But no PW ports in Jack. So what is missing ?

Some progress... (but still not there)

I compiled the PW audio-src test program (with the autoconnect 
option commented out).

Test:

- Jack is running.
- Run pipewire.
- Run audio-src.
- Run coppwr.

I have Jack Source, Jack Sink, Audio-src, Dummy and Freewheel modules.
But the modules have no ports and there are no PW ports on the Jack
side.

- Run wireplumber.

Now the ports, in both coppwr and Qjackctl, appear. I can connect
them in both places, and the 440 Hz signal from audio-src appears
in Jaaa.

- Terminate wireplumber.

The ports remain at both sides, but everything is disconnected.
When I reconnect them, things still work.

The disturbing part here is that terminating wireplumber also
removes the PW:playback_FL -> Jaaa:in_1 connection on the Jack
side. That should definitely NOT happen. 

So wireplumber is doing something that makes the ports
appear. The next questions are: what is it doing, and
can the same be done without it ?

-- 
FA



[LAD] Re: Pipewire

2024-08-16 Thread Fons Adriaensen
On Thu, Aug 15, 2024 at 11:31:41AM -0400, Marc Lavallée wrote:

> The jack server was started automatically by pipewire, even if I disabled
> jackdbus in /usr/share/dbus-1/services/org.jackaudio.service (I commented
> out the last line: # Exec=/usr/bin/jackdbus auto)

In my case Jack is already running. Having to restart Jack (and
everything using it) just in order to start the PW-Jack tunnel 
shouldn't be necessary and is a definite no-go. 

As reported before, when I start pipewire with the modified config
I get the Jack Source and Jack Sink modules, both in state 'Running'.
But no PW ports in Jack. So what is missing ?

-- 
FA



[LAD] Re: Pipewire

2024-08-15 Thread Fons Adriaensen
On Wed, Aug 14, 2024 at 08:28:06PM -0400, Marc Lavallée wrote:
> 
> I tried and it works, but I had to upgrade Pipewire to a recent version;

I tried this:

- Copied pipewire.conf to ~/.config/pipewire
- Removed the jack-dbusdetect module from context.modules
- Added the jack-tunnel module to context.modules with 2 channels 
  in each direction.

Jack is running with the dbus interface disabled.
Rationale: I don't want any apps to control Jack, and dbus
is not needed to become a Jack client.

- Start pipewire
- Jack Source and Jack Sink nodes are shown in the coppwr
  'graph' window, with state = Running.
  
But no pipewire ports are shown in the Qjackctl connections
window. The 'Info' tabs in coppwr say Input Ports = 0 and
Output Ports = 0, even though the modules are configured for
2 channels each.

No idea what to do next.

-- 
FA



[LAD] Re: Pipewire

2024-08-14 Thread Fons Adriaensen
On Wed, Aug 14, 2024 at 08:06:13AM +0100, Will Godfrey wrote:
 
> Not wishing to derail the original topic but are you aware that focusrite are
> now supporting an independent guy who is producing linux drivers?
> 
> For the whole story see here:
> https://linuxmusicians.com/viewtopic.php?t=23272

The most recent post there is from 2021. And reading it I don't
have the impression that Focusrite is 'supporting' this guy...


Ciao,

-- 
FA




[LAD] Re: Pipewire

2024-08-14 Thread Fons Adriaensen
On Tue, Aug 13, 2024 at 02:47:53PM -0700, Len Ovens wrote:
 
> These are not pipewire design goals,...
> ...
> This is not a design goal of PW.
> ...
> This is not a design goal of PW.

On  :

  "One of the design goals of PipeWire is to be able to closely
   control and configure all aspects of the processing graph."

If that is true, then nothing of what I mentioned before should
be a problem.

To 'closely control' things IMHO includes having the option to 
disable anything automatic. If for example I can't tell pipewire
to not use a particular sound card that is the exact opposite
of providing control.


> Config files are pretty much standard linux. 
> ...

That's another issue which I didn't even want to mention
originally.

As long as you consider only the files and assume that either
only one of them is used or able to completely override the
others the situation is clear. It's a good system, allowing
the distro to provide 'convenient' defaults, while still
allowing the admin or user to override them.

But what about the drop-in directories ? If for example I have a
~/.config/pipewire/pipewire.conf, indicating that I want to take
matters into my own hands, does that mean that the files in the
drop-in directories in /etc and /usr/share are ignored as well ?
Or do I have to override each and every one of them ? The real
semantics of this are not clear at all.

And then there is this:



  "Dictionary sections are merged, overriding properties if
   they already existed, and array sections are appended to."

So can a 'user' config really override the 'distro' one ?
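The quoted merge rule can be made concrete with a small sketch. This
is Python standing in for PipeWire's actual config loader (which I
have not read); the section and module names are only illustrative:

```python
def merge_config(base, drop_in):
    """Sketch of the documented rule: dictionary sections are merged
    (drop-in values override), array sections are appended to."""
    if isinstance(base, dict) and isinstance(drop_in, dict):
        out = dict(base)
        for key, val in drop_in.items():
            out[key] = merge_config(out[key], val) if key in out else val
        return out
    if isinstance(base, list) and isinstance(drop_in, list):
        return base + drop_in    # appended, not replaced
    return drop_in               # scalar: drop-in wins

distro = {"context.properties": {"default.clock.rate": 48000},
          "context.modules": [{"name": "libpipewire-module-jackdbus-detect"}]}
user   = {"context.properties": {"default.clock.rate": 96000},
          "context.modules": [{"name": "libpipewire-module-jack-tunnel"}]}

merged = merge_config(distro, user)
# The property is overridden, but BOTH modules remain in the array:
# under this rule a user config can add to context.modules,
# but never remove a distro entry from it.
```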

This sort of thing has indeed become 'standard linux', it's
not just a pipewire issue. Systemd is an order of magnitude
worse.

Ciao,

-- 
FA



[LAD] Re: Pipewire

2024-08-13 Thread Fons Adriaensen
On Tue, Aug 13, 2024 at 05:36:08PM +0200, Robin Gareus wrote:
 
> So you finally made the switch to jack2?

Jack1 is still occasionally running clients in the wrong
order, and while that may not affect most users it is a 
real PITA for me. If such a fundamental problem still isn't
resolved more than 10 years after it was first reported
that doesn't inspire trust. Apart from that, Archlinux
moved Jack1 to the AUR, so there's no binary package for
it anymore.

> Then I use the following script to launch applications that i want to test
> with pipewire:
> 
> ```
> #!/bin/bash
> PW_SRC=$HOME/src/pipewire/
> export SPA_PLUGIN_DIR=$PW_SRC/builddir/spa/plugins
> export SPA_DATA_DIR=$PW_SRC/spa/plugins
> export PIPEWIRE_MODULE_DIR=$PW_SRC/builddir/src/modules
> export PIPEWIRE_CONFIG_DIR=$PW_SRC/builddir/src/daemon
> export ACP_PATHS_DIR=$PW_SRC/spa/plugins/alsa/mixer/paths
> export ACP_PROFILES_DIR=$PW_SRC/spa/plugins/alsa/mixer/profile-sets
> export LD_LIBRARY_PATH=$PW_SRC/builddir/pipewire-jack/src/
> export
> PATH=$PW_SRC/builddir/pipewire-jack/src/:$PW_SRC/builddir/src/tools:$PATH
> exec "$@"
> ```
> 
> e.g pw-src-env pw-jack Ardour8

All that to just run one application ?

And isn't this using PW as a Jack emulation ? That is not what I want
to do. All Jack apps should just use Jack. PW is there only for those
that for whatever reason do not support Jack natively - most browsers,
video conference apps, etc.

> > 7. I do not expect anything 'automatic' to happen when things
> > are plugged in or out.

> This is something where macOS' Coreaudio/MIDI shines. Unlike macOS
> Linux/ALSA has no persistent unique IDs for soundcards or MIDI devices. ALSA
> supports hotplug, and first come first server sequential numeric IDs. The
> best you^Wpipewire can do is keep track of cards by name.

> So this is not something pipewire can reliably address, until ALSA get
> support to identify cards by vendor and serial-number, and provide a UUID.

You may have misunderstood (7). If the hardware changes I have no
problem with having to modify the config and restart everything.
All I meant is that I'm not interested in having the entire system
automagically switching to another sound card when some USB device
is plugged in or out, or anything similar.

Ciao,

-- 
FA




[LAD] Pipewire

2024-08-13 Thread Fons Adriaensen
Hello all,

I spent a lot of time reading whatever docs I could find for Pipewire,
and discussing things with some users, only to get frustrated more and
more.

Below is a description of the configuration I'd want. If anyone knows
how to do this (it shouldn't be that difficult) that person will
receive my eternal admiration and gratitude.


1. Jack2 and some clients are started manually after I login,
   and will be running all the time.

2. Currently the ALSA Jack plugin is used to route audio from
   web browsers etc. to Jack. PW may take over this role but
   that is not a strict requirement.

3. PW will be started manually when required, and I don't expect
   that will happen very often. It may remain running when no longer
   needed but shouldn't interfere. It will be used to connect apps
   to Jack as in (2), or those that even don't support ALSA, or
   maybe to route audio from Jack to Bluetooth etc.

4. All Jack ports created by PW should be permanent and exist
   as soon as PW is started, so they can be manually connected
   and remain connected even when not in active use. 

5. PW should never ever access the sound card used by Jack,
   not even if accidentally started when Jack is not running.
   It must not force Jack to use dbus in order to get access
   to that card. It may manage other sound cards, but preferably
   only those explicitly listed.

6. PW must never ever interfere with Jack in any way - making
   connections, trying to change the period size, etc. Its only
   role is to be a well-behaved Jack client.

7. I do not expect anything 'automatic' to happen when things
   are plugged in or out.

8. The PW configuration should be done in such a way that
   it can't be modified by drop-in files from the system package
   manager. All configuration should be manual and explicit, and
   easy to verify without having to scan a myriad of files and/or
   directories and trying to understand how they interact. This
   is just basic security.

  
Ciao,

-- 
FA



[LAD] problem with web.linuxaudio.org

2024-08-02 Thread Fons Adriaensen
Hello,

The linuxaudio websites seem out of order.

The host is online, but accessing /home/sites hangs forever...

-- 
FA



[LAD] retuning examples

2024-07-01 Thread Fons Adriaensen
Hello all,

I'm still working on a new autotuner, zita-at2.

Some examples can be checked here:



There's no autotune in these, just fixed pitch or formant
shifts - they can now be controlled separately.

Comments welcome, and of course I still need some more
vocal tracks to test...

Ciao,

-- 
FA



[LAD] Re: Linux Audio Conference (is back) 2025

2024-06-17 Thread Fons Adriaensen
On Sun, Jun 16, 2024 at 11:05:44AM +0200, Jörn Nettingsmeier wrote:
 
> Some experiences I've had with Belgian beers were more akin to memory
> barriers... stupid me was enjoying Duvel without parsing the metadata (the
> taste is so light and summer-y), matching my friends' cadence (who were
> drinking pilsener). One of the few evenings where I don't remember how I got
> home.

Duvel, at 8.5%, isn't even the strongest, some go up to 12%.

It played a role in my life as well...  My first ever job (long ago)
was as a sound engineer for Belgian radio and TV. One evening we were
recording Schubert's piano trio op.99 at the famous Flagey studio 4.

The producer, who also was a music teacher, had engaged one his
students to turn the pages for the piano player. After the recording
he asked me if I could bring her home, as that was on my way back to
Antwerp. So we ended up in the small village of Breendonk which has
no claims to fame except for being where the Duvel is brewed.
We had some Duvels at the small cafe opposite the brewery and ended
up waggling to her parents' home. A few days later we saw each other
again, and became 'an item' that lasted for almost ten years.

Cheers,

-- 
FA



[LAD] Re: Linux Audio Conference (is back) 2025

2024-06-10 Thread Fons Adriaensen
On Mon, Jun 10, 2024 at 05:26:23PM +0200, Stéphane Letz wrote:

> This is just to let you know that after a couple of years of absence,
> Linux Audio Conference will take place in Lyon in 2025 on June 26-28.

That's great news !

When LAC 2020 was announced I wrote:

  I expect some decent wine.

I'll repeat that, hoping it won't be virtual this time !

Ciao,

-- 
FA



[LAD] Re: PandaResampler 0.2.0

2024-06-10 Thread Fons Adriaensen
On Mon, Jun 10, 2024 at 02:31:40PM +0200, Stefan Westerfeld wrote:

> Well in that case you can simply download the final tarball:

Meanwhile I got things working with the changes that Marc suggested.

The filters only attenuate 6 dB at half the sample rate.
Is that intentional ?

Ciao,

-- 
FA




[LAD] Re: PandaResampler 0.2.0

2024-06-09 Thread Fons Adriaensen
On Sun, Jun 09, 2024 at 11:55:41AM -0400, Marc Lavallée wrote:
 
> For the specific error you reported, the required macro is
> ax_cxx_compile_stdcxx.m4.

Thanks for the clear info.

Requiring C++11 seems like nothing really esoteric, so why
doesn't a standard autotools installation have this macro ?

And if it isn't standard, shouldn't it be provided in the m4
dir of the project (there are some others there) ?

Ciao,

-- 
FA



[LAD] Re: PandaResampler 0.2.0

2024-06-09 Thread Fons Adriaensen
On Sun, Jun 09, 2024 at 09:23:28AM -0400, Marc Lavallée wrote:
> Le 2024-06-09 à 05 h 51, Fons Adriaensen a écrit :
> 
> > On Sun, Jun 09, 2024 at 09:50:27AM +0200, Stefan Westerfeld wrote:
> > 
> > > PandaResampler 0.2.0 has been released.
> > ./configure: line 16969: syntax error near unexpected token `11,'
> > ./configure: line 16969: `AX_CXX_COMPILE_STDCXX(11, noext, mandatory)'
> > 
> > I'm an absolute NOOB re. autotools...
> > 
> > Ciao,
> 
> Install old autoconf macros:
> 
> https://www.gnu.org/software/autoconf-archive/
> 
> https://pkgs.org/download/autoconf-archive

I should have a fully up-to-date autotools. It works with
all other things that I needed it for.

In what way should old macros solve the issue, taking into
account that this is not 'old' software ?

And what does 'installing' those macros mean - where should
they go ?

Sorry if I seem a bit sceptical... Each time I try to
understand how the autotools are supposed to work I read
things like 'This file contains dirty hacks to make XXX
work' and similar statements. It really escapes me why
such a mess is still supposed to be the 'right way' to
do things...

Ciao,

-- 
FA





[LAD] Re: PandaResampler 0.2.0

2024-06-09 Thread Fons Adriaensen
On Sun, Jun 09, 2024 at 09:50:27AM +0200, Stefan Westerfeld wrote:

> PandaResampler 0.2.0 has been released.

./configure: line 16969: syntax error near unexpected token `11,'
./configure: line 16969: `AX_CXX_COMPILE_STDCXX(11, noext, mandatory)'

I'm an absolute NOOB re. autotools...

Ciao,

-- 
FA



[LAD] Update of zita-jclient library

2024-05-12 Thread Fons Adriaensen
Hello all,

zita-jclient-0.5.2 is now available at 



This version is required for the 'freewheeling' classes in
zita-jacktools to work. They will fail silently with older 
versions of zita-jclient.

Ciao,

-- 
FA



[LAD] Re: Pitch estimation

2024-04-27 Thread Fons Adriaensen
On Fri, Apr 26, 2024 at 12:34:03PM -0400, Tim wrote:

> Would you have any insight into how this product achieves this and the
> techniques used?

It must be a combination of a lot of different things, carefully 
tuned for the best results.

Their patent application provides some information on what is
likely going on.

It seems to be based on comparing the input to stored 
waveforms, and combinations thereof to detect chords. 

In the marketing blurb they call this 'AI', but that seems
to be a little bit over the top. Having a reference waveform
for each of the 6 * 22 single notes that can be played on a
guitar  can't really be called 'training' in the AI sense.
It's more like what would be called an 'expert system', the
procedures used seem to be explicit instead of being the
opaque result of 'training'.
 
Then there must be separate algorithms to detect pitch bending,
glides, etc. 

It's certainly not simple, and a considerable achievement.

As to latency, it's reported to be quite low, but no hard
figures seem to be available. Nor of course has any of the
'reviewers' ever attempted to measure it.

Re. playing KE's parts on guitar: that may be possible
for some of the monophonic synth lines, but I don't think
you could ever play the wonderful piano parts from e.g.
'Take a pebble' or 'Trilogy' on a guitar...

Ciao,

-- 
FA




[LAD] Re: Pitch estimation

2024-04-26 Thread Fons Adriaensen
On Fri, Apr 26, 2024 at 02:26:26PM +0200, Dominique Michel wrote:
 
> Which is amazing because, with A=440 Hz, the low E of a guitar is at 82
> Hz, which is a period of 12.2 ms. And when tuned in drop C tuning, that
> low C is at 65 Hz, which correspond to 15.38 ms.

Note that in 0.8.1 the lowest pitch that will be accepted is
set to 75 Hz, it was 60 Hz in previous versions. You can
easily change that if you install from source.

Singers who can go below that are probably trained ones
and won't need autotuning :-)

Ciao,

-- 
FA




[LAD] Re: Pitch estimation

2024-04-26 Thread Fons Adriaensen
On Fri, Apr 26, 2024 at 12:23:45PM +0200, Dominique Michel wrote:
 
> From the start of a note, do You know how long it takes, or how much
> periods of the signal it takes, in order to get the pitch?

For the autocorrelation (AC) to produce a usable result you need
at least two periods - this will make the signal similar to itself
shifted by one period. Also if the autocorrelation is computed
using the FFT method (which is really the only efficient way), the
FFT needs to be longer and windowed in order to avoid 'circular'
effects [1].

Let's assume that the lowest frequency we want to detect is
75 Hz (between D2 and E2), or 640 samples at 48 kHz sample rate.
Twice the period is 1280, and to have a useful window you'd need
again more or less the double.

Zita-at1 uses a 2048 point FFT with a raised cosine window. So
the pitch estimate is 1024 samples or about 21 ms behind the
input. To have the most accurate pitch correction latency
would need to be the same. 

Still, 21 ms is shorter than the time humans need to perceive
pitch, so one could reduce latency and accept the error. But
this doesn't mean the error won't be perceived.
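The numbers above can be checked with a few lines; all values are
taken from this post, nothing is assumed:

```python
fs = 48000                  # sample rate, Hz
f_min = 75.0                # lowest pitch to be detected, Hz
period = fs / f_min         # 640 samples
min_len = 2 * period        # two periods needed for the AC: 1280 samples
nfft = 2048                 # next useful FFT size, leaving room for the window
latency_ms = (nfft / 2) / fs * 1000.0
print(period, latency_ms)   # 640.0 samples, ~21.3 ms
```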

[1] Assume a 1024 point FFT and a period of 300 samples.
Without windowing the AC will peak at 300, but also at
1024 - 2 * 300 = 424 samples. The two peaks can easily
merge into a single one somewhere in between.

-- 
FA



[LAD] Pitch estimation

2024-04-25 Thread Fons Adriaensen
Hello all,

Several people have asked how the pitch estimation
in zita-at1 works.

The basic method is to look at the autocorrelation
of the signal. This is a measure of how similar a
signal is to a time-shifted version of itself. It
can be computed efficiently as the inverse FFT of
the power spectrum.
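As a minimal sketch of that method - NumPy on a synthetic signal with
a 300-sample period; the frame size, window, and search range are
simplifications for illustration, not what zita-at1 actually uses:

```python
import numpy as np

def autocorr_fft(frame, nfft):
    """Autocorrelation as the inverse FFT of the power spectrum.
    The frame is windowed and zero-padded to avoid 'circular' effects."""
    xw = (frame - frame.mean()) * np.hanning(len(frame))
    spec = np.fft.rfft(xw, nfft)          # nfft > frame length: zero-padded
    return np.fft.irfft(np.abs(spec) ** 2)[: nfft // 2]

# Synthetic 'voiced' signal: 300-sample period plus a strong 2nd harmonic.
period = 300
t = np.arange(1024)
x = np.sin(2 * np.pi * t / period) + 0.5 * np.sin(4 * np.pi * t / period)

ac = autocorr_fft(x, 2048)
lag = 150 + int(np.argmax(ac[150:]))      # skip the dominant zero-lag peak
# lag lands at (or within a couple of samples of) 300, the fundamental
# period, despite the added harmonic.
```

Even this toy version shows the ambiguity discussed below: the AC also
has peaks at integer multiples of the period and at strong harmonics.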

In many cases the strongest autocorrelation peak
corresponds to the fundamental period. But this can
easily get ambiguous as there will also be peaks at
integer multiples of that period, and for strong
harmonics. To avoid errors it is necessary to look
also at the signal spectrum and level, and combine
all that info in some way. How exactly is mostly a
matter of trial and error. Which is why I need more
examples.

Have a look at



This a test of the pitch detection algorithm used in
zita-at1.

The X-axis is time in seconds, a new pitch estimate is
made every 10.667 ms (512 samples at 48 kHz).

Vertically we have autocorrelation, the Y-axis is in
samples. Red is positive, blue negative. The green dots
are the detected pitch period, zero means unvoiced.
The blue line on top is signal level in dB.

Note how this singer has a habit of letting the pitch
'droop', by up to an octave, at the end of a note. He
is probably not aware of it. This happens at 28.7s,
again at 30.8s, and in fact during the entire track.

What should an autotuner do with this ? Turn the glide
into a chromatic scale ? The real solution here would
be to edit the recording, adding a fast fadeout just
before the 'droop'. Even a minimal amount of reverb
will hide this.

The fragment from 29.7 to 30.3s is an example of a
vowel with very strong harmonics which show up as
the red bands below the real pitch period. In this
case the 2nd and 3rd harmonic were actually about 20
dB stronger than the fundamental. This is resolved
because the autocorrelation is still strongest at
the fundamental pitch.

The very last estimate in the next fragment (at 30.85s)
is an example of where this goes wrong, the algorithm
selects twice the real pitch period, assuming the
first autocorrelation peak is the 2nd harmonic.
This happens because there was significant energy
at the subharmonic, actually leakage from another
track via the headphones used by the singer.

The false 'voiced' detection at 30.39s is also the
result of a signal leaking via the headphone.

Ciao,

-- 
FA





[LAD] Re: vocal tracks wanted

2024-04-25 Thread Fons Adriaensen
On Thu, Apr 25, 2024 at 08:02:19AM +0200, Lorenzo Sutton wrote:

A bit more info on some of the topics:

> Also I'm imagining one singer (i.e. not choir / multiple people)?

The occasion I referred to was a children's choir, and they had 
recorded a four-part piece, each part separately. Without taking
care of consistent tuning or tempo. It was really an effort well
beyond their capability.
I spent most of a day retuning little pieces and re-aligning them
in Ardour. Managed to get something 'presentable' that the director
was happy with in the end.

> Are song parts with words/lyrics ok or just 'aaahs' or 'ooohs' preferred /
> relevant?

Some vowels can have harmonics that are much stronger than
the fundamental, resulting in strong peaks in the autocorrelation 
(used for pitch detection) corresponding to a harmonic. But the
autocorrelation also always has peaks at subharmonics of the
fundamental. So you can end up with very ambiguous hints at
what the actual fundamental is. Long vowels can be very useful
to test and optimise how this is handled.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: vocal tracks wanted

2024-04-25 Thread Fons Adriaensen
On Thu, Apr 25, 2024 at 08:02:19AM +0200, Lorenzo Sutton wrote:
 
> I think I can provide you with some female voice clips from a while ago but
> they are a bit short.
> Any ideal length of the phrases / clips?

Anything 10s or longer.

> For the male voices, how low is 'low'?

The real 'bass' register, down to F2 or so.

> In general what do you define as 'clean' (e.g. I'm imagining mono relatively
> closed mic with relatively little background noise?)

No reverb, the less background noise the better.

> Also I'm imagining one singer (i.e. not choir / multiple people)?

Yes. But I once used zita-at1 on a choir, and to my surprise it worked
quite well...

> Are song parts with words/lyrics ok or just 'aaahs' or 'ooohs' preferred /
> relevant?

Both are useful. 

> I might have some of these to send and or quickly produce but might be
> helpful to know any preferred specs ;-)

Thanks for responding !

Ciao, 

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Update of zita-at1

2024-04-24 Thread Fons Adriaensen
On Thu, Apr 25, 2024 at 12:52:07AM +0200, Robin Gareus wrote:
 
> Correct, yet the effective signal delay (which can be measured)
> needs to be reported to the host to align the signal.

And that should be the average value, not the minimum which can be
achieved only during a small fraction of the time (or for unvoiced
signals by forcing them to have a different delay than the average
voiced ones).

> No claim is made that the pitch is corrected within that time.

Indeed, but that is not what I refer to.

Assume for a moment that you are retuning 'up'. That means
that the input is consumed faster than the output sample rate. 

So at some point, since you can't read past the end of input,
you will have to skip back by at least one cycle of the
fundamental frequency, which means the latency will increase
by the same time.

Latency is in fact changing all the time while retuning, and
the only meaningful value is the average one.

For a constant retuning ratio, the latency as a function of
time will be a rising or falling 'sawtooth'. 

The fundamental difference between at1-0.8.1 and previous
versions is that it will try to remain as close as possible
to a well-defined average delay, i.e. minimise the delay
jitter.
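The sawtooth is easy to see in a toy model (all numbers here — retuning ratio, fundamental period, target latency — are made up for illustration, not taken from the at1 sources):

```python
# Toy model of retuning 'up': the read pointer consumes input faster
# than the write pointer fills it, so it must periodically jump back
# one cycle of the (assumed) fundamental, giving a latency sawtooth.
R = 1.05            # retuning ratio: input samples per output sample
period = 400        # fundamental period in samples (assumed)
target = 1024       # desired average latency in samples

read, write = 0.0, float(target)
lat = []
for _ in range(20000):
    write += 1.0    # one new input sample arrives per output sample
    read += R       # reading faster than writing: latency shrinks
    if write - read < target - period / 2:
        read -= period          # jump back one fundamental cycle
    lat.append(write - read)

avg = sum(lat) / len(lat)       # stays close to 'target', while the
                                # instantaneous latency is a sawtooth
```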

> That delay has to be reported to the host.

Of course. But that should be real value. Not some fake one
dreamt up to look good.

-- 
FA



___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Update of zita-at1

2024-04-24 Thread Fons Adriaensen
On Wed, Apr 24, 2024 at 10:02:01PM +0200, Robin Gareus wrote:

> > > Except the latency in zita-at1 0.8.1 is still around ~20ms,
> > 
> > It will be 10 ms when selecting low latency mode.
> 
> At 48kHz sample-rate, Retuner::set_lowlat(true) sets Retuner::_latency to
> 1024.

1024 samples at 96 kHz (the signal is upsampled) is 10.67 ms.
And it will maintain that +/- half a cycle of the retuned
frequency. The worst case would be at 75 Hz, +/- 6.7 ms.
At higher pitch the variation will be less. The average
value will always be 10.67 ms.

The way 'fast mode' works in the x42 plugin could give you
a momentary latency of 2 ms, but the average value will be
quite a bit higher and depend on signal content. So that
2 ms is just snake oil. 

Using the same misleading definition of 'latency' I could
claim whatever I like for zita-at1-0.8.1, even down to zero. 

Anyway, I'll make sure that no such false claims will be
made of at2. Even if that means I just can't release it
under a license that allows modifications. There's more
than enough people who are prepared to accept a more
restrictive license, and even pay for it. 

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Update of zita-at1

2024-04-24 Thread Fons Adriaensen
On Tue, Apr 23, 2024 at 09:24:20PM +0200, Robin Gareus wrote:
> On 2024-04-20 16:41, Fons Adriaensen wrote:
> 
> > With the new logic deciding on forward / backward jumps, the
> > low latency mode just came for free
> 
> Except the latency in zita-at1 0.8.1 is still around ~20ms,

It will be 10 ms when selecting low latency mode.

> compared 2ms when using the current plugin's "fast mode".

That must be BS. A typical male voice frequency is 125 Hz,
8 ms. So when the algorithm has to skip a cycle, will the
latency change to -6 ms ??

Did anyone ever verify this ? I did a few minutes ago, by
observing in and out on a scope.

Normally latency for the x42-autotune seems to be around 20
ms. Selecting 'fast' doesn't seem to make any difference.
Using the thingy in the left upper corner, I get latencies
up to more than half a second, even when this is set to
a few ms.

Sorry, but I've lost all confidence in whoever has been
dabbling with something that he/she clearly doesn't 
understand. If this is the standard of quality of Linux
audio, I'll just stop releasing anything.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Help with resampling algorithm

2024-04-23 Thread Fons Adriaensen
On Tue, Apr 23, 2024 at 10:47:43AM +0100, Marco Castorina wrote:
 
> > for each output sample you use a new filter from the set of 128 by
> > incrementing the index by a fixed value. This may not be an integer,
> > so you need to round.
 
> This is what I am struggling with: how do I determine by how much to
> advance?

It depends on the inverse of the resampling ratio, R = Fin / Fout.

Note that all filters have the same frequency response, they just
provide a different delay in the range of 0 to 1 input samples.

Let P be the position in the input stream where the center of the
filter has to be.

For each new output sample P increments by R.

The integer part of P determines what should be first input
sample. The fractional part determines which filter to use. 
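A minimal sketch of that loop (the polyphase table layout — Nf filters of length L, filter k providing a fractional delay of k/Nf input samples — is an assumption of this sketch, not any particular library's API):

```python
import numpy as np

def resample(x, filters, R):
    # filters: (Nf, L) array of polyphase filters, all with the same
    # frequency response, filter k delaying by k/Nf input samples
    Nf, L = filters.shape
    y = []
    P = 0.0                           # centre position in the input
    while True:
        i = int(P)                    # integer part: first input sample
        f = int((P - i) * Nf + 0.5)   # fractional part: filter index
        if f == Nf:                   # rounded up to the next sample
            f = 0
            i += 1
        if i + L > len(x):
            break
        y.append(np.dot(filters[f], x[i:i + L]))
        P += R                        # advance by R = Fin / Fout
    return np.array(y)
```

With R = 1 and trivial length-1 filters this degenerates to a plain copy, which makes a handy sanity check while debugging.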

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Update of zita-at1

2024-04-20 Thread Fons Adriaensen
On Sat, Apr 20, 2024 at 03:47:20PM +0200, Robin Gareus wrote:
 
> Either I have to backport your bug-fixes, or add the missing features to
> your new version.

With the new logic deciding on forward / backward jumps, the 
low latency mode just came for free - all it takes is just
changing the value of one variable. BTW, the 'correct' latency,
the one that would perfectly align the pitch detection with the
signal, would be 5/8, not 1/2 of the input size. But I kept
1/2 in order to match the latency of the original version.
For low latency, I use 1/4, around 10 ms.

The standard 'bugs' were not in the Retuner class, so they
won't affect the plugin code.

Re. microtuning individual notes: I don't think the retuning
is ever accurate enough for this to be of any use. Or maybe
I misunderstand what you refer to.

What would be great (in particular for zita-at2) would be
a mode were you can e.g. select a region in Ardour, have it
analysed, present the result graphically so it can be edited,
and finally apply the edited version to the region...

It would require separating the pitch detection, the logic
that decides on the amount of correction, and the actual
processing of the signal as separately usable components.
I'll keep this in mind when designing zita-at2.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Update of zita-at1

2024-04-20 Thread Fons Adriaensen
Hello all,

Zita-at1-0.8.1 is now available at the usual place:



Note: this is not (yet) the new zita-at2 that will have formant
correction. 

Changes:

- Bug fixes.
- Improved pitch estimation algorithm.
- Low latency mode, reduces latency to around 10 ms.


Note to Robin Gareus:

The new Retuner class can probably be used without changes
in the x42-autotuner plugin.

The logic that controls jumping forward/back while resampling
has been changed. Instead of just trying to avoid reading 
outside the available input range, it now tries to keep the
read index as close as possible to the ideal position, i.e.
'latency' samples behind the write index. That also means
that for unvoiced input it will be within +/- 1.3 ms of the
ideal position, so nothing special is required for this case.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] vocal tracks wanted

2024-04-15 Thread Fons Adriaensen
Hello all,

I'm working on an improved version of zita-at1 which most of you
probably know as the x42-autotune plugin. The update, zita-at2,
will preserve formants while retuning.

To test and develop this I need some clean vocal tracks, in 
particular of female singers and also very low (bass) males.
So if you have these available, I'd be very happy if you can
share them. They won't be used of course for any other purpose
than to improve the retuner algorithm.

TIA for anything you can provide !

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Help with resampling algorithm

2024-04-15 Thread Fons Adriaensen
On Sat, Apr 13, 2024 at 03:26:38PM +0100, Marco Castorina wrote:

> I have coded a prototype which is shared here:
> https://gist.github.com/theWatchmen/746f35c349748525b412cfd9466608ce

You'll probably get a better idea of what is going wrong if you
decrease the input frequency a bit, e.g. 50 instead of 400.

This produces something close to the wanted result, but there is
still something wrong. Don't know what it is (and you should find
out by yourself), but to me the logic in resample_window_sinc()
seems a lot more complex than it should be.

The principal way this is supposed to work is quite simple:

  - for each output sample you use a new filter from the set of
128 by incrementing the index by a fixed value. This may not
be an integer, so you need to round. 

  - if the new index moves out of range, subtract 128 and advance
by one sample in the input array until the index is within range.

So that should be a bit simpler than your current code.


Some other (unrelated) hints:

- Use the Bessel functions from scipy.special.

- If you change

sinc = [0]*34 + sinc # TODO(marco): compute correct number of zeros

to

sinc = [0]*Nzd + sinc

then you can replace

sincr = np.zeros((L, Nzd))
for l in range(L):
for i in range(Nzd):
sincr[l][i] = sinc[L*i + l]

by 

sincr = sinc.reshape (Nzd, L+1)
sincr = sincr.T

This will give you L+1 filters instead of L, but that shouldn't
matter. The extra one is just the first shifted by one sample,
and if you get the logic right it won't be used.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Update of zita-jacktools

2024-03-25 Thread Fons Adriaensen
Hello all,

zita-jacktools 1.7.1 is now available at the usual place:



zita-jacktools is a collection of Jack audio processors
implemented as Python classes. These can be combined to
create complex graphs that are completely controlled by
a Python script and can be interfaced to anything that
Python can handle. Main applications are automated
measurements, sound installations, listening tests, etc.


New is this release:

class JackBw8filt

  Up to 100 8th order Butterworth bandpass filters.
  Each one can be configured separately and be lowpass,
  highpass or both in series. Mainly meant for measurements
  that require this sort of filtering.


classes JackFwplay and JackFwcapt

  These two classes make it possible to process an audio file
  through any combination of Jack apps that function correctly
  in Jack's freewheeling mode, and to record the result.
  

All examples with a GUI now use PyQt6.

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] some Faust questions

2024-03-24 Thread Fons Adriaensen
Hello all,

Triggered by a recent post, I've been dabbling a bit with Faust,
and have some questions :-)

(For the jack-gtk architecture)

- When started the app prints some useless info (a nuisance) and
  it also autoconnects (a definite no-go).
  Is there any way to avoid this ?

- The faust2jack script says it adds 'preset' functionality
  to the GUI. But there seems to be no way to access this.

- The way the PRESETDIR is handled is weird. It is done by 
  prepending a single line temporary file to the merged cpp file
  using cat.

  What would be wrong with

# add preset management
if [ "$PRESETDIR" = "auto" ]; then
PRESETDEFS=-DPRESETDIR=\"/var/tmp/\"
else
PRESETDEFS=-DPRESETDIR=\"$PRESETDIR\"
fi

and then adding PRESETDEFS to the compiler options ?

(General)

- The various faust2*** programs work by merging the compiled dsp
  file with the architecture file. While doing this, also all
  include files from the /usr/include/faust directory are recursively
  expanded in-line. Now I wonder why this is done. Surely the C++
  preprocessor would do this anyway ?


TIA for any answers !

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Jack freewheeling question

2024-02-26 Thread Fons Adriaensen
Hello all,

A question for the Jack / PW developers:

If a Jack client calls jack_set_freewheel() to start or stop
freewheeling, is it guaranteed that all clients will see the
transition in the same period, i.e. at the same frame time ?

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Update of jnoisemeter

2023-10-24 Thread Fons Adriaensen
Hello all,

Release 0.4.1 of jnoisemeter is available at the usual place:



Jnoisemeter is a small Jack app for accurate measurement of
audio signals, in particular noise signals. See README for
the details.

New in this release: IEC class 0 octave band filters.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Update of tetraproc and tetrafile

2023-09-08 Thread Fons Adriaensen
Hello all,

Release 0.9.2 of the tetraproc and tetrafile processors for first
order Ambisonic microphones is now available at:



Main changes: tetrafile now defaults to the Ambix format for
both channel gains and order. It is still possible to create
old style 'Fuma' files, see README.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Update of zita-resampler library

2023-08-29 Thread Fons Adriaensen
Hello all,

Zita-resampler 1.11.2 is now available on 



This release adds Arm64 NEON code for the Resampler and Vresampler
classes, contributed by Nicolas Belin. On an Rpi3b this improves
throughput by a factor of around 2.5.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Jack error

2023-08-04 Thread Fons Adriaensen
On Fri, Aug 04, 2023 at 01:54:45PM -0400, Marc Lavallée wrote:

> To compile without libdb: ./waf configure --db=no && ./waf

So it's a compile time option, and not really available for
anyone who just installs a distro package.

Given the status of BDB, I'd say the long term solution
would be for jackd to use e.g. LMDB instead of BDB.
And also to make the metadata support a run-time option.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Jack error

2023-08-04 Thread Fons Adriaensen
Hello all,

I get this error when starting any program using jack (1.9.22):

  BDB2034 unable to allocate memory for mutex; resize mutex region
  Cannot open DB environment: Cannot allocate memory

This occurs only after the system and jackd have been running for
a long time (weeks or months). Restarting jackd doesn't help,
a reboot is required.

I assume this is related to jackd using a Berkeley Database to
store optional application and port data.

I don't need this functionality, but there seems to be no option
to disable it. Also it seems that BDB is no longer maintained.

Any hints as to how to avoid this problem ?

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: EQ Matching

2023-05-21 Thread Fons Adriaensen
On Sat, May 20, 2023 at 05:15:03PM +0200, Florian Paul Schmidt wrote:

> 1. Are there any glaring oversights with this approach?

The minimum phase step is not really necessary, you could just use the
linear phase version, the output of the IFFT rotated by half its size.

If you do the minphase operation, the best way to do this using an FFT
that is at least 4 times as long as your IR. In other words, take the
linear phase version in the time domain, add zeros, then do the
minphase calculation.

> 2. What is a sensible approach to regularize this approach. If spectrum
> 1 has any components near zero magnitude then the division is ill
> suited. It works fine for my experiments with distorted guitars, but I
> wonder whether I should e.g. just clamp the values of spectrum 1 to
> something like -70 dB?

This probably should not depend on the absolute values, just on the
ratio, i.e. the calculated gain. If r = s2 / s1 you could impose a
smooth maximum:

g = r / sqrt (1 + M * r * r)    with M in the range 1e-3 to 1e-2

Or you could even reduce the gain if r is really very high (which
probably means there is nothing of value in s1 at that frequency):

g = r / (1 + M * r * r * r) with M in range 1e-5 to 1e-4

Try this in gnuplot:

plot [0:100] x / sqrt (1 + 1e-2 * x * x), x / sqrt (1 + 1e-3 * x * x)
plot [0:100] x / (1 + 1e-4 * x * x * x), x / (1 + 1e-5 * x * x * x)

to get an idea of what these will do.

Average levels in the two input files should be more or less
matched, before applying any of these.
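As a sketch in numpy (the function name and the tiny floor on s1 are my own additions, only the soft-maximum formula is from above):

```python
import numpy as np

def smooth_max_gain(s1, s2, M=1e-3):
    # per-bin gain s2 / s1, soft-limited to at most 1 / sqrt(M)
    r = s2 / np.maximum(s1, 1e-30)      # avoid division by zero
    return r / np.sqrt(1.0 + M * r * r)
```

For moderate ratios the gain is nearly r itself; for very large ratios it saturates near 1/sqrt(M), about 31.6 for M = 1e-3.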

> 3. If the user wants to smooth the spectrum before calculating the
> responses what would be sensible approaches? I thought about smoothing s
> with a filter that has a varying bandwidth when expressed in FFT bins or
> Hz, but constant, when expressed in e.g. octaves or decades. I'm not
> sure how to do that though. Any thoughts?

That is a good idea. Above say 500 Hz anything smaller than say 1/3
octave probably doesn't matter much.

There are many ways to do this. One would be to use the exact per-bin
levels at low frequencies, and a moving average that increases in 
length as frequency goes up. Use an odd number of bins to sum (to
avoid shifting peaks), and sum powers, not amplitudes.

The window used in the analysis will also provide some smoothing,
with a raised cosine anything at the center of a frequency bin will
also show up 6 dB lower in the two adjacent bins. That limits the 
resolution at low frequencies, but that's probably not a bad thing.
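One possible shape for such a smoother (the particular frac-octave half-width formula below is my own choice, not from the mail above):

```python
import numpy as np

def octave_smooth(power, frac=1/3):
    # power: per-bin power spectrum (sum powers, not amplitudes)
    n = len(power)
    out = np.empty(n)
    for k in range(n):
        # half-width in bins of a ~frac-octave band centred on bin k;
        # it is 0 near DC, so low bins keep their exact per-bin levels
        h = int(k * (2 ** (frac / 2) - 2 ** (-frac / 2)) / 2)
        lo, hi = max(0, k - h), min(n, k + h + 1)   # odd bin count
        out[k] = power[lo:hi].mean()
    return out
```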

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Tempo change functions.

2023-02-24 Thread Fons Adriaensen
Hello all,

What exactly is meant by a linear or exponential tempo change ?
Is the tempo a lin/exp function of time, or of score position ?

A bit of algebra leads to this:

Let 

  t = time (seconds)
  p = score position (beats)
  v = tempo (beats / second) [1]

We have 

  v = dp / dt   # by definition


If v is a linear function of t, then

  v (p) = square root of a linear function of p


If v is an exponential function of t, then

  v (p)  = a linear function of p


If v is a linear function of p, then
 
  v (t) = an exponential function of t   


If v is an exponential function of p, then

  v (t)  = inverse of a linear function of t


So there is plenty of room for confusion...


[1] Using SI units :-)  
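The first case is easy to verify numerically (v0 and a are arbitrary values chosen for the check):

```python
import numpy as np

v0, a = 2.0, 0.5                      # v(t) = v0 + a*t, linear in time
t = np.linspace(0.0, 10.0, 1001)
v = v0 + a * t
p = v0 * t + 0.5 * a * t * t          # p(t) = integral of v dt

# v as a function of p is the square root of a linear function of p:
# v^2 = v0^2 + 2*a*p
assert np.allclose(v, np.sqrt(v0 * v0 + 2.0 * a * p))
```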

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Calculating logarithmic curve for controller automation

2023-02-24 Thread Fons Adriaensen
On Fri, Feb 24, 2023 at 12:14:19PM +0100, Stefano D'Angelo wrote:

> > Given any three points (x1, y1), (x2, (y1+y3)/2), (x3, y3) where
> > y1>y2>y3 or y1 > through them as explained here:
> > https://www.orastron.com/blog/potentiometers-parameter-mapping-part-1
> > (+ "output scaling").
> 
> Sorry, typo: I meant points (x1,y1), ((x1+x3)/2,y2), (x3,y3).

That function may include some exponential term, but

If

  F (x) = A exp (B * x)   # A pure exponential

then

  F ((x1 + x2) / 2) = sqrt (F (x1) * F (x2))

So the mid point is fixed, you can't choose it.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Calculating logarithmic curve for controller automation

2023-02-23 Thread Fons Adriaensen
On Thu, Feb 23, 2023 at 08:43:52PM +0100, Jeanette C. wrote:

> In my example: I want to change the tempo from 120BPM to 150BPM over
> four bars of equal length.

What you want here is probably an exponential mapping from bars or
beats to tempo. 

For example if you want to change from 1 to 4, a linear mapping 
would have (1 + 4) / 2 = 2.5 halfway, and an exponential mapping
would have sqrt (1 * 4) = 2 halfway.

For 120 to 150 it would make very little difference, 135 vs 134.16.

So if the start and end values are A and B, you would make a linear
function from log (A) to log (B), and then use exp () on that to
find the tempo at any point.

Note that this is an exponential mapping from bars or beats to 
tempo, not from time to tempo (because as the tempo is changing,
so does the duration of a bar or a beat).
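In code (the function name is mine; only the log-linear mapping itself is from the text):

```python
import math

def tempo_at(beat, total_beats, A, B):
    # linear in log(tempo) from A to B over total_beats,
    # i.e. an exponential mapping from beats to tempo
    return A * (B / A) ** (beat / total_beats)

mid = tempo_at(8, 16, 120.0, 150.0)   # halfway through four 4/4 bars
# mid equals sqrt(120 * 150), about 134.16, not the linear 135
```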

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Calculating logarithmic curve for controller automation

2023-02-23 Thread Fons Adriaensen
On Thu, Feb 23, 2023 at 07:09:51PM +0100, Jeanette C. wrote:

> how is a logarithmic curve usually programmed in a DAW or sequencer?

It's not clear what exactly you are asking...

If a = log (b), b = exp (a), what are a and b in your case ?

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Admissable clock inaccuracy for MIDI

2023-02-23 Thread Fons Adriaensen
On Wed, Feb 22, 2023 at 03:42:00PM +0100, Fons Adriaensen wrote:
 
> BTW, just using 
> 
>sem_timedwait () 
> 
> in a RT thread I get
> 
> Min ~  60 us
> Max ~ 135 us
> Average ~ 74 us.

And using 

clock_nanosleep (CLOCK_MONOTONIC, TIMER_ABSTIME, &T, 0);

in a RT thread I get

Min ~   5 us
Max ~ 105 us
Average ~ 23 us

No need for any special libraries...


Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Admissable clock inaccuracy for MIDI

2023-02-22 Thread Fons Adriaensen
On Wed, Feb 22, 2023 at 01:28:21PM +0100, Jeanette C. wrote:

> Thanks Fons! I was really confused about that myself. Mixing up the
> mechanism of sleeping for a specified duration and sleeping until a
> specified point in time.

Only the latter is correct.

BTW, just using 

   sem_timedwait () 

in a RT thread I get

Min ~  60 us
Max ~ 135 us
Average ~ 74 us.

Ciao,

-- 
FA
___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Admissable clock inaccuracy for MIDI

2023-02-22 Thread Fons Adriaensen
On Wed, Feb 22, 2023 at 10:52:49AM +0100, Jeanette C. wrote:

> https://www.dropbox.com/s/b0bh9pxsqzmar5j/midiclock.cpp

This is very strange:

   miss_time = now - wakeup;
   wakeup = now + (delta - miss_time);

Which just means:

   wakeup += delta;

So it's not wrong, just confusing by suggesting this
somehow compensates for the previous tick being late.
It doesn't, nor should that be done.
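The pattern — sleep until an absolute deadline and advance the deadline by a fixed delta, so one late wakeup never shifts the later ones — looks like this as a sketch (Python for brevity; an RT thread would use clock_nanosleep with TIMER_ABSTIME, as mentioned elsewhere in this thread):

```python
import time

def run_ticks(delta, n, tick):
    # absolute-time schedule: a late tick does not accumulate drift
    wakeup = time.monotonic() + delta
    for _ in range(n):
        now = time.monotonic()
        if wakeup > now:
            time.sleep(wakeup - now)
        tick()
        wakeup += delta   # NOT 'now + delta': no compensation needed
```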

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] update (again)

2023-02-17 Thread Fons Adriaensen
Hello all,

Just hours after the release of zita-resampler-1.10.0 yesterday,
a new bug was reported, not in the library but in the zresample
application.

This is now fixed in 1.10.1

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] updates

2023-02-16 Thread Fons Adriaensen
Hello all,

Updates now available on 



ebumeter-0.5.1.tar.xz
zita-resampler-1.10.0.tar.xz

Ciao,

-- 
FA


___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-15 Thread Fons Adriaensen
On Tue, Feb 14, 2023 at 04:55:00PM +0100, Wim Taymans wrote:

> > BTW, what about the 'signed differences' issue I pointed
> > out earlier ?
> 
> Should be fixed with this:
> https://gitlab.freedesktop.org/pipewire/pipewire/-/commit/274b63e9723ec00dd413bb64b6650d2004f7e4c2

I don't think this is correct. But it may be a long time before
this becomes apparent.

Frame times are 32 bit. The required result of the subtraction is

   the difference modulo 2^32 and interpreted as an int32_t

i.e. including the wraparound resulting from the 'exact' value
being out of range for an int32_t.

The wraparound won't happen when frame times are cast to 64 bit.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-14 Thread Fons Adriaensen
On Tue, Feb 14, 2023 at 03:57:05PM +0100, Wim Taymans wrote:

> > The real difference between the two methods is 'sample count'
> > versus 'time' as the source of the event that starts a period.
> >
> > I always wondered why one would use a timer, it just amounts
> > to polling. Suppose you look every 1 ms to check if there
> 
> You don't need to use polling with timerfd, just set the timeout
> according to some clock,
> add the timerfd to some poll loop and it wakes up on time. 

It's of course not 'active polling' (spending all CPU time on
testing a condition), but it is still polling in the sense
that it is NOT the event you wait for (having enough samples
to start a Jack cycle) that wakes you up. When using a timer
you just test for that condition periodically, which means
you can be up to that period late.

To avoid loss of period processing time, the timer period
must be a very small fraction of the Jack period time. And
then I wonder what is the advantage.

> Very much like how ALSA wakes you up when a period expires.

AFAIK, ALSA doesn't use timers for that.
For a sound card on e.g. a PCI bus the start of cycle would
be the indirect result of an hardware interrupt. For USB
or firewire cards, it would be triggered by an event from
the lower (USB/firewire) layers.

BTW, what about the 'signed differences' issue I pointed
out earlier ?

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-10 Thread Fons Adriaensen
On Thu, Feb 09, 2023 at 01:34:52PM +0100, Wim Taymans wrote:

> real JACK is more mature and does things differently (mostly device
> wakeup with IRQ instead of timers)

The real difference between the two methods is 'sample count'
versus 'time' as the source of the event that starts a period.

I always wondered why one would use a timer, it just amounts
to polling. Suppose you look every 1 ms to check if there
are enough samples for a period. That means you can be up
to 1 ms late. Compare that to the period time of 1.33 ms
when using 64 samples / 48 kHz. Up to 3/4 of the available
time to compute a period could be lost...

Or am I missing something ?

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Re: Status of Pipewire

2023-02-09 Thread Fons Adriaensen
Hello Wim,

Thanks for the info.


> > Q5. Do all Jack clients see the same (and correct) info
> > regarding the state of the DLL in all cases ?
> 
> Yes, if they are using the same driver.

I have not yet looked at the actual DLL, but some of
the related functions seem to be wrong.


In pipewire-jack.c:


jack_time_t jack_frames_to_time():

   df = (frames - pos->clock.position) * ...


jack_nframes_t jack_time_to_frames():

   du = (usecs - pos->clock.nsec/SPA_NSEC_PER_USEC) * ...


In both cases, the value computed inside the () should be
signed. But since both arguments are unsigned, so will be
their difference. See the original jack sources for how
to handle this.

Similar considerations apply in all related functions,
so I do expect some other bugs.

Ciao,

-- 
FA

___
Linux-audio-dev mailing list -- linux-audio-dev@lists.linuxaudio.org
To unsubscribe send an email to linux-audio-dev-le...@lists.linuxaudio.org


[LAD] Status of Pipewire

2023-02-08 Thread Fons Adriaensen
Hello all,

I've been contemplating trying out Pipewire as a replacement
for Jack. What is holding me back is a what seems to be a
serious lack of information. I'm not prepared to spend a lot
of time and risk breaking a perfectly working system just to
find out that it was a bad idea from the start. So I have a
lot of questions which maybe some of you reading this can
answer. Thanks in advance for all useful information.

A first thing to consider is that I actually *like* the
separation of the 'desktop' and 'pro audio' worlds that
using Jack provides. I don't want the former to interfere
(or just be able to do so) with the latter. Even so, it may
be useful in some cases to route e.g. browser audio or a
video conference to the Jack world. So the ideal solution
for me would be the have Pipewire as a Jack client.
So first question:

Q1. Is that still possible ? If not, why not ?

If the answer is no, then all of the following become
relevant.

Q2. Does Pipewire as a Jack replacement work, in a reliable
way [1], in real-life conditions, with tens of clients,
each maybe having up to a hundred ports ?

Q3. What overhead (memory, CPU) is incurred for such large
systems, compared to plain old Jack ?

A key feature of Jack is that all clients share a common idea
of what a 'period' is, including its timing. In particular
the information provided by jack_get_cycle_times(), which is
basically the state of the DLL and identical for all clients
in any particular period. Now if Pipewire allows (non-Jack)
clients with arbitrary periods (and even sample rates)

Q4. Where is the DLL and what does it lock to when Pipewire
is emulating Jack ?

Q5. Do all Jack clients see the same (and correct) info
regarding the state of the DLL in all cases ?

The only way I can see this being OK would be that the Jack
emulation is not just a collection of Pipewire clients which
happen to have compatible parameters, but actually a dedicated
subsystem that operates almost independently of what the rest
of Pipewire is up to. Which in turn means that having Pipewire
as a Jack client would be the simpler (and hence preferred)
solution.


[1] which means I won't fall flat on my face in front of
a customer or a concert audience because of some software
hiccup.

Ciao,

-- 
FA






Re: [LAD] relative_dynamics.lv2

2022-11-19 Thread Fons Adriaensen
On Thu, Nov 17, 2022 at 12:49:25PM +0100, Florian Paul Schmidt wrote:

> this is another "weird" plugin which I finally found the time to
> implement. It's a dynamics processor that doesn't care about absolute
> thresholds. It just computes two envelopes, env1 and env2, filtered with
> exponential smoothing filters with different time constants t1 and t2
> (usually t1 < t2). Their ratio r is computed and the audio signal is
> then multiplied by 1/r.

I wrote something similar a few months ago. It doesn't change the gain,
but replaces short (a few ms) high level fragments by a recent lower
level one, crossfading between them with some look-ahead.

The purpose was to remove the high level impulsive noise (caused by
little pebbles being moved in the nearby surf) from the audio of an
underwater webcam:



BTW, this is one of the places I go to for scuba diving in the summer. 
The 'landscape' is entirely artificial, created by the local diving
center by moving big and small rocks from another site about 1 km
away. This is the place they use for the 'discover scuba diving' program,
a 60-90 minutes excursion for people who don't yet have a scuba
certification (depth is less than 10 m). It has become very popular
with the local fish population (including lots of moray eels) who will
show up in big numbers as soon as they notice the divers.

Ciao,

-- 
FA

  

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] stereospread.lv2 - Stereo spreading plugin (conjugate random phase all-pass filter pair)

2022-11-18 Thread Fons Adriaensen
On Fri, Nov 18, 2022 at 05:28:15PM +0100, Fons Adriaensen wrote:
 
> Create a filter, using an inverse FFT, that doesn't modify the phase
> but just has some random gain between -1 and +1 in each frequency bin.

Things could work as well with some completely random FIR filter 
and without the delay.

The trick is that adding something for L and subtracting the same
for R will always result in the sum being equal to 2 * the input.

Ciao,

-- 
FA




Re: [LAD] stereospread.lv2 - Stereo spreading plugin (conjugate random phase all-pass filter pair)

2022-11-18 Thread Fons Adriaensen
On Fri, Nov 18, 2022 at 02:10:04PM +0100, Florian Paul Schmidt wrote:

> This gain can be cancelled by scaling the magnitude of the fft bin by
> 1/(2*cos(theta)). This then ensures summing to the original signal.

You'd need quite a high (up to infinite) gain if the phase is near 90
degrees.
 
There is a much simpler solution:

Create a filter, using an inverse FFT, that doesn't modify the phase
but just has some random gain between -1 and +1 in each frequency bin.
Let F be that filter. It will be symmetric in the time domain, so have
a delay equal to half its length. Let D be that delay.

With X your input signal, compute

X1 = D(X)      # X delayed
Y  = G * F(X)  # X filtered and delayed, gain G

L = 0.5 * (X1 + Y)
R = 0.5 * (X1 - Y)

where the user parameter G controls the amount of effect.

Summing L and R will always give X1, the delayed input.
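
A quick numerical check of this scheme (a Python/numpy sketch with my own choice of filter length, gain and seed, not taken from any released plugin):

```python
import numpy as np

# Sketch of the scheme above: a zero-phase FIR is built by giving each
# frequency bin a random real gain in [-1, +1] (inverse real FFT of a
# real spectrum).  Filter length, G and the RNG seed are arbitrary
# choices for this demo.
rng = np.random.default_rng(0)
H   = rng.uniform(-1.0, 1.0, 257)      # real bin gains -> 512-tap filter
f   = np.fft.irfft(H)                  # symmetric around t = 0
f   = np.roll(f, len(f) // 2)          # make it causal: delay D = 256

x  = rng.standard_normal(4096)                   # input X
d  = len(f) // 2
x1 = np.concatenate([np.zeros(d), x])[:len(x)]   # X1 = D(X)
y  = 0.5 * np.convolve(x, f)[:len(x)]            # Y = G * F(X), G = 0.5

L = 0.5 * (x1 + y)
R = 0.5 * (x1 - y)

# Whatever F does, the mono sum is exactly the delayed input.
print(np.allclose(L + R, x1))
```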

-- 
FA





Re: [LAD] stereospread.lv2 - Stereo spreading plugin (conjugate random phase all-pass filter pair)

2022-11-16 Thread Fons Adriaensen
On Wed, Nov 16, 2022 at 12:51:44PM +0100, Florian Paul Schmidt wrote:
 
> This ensures that what is a phase theta in the first filter becomes a
> phase of -theta in the second filter, and summed that just gives a phase
> of 0.

1. If I understand this correctly the L and R outputs have opposite phase
   shifts. That means they will not sum to the input. Just assume the L
   shift is 90 degrees. Then R is -90, and they will just cancel.

2. If you measure this, you will also note amplitude differences between
   L and R outputs. This is to be expected. Even if the two filters have
   exact unity gain (and just a phase shift) at each frequency corresponding
   to an FFT bin, the resulting filter will not be all-pass.

3. At high frequencies (above 1 kHz or so), it's actually the amplitude
   differences and not the phase shifts that create the stereo effect.


Ciao,

-- 
FA



[LAD] Linear prediction

2022-09-28 Thread Fons Adriaensen
Hello all,

Any experts on linear prediction here ?

During my current holidays in Greece, apart from SCUBA diving,
I've been reading up and doing a lot of simulations (in Python)
of algorithms related to linear prediction. This has been - I
must admit - a gaping hole in my practical DSP experience up
to now.

Things work well, but there is one thing that remains strange.

It is usually said (see e.g. JOS) that the required order of
the prediction should be a bit higher than twice the number
of resonances (or formants for voice processing) in the signal
to be predicted. This makes sense as each resonance is in
essence a second-order process.

From my experiments it seems that at a sample rate of e.g. 48 kHz
(indeed much higher than normally used for speech processing),
a much higher order is required to model a low frequency formant
at e.g. 350 Hz.

What seems to happen at low order is that processing the prediction
error (residual) by the synthesis filter produces an almost
perfect reconstruction of the input, but that the filter is
actually just modelling the -6 dB/oct slope above the resonance,
while the resonance itself remains in the residual.

So e.g. trying to move the formant by modifying the filter will
fail. 

As far as I can see, a prediction order of around 50 is required
to correctly model a resonance at such low frequencies.

So my question now is this: is the rule mentioned above just
wrong, or am I missing something ?
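
For anyone wanting to reproduce this kind of experiment, here is a minimal sketch of my own (a single noise-driven resonance at 350 Hz, fs = 48 kHz, fitted by Yule-Walker LPC; the bandwidth and signal length are illustrative choices, not the original test conditions):

```python
import numpy as np

# One noise-driven resonance at 350 Hz, fs = 48 kHz (bandwidth chosen
# arbitrarily), modelled by LPC of increasing order using the
# autocorrelation (Yule-Walker) method.
fs, f0, bw = 48000.0, 350.0, 50.0
r  = np.exp(-np.pi * bw / fs)                  # pole radius
a1 = -2.0 * r * np.cos(2.0 * np.pi * f0 / fs)  # denominator coefficients
a2 = r * r

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.zeros_like(x)
for n in range(2, len(x)):                     # all-pole resonator
    y[n] = x[n] - a1 * y[n - 1] - a2 * y[n - 2]

def lpc_residual_var(sig, order):
    """Yule-Walker LPC: solve R a = r, return prediction error variance."""
    c = np.correlate(sig, sig, 'full')[len(sig) - 1:]
    R = np.array([[c[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, c[1:order + 1])
    pred = sum(a[k] * sig[order - 1 - k:len(sig) - 1 - k] for k in range(order))
    return np.var(sig[order:] - pred)

for order in (2, 10, 50):
    print(order, lpc_residual_var(y, order))   # residual variance vs order
```

Whether the low-frequency resonance is really captured (and not left in the residual) can then be checked by inspecting the residual spectrum at each order.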

Ciao,

-- 
FA





Re: [LAD] installing python extensions

2022-09-22 Thread Fons Adriaensen
On Thu, Sep 22, 2022 at 09:49:41AM -0400, Marc Lavallée wrote:
 
> Le 2022-09-22 à 03 h 56, Fons Adriaensen a écrit :
> >error: invalid command 'bdist_wheel'
> It looks like wheel is not installed (locally or globally). Try installing
> it with "pip install wheel", or install it on the system (python3-wheel on
> Debian); it could be enough to fix the issue.

That was it, many thanks !

To me it looked as if pip didn't know the bdist_wheel command, and indeed
pip help-commands didn't include it. No indication at all that something
else was missing... 

The relations and dependencies between the various python tools - pip,
setup, wheel, ... remain a mystery to me, and there seems to be little
up-to-date documentation. Examples all assume you want to make a package
for PyPI and no other use cases...
 
> > Have things changed again 
> Python is changing faster now, so testing on different versions is a good
> idea.

With Archlinux you always (only) get the latest and greatest :-)

Ciao,

-- 
FA



Re: [LAD] installing python extensions

2022-09-22 Thread Fons Adriaensen
On Mon, Aug 15, 2022 at 05:48:34PM -0400, Marc Lavallée wrote:
 
> So the Makefile could be:
> 
> PIP = python3 -m pip
> PKG = zita_audiotools
> 
> build:
>     $(PIP) wheel .
> 
> install:
>     $(PIP) install --force-reinstall $(PKG)*.whl
> 
> uninstall:
>     $(PIP) uninstall $(PKG)
> 
> clean:
>     rm -rf build $(PKG)*.egg-info $(PKG)*.whl

This worked perfectly a month ago on my main system. Now I'm on holiday
with only my laptop, and it fails:

python3 -m pip wheel .
Processing /home/fons/python/jackimpfilt
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: jackimpfilt
  Building wheel for jackimpfilt (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
  usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
 or: setup.py --help [cmd1 cmd2 ...]
 or: setup.py --help-commands
 or: setup.py cmd --help
  
  error: invalid command 'bdist_wheel'
  [end of output]


python version is 3.10.5

Another difference IIRC is that on my main system pip would set up
a virtual environment, while this doesn't seem to happen here.

Have things changed again ?

-- 
FA



Re: [LAD] installing python extensions

2022-08-15 Thread Fons Adriaensen
On Mon, Aug 15, 2022 at 10:54:06AM -0400, Marc Lavallée wrote:

> Christopher Arndt sent a long and detailed answer, here's a shorter one.

One (metric) unit of Eternal Gratitude to both of you.

So 
> pip install .
will install to ~/.local/lib

while
> sudo pip install .
will install to /usr/local/lib

I find the destination directory depending on who the user pretends
to be a bit strange, but it works !

Also tested this with a package that includes data files (*.npy)
that should be found by the installed code, and also this works.

The only minor problem is that the sudo version leaves two
directories (build and *.egg-info) that can only be cleaned up
by root. No problem on systems that allow sudo everything, but
I may keep the Makefile just to offer 'sudo make clean', 
assuming most systems will allow this. Unless there is a
cleaner solution.

Ciao

-- 
FA



[LAD] installing python extensions

2022-08-15 Thread Fons Adriaensen
Hello all,

I have some mixed python/C++ packages, e.g. zita-audiotools
and zita-jacktools.

To install these I expect the following to happen:

1. The C++ parts are compiled and combined into a *.so
   file which is a python extension.
2. The *.so and the python parts, any data etc. get
   installed into the system's /usr/lib/python*.*/site-packages.

To make this as easy as possible I provide a setup.py and a 
Makefile, so that all that should be required is:

make; sudo make install

Originally this used distutils, when that got 'deprecated'
this changed to setuptools. So until recently the Makefile
was something like: 


PY = /usr/bin/python3

build:
	$(PY) ./setup.py build

install:
	$(PY) ./setup.py install

clean:
	$(PY) ./setup.py clean
	rm -rf build dist zita_jacktools.egg-info
---

Then I got warnings telling me that calling setup.py directly
is now  also deprecated, and that I should use 'official tools'
to build and install. What exactly that means I was unable to
find out, but the following seems to work:


PY = /usr/bin/python3

build:
	$(PY) -m build -w

install:
	pip install --force-reinstall dist/*.whl

clean:
	rm -rf build dist *.egg-info *~


But this still produces a warning:

> WARNING: Running pip as the 'root' user can result in broken
> permissions and conflicting behaviour with the system package
> manager. It is recommended to use a virtual environment instead.

Now clearly installing things in site-packages requires root,
so what is then the recommended method ?? And why the virtual
environment (which is used by build anyway) ??

If anyone can shed some light on this mess he/she will deserve
my eternal gratitude.

Ciao,

-- 
FA




Re: [LAD] library soname, was Re: Rubber Band Library v3.0.0 released

2022-07-28 Thread Fons Adriaensen
On Thu, Jul 28, 2022 at 02:52:03PM +0100, Chris Cannam wrote:

> This implies that if you add a function, you need not change the soname.

In this case you didn't just add a function, but a completely new and
improved algorithm. That's reason enough to increment the major version,
even if only for 'marketing'. And more so if you also offer a commercial
license.

Ciao,

-- 
FA
 


Re: [LAD] Rubber Band Library v3.0.0 released

2022-07-16 Thread Fons Adriaensen
On Sat, Jul 16, 2022 at 12:02:14AM +0200, Robin Gareus wrote:
 
> Congrats on the release and thanks for the very informative blog post.

n++;

From the blog:

'Time-stretching in contrast is often useful but marvellously ill-defined.' 

Indeed. And not only in terms of musical or aesthetic considerations as 
illustrated by the examples, but even in a fundamental mathematical way:
there isn’t enough information in the signal to specify what should be
the 'correct' result - this always involves a degree of subjective 
interpretation and choice.

Which puts developing a library such as Rubberband on a very different
level when compared to e.g. resampling or convolution for which at least
the expected output is exactly defined.

And that's one of the reasons why I consider Rubberband to be one
of the true gems of open source audio software.

Ciao,

-- 
FA



Re: [LAD] MIDI granularity

2022-06-16 Thread Fons Adriaensen
On Thu, Jun 16, 2022 at 08:37:49PM +0100, Will Godfrey wrote:

> Over a hardware DIN port this is of course approx. 1mS, but does anyone know
> if it's the same over a USB link?
> 
> Presumably, it would have to be if the source was also sending to the DIN
> route.

Not really, it could be much faster. When the source starts transmitting the
first byte on the standard MIDI port, it probably has the complete message
(3 bytes for a note on/off) ready. There is no reason why the USB message
should be derived from the serial MIDI data.
 
> Following on from that, what about the multiport adaptors that have 4 hardware
> ports going down one USB cable. I would guess that these could be interleaved
> so that (assuming the 1mS granularity holds) the overall rate is still 1mS 
> with
> the ports spaced up to 250uS apart.

AFAIK, USB transfers are not interleaved at byte level, doing that  would
create a lot of overhead. So even if you have a note event on all four ports
at the same time, that would probably be 4 separate USB transfers, one after
the other. Still this could take less than 1 ms, it just depends on the USB
data rate.
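
For reference, the arithmetic behind that 'approx. 1 ms' figure for serial DIN MIDI (standard figures: 31250 baud, 10 bits per byte including start and stop bits):

```python
# Serial (DIN) MIDI timing: 31250 baud, 10 bits per byte
# (1 start + 8 data + 1 stop).  A note on/off message is 3 bytes.
baud = 31250
byte_time = 10 / baud              # seconds per byte
note_time = 3 * byte_time          # 3-byte note on/off message

print(f"one byte: {byte_time * 1e6:.0f} us")    # 320 us
print(f"note on : {note_time * 1e3:.2f} ms")    # 0.96 ms
```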

Ciao,

-- 
FA



Re: [LAD] A GUI request

2022-06-08 Thread Fons Adriaensen
On Tue, Jun 07, 2022 at 02:15:12PM +0100, Will Godfrey wrote:

> In audio software, the classic example being Phasex which seems to
> get just about everything wrong :(

:-) A good example of the 'identical items arranged in a line'
which I mentioned before. 

It's by no means the only one which gets about everything wrong.
Just have a look at .

Ciao,

-- 
FA



Re: [LAD] A GUI request

2022-06-07 Thread Fons Adriaensen
On Tue, Jun 07, 2022 at 08:49:36AM +0200, Thorsten Wilms wrote:

> So far I thought the differences are all about much higher requirements
> on readability at a glance and stableness.

Not just 'at a glance' but also in exceptional circumstances
and under stress. For example imagine your aircraft is damaged
and shaking violently. Your eyeballs will be shaking as well.
Now if you have a series of identical items arranged in a line
on a display, it will be extremely difficult to read them, you
just won't know which is which. Also imagine having to use a
touch panel in those conditions, it would be just impossible.

> More contrast

Enough but not too much. Lighting conditions can be extreme in
a cockpit. If you're landing e.g. at LGIR (Herakleon, Crete)
on RWY 27 in the late afternoon you will be looking straight
into the sun for minutes and wearing sunglasses. Come a few
hours later and it will be pitch black. Displays must be
readable and not induce eye fatigue in both cases. BTW, since
most flat panel displays produce polarised light, pilot's
sunglasses must NOT be polarised.

> avoiding superfluous styling, no deep layering.

Yep. And no animation, popups, etc.

> Being able to rely on training much more.

You rely on training and professional knowledge instead of
random 'exploration'. Which also means function is indicated
using standard (English) words or acronyms, and not by icons 
(which can be much more ambiguous than most people imagine).
Also accessibility is not an issue.

> Fons, do you have examples of such guidelines that don‘t work
> for cockpits, that may surprise the layman? 

Many (if not most) computer applications are about 'editing'
some sort of document. Even a DAW fits into that category
when used to create music - but not e.g. when used 'live',
just as a mixer and/or playback device.

Controlling an aircraft (or any machine) is something very
different. So the whole set of standard menus like 'File',
'Edit', 'Tools' etc. doesn't make much sense.

Some of the requirements could be unexpected. For example
it needs to be unconditionally clear if some function is
'active' or just 'armed' (meaning it will automatically 
become 'active' later). Or if some displayed value is the
'actual' one or the 'target' one that e.g. an autopilot
will try to achieve. Usually this is done by consistent
use of colour.

That said, all controls that directly affect operation will
be hardware ones, not items in a menu or toolbar. Displays
such as the PFD and NAV panels are just displays and not
used for input. The only exception to that would be the
MCDUs - the things in the central pedestal between the
pilots that look like a 'calculator on steroids'. These 
provide an interface to almost everything in a modern
aircraft. Recent models actually have a trackball and
are used much like a conventional PC application. But
they are for setup and information lookup only. 

There are two important aspects of user interface design
in a modern 'glass cockpit', and they can be at odds:

* Maintain situational awareness. Automation is fine but
the pilots need to be aware of what it is doing at all times.
This can be quite complex.

* Avoid information overload in emergency conditions.
Pilots are trained to prioritise and divide their tasks,
but this can still be a problem, and a lot of research
is done to avoid it.

Ciao,

-- 
FA



Re: [LAD] A GUI request

2022-06-06 Thread Fons Adriaensen
On Sun, Jun 05, 2022 at 09:49:13PM +0100, Will Godfrey wrote:

> As a {cough} sprightly 73 year old I'm starting to have sight difficulties -
> especially dark colours and night vision. With the current trend of dark GUI
> shades this presents me with a problem when using new software. Some, but not
> all of these programs do provide alternatives. However they default to dark, 
> and
> I have a hard time finding how to change this, or indeed, even seeing which
> options are available.

Being a few but not so much years younger I do understand the problems you
may have. But I don't think the essence of this can be simplified to 'light'
vs 'dark' themes. There are good reasons for having dark themes - working in
a dark environment like a concert being the most evident one.

Even a dark theme can be perfectly readable if designed well. This may involve
more than just changing the colors of a light one. For example, you'll probably
need 'bolder' fonts as well. And the background should never be completely black
but provide an amount of brightness that allows your eyes to adjust to it. It
is the inability of your vision to adjust that makes many dark themes hard to
use.

Simple fact is that all popular GUI toolsets are targeted to developing 'office'
or 'social' type of applications and completely fail to address the needs for
anything outside that limited scope. There is much more to this than just the
choice of colors. 

I've been involved in creating displays used in aircraft cockpits and similar
technical environments. Almost all of the 'standard' GUI design guidelines
(as advocated by 'computer science' academics) have been shown to be either
irrelevant or just plain wrong for such applications. That probably includes
graphical interfaces for pro-audio systems. 

Ciao,

-- 
FA



Re: [LAD] Fw: Re: Deriving a steady MIDI clock crossplatform?

2022-05-19 Thread Fons Adriaensen
On Thu, May 19, 2022 at 11:17:28PM +0200, Jeanette C. wrote:

> Thanks both of you. Based on timed_wait, I looked at the boost libraries and
> found two candidates in the Thread library:
> conditional_varaible::timed_wait
> and
> thread::sleep_until
> which can both take an absolute time. I suppose both should be equally
> usable, since again absolute time can be calculated independent of the last
> wait/sleep?

Better check how this is implemented. It could just use usleep().
And then all precision is lost, even if the sleep times are adjusted to
compensate for the latency of the previous event.

Ciao,

-- 
FA





Re: [LAD] Fw: Re: Deriving a steady MIDI clock crossplatform?

2022-05-19 Thread Fons Adriaensen
On Thu, May 19, 2022 at 10:40:48PM +0200, Robin Gareus wrote:
 
> While there is a corresponding mach/clock.h, for the case at hand it is
> preferable to use Apple's Core Audio, CoreMIDI. MIDI Event scheduling is
> abstracted, and there is dedicated API to convert timestamped events
> with high precision:
> 
> AudioConvertNanosToHostTime() and AudioConvertHostTimeToNanos()

The problem here is not conversion, but what to wait for.

In zita-convolver.h there is an implementation of sem_t for OSX
(which only has a crippled implementation), using a condition
variable. It doesn't have sem_timedwait(), (since zita-convolver
doesn't need it) but that could be added quite easily.

Ciao,

-- 
FA



[LAD] Fw: Re: Deriving a steady MIDI clock crossplatform?

2022-05-19 Thread Fons Adriaensen
- Forwarded message from Fons Adriaensen  -

Date: Thu, 19 May 2022 22:22:12 +0200
From: Fons Adriaensen 
To: "Jeanette C." 

On Thu, May 19, 2022 at 07:52:33PM +0200, Jeanette C. wrote:
 
> I know about one or two applications that use the timeofday/sleep mechanism,
> but from first hand experience I know that these tend to drift and wobble.

The key to do this is to have a high priority thread waiting for an
*absolute* time, and then each time increment that time by the 
required delta.

Note that this is fundamentally different from using sleep or similar
functions. With those you wait for a certain time. So if your previous
event was late, the next one will be late as well just because you
start waiting for it too late. So all the errors will add up, and
you will *never* get the correct event frequency.

When you wait until an absolute time, any latency on the previous
event does not affect the following ones. The errors don't accumulate.

So what to wait for ? That could be any system call that takes an
absolute timeout rather than a maximum waiting time. On Linux I'd
use something like sem_timedwait(). To set the initial timeout,
the corresponding clock can be read with clock_gettime(), using the
CLOCK_MONOTONIC option.

Don't know about Apple. Last time I looked it didn't have clock_gettime(),
but it has gettimeofday(). Note that it is not gettimeofday() that is
the cause of the problem you mentioned, it is using sleep() or usleep().
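
The principle is easy to demonstrate in any language. A small Python sketch of the absolute-deadline scheme (the post suggests sem_timedwait() in C; this only illustrates the idea, not a specific API):

```python
import time

def metronome(delta, nticks, tick):
    # The next deadline is always advanced from the *previous deadline*,
    # never from 'now', so a late wake-up does not shift later events
    # and timing errors cannot accumulate.
    next_t = time.monotonic() + delta
    for _ in range(nticks):
        remaining = next_t - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)  # stand-in for an absolute timed wait
        tick()
        next_t += delta

stamps = []
metronome(0.01, 5, lambda: stamps.append(time.monotonic()))
# The total span stays close to 4 * delta even if single sleeps are late.
print(stamps[-1] - stamps[0])
```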

Ciao,

-- 
FA

  

- End forwarded message -


[LAD] Aeolus update

2022-05-04 Thread Fons Adriaensen
Hello all,

Version 0.10.1 of Aeolus is now available at the usual place:



* Cleanup, maintenance, bug fixes.

The biggest bug was probably that the 'instability' and 'release
detune' parameters set in the stops editor were correctly stored
into the *.ae0 files which contain the stop definitions, but NOT
copied into the *.ae1 files which contain the precomputed wavetables
and run-time synthesis parameters.

So they would work only when the wavetables were recomputed
on a running Aeolus instance (e.g. by changing tuning or
temperament), and not when previously stored ones were reloaded.

This makes quite a difference, as without the random delay
modulation which is controlled by 'instability', the looped
parts of the wavetables just become a static sound.

You may also get stops-0.4.0. This includes some tweaks that I
have done on my local copy over the past years, but is probably
not much different from 0.3.0. You may need to modify your
~/.aeolusrc to use these.

---

Apart from bug fixes, this will be the last release using the
current Aeolus framework.

A completely new one is in the pipeline, but it still requires
a lot of new code, testing and tuning. This will provide:

* 'Chiff', the filtered noise that some pipes generate.
  I've finally found an algorithm that produces realistic
  results and that is efficient enough to work on lots
  of pipes.

* Using multiple CPU cores.

* Higher order Ambisonics output.

* Binaural output (with optional head tracking).

* Full separation of UI and synthesis processes,
  connected via a network connection.


Ciao,

-- 
FA






[LAD] Updates of zita-audiotools and zita-jacktools

2022-04-22 Thread Fons Adriaensen
Hello all,

zita-audiotools-1.3.0 and zita-jacktools-1.6.0 are now available at
.

If you have previous versions, installing these updates will require
some extra work. This is for two reasons:

* The python package names have changed:
audiotools -> zita_audiotools
jacktools  -> zita_jacktools

* The setup.py files now use setuptools instead of the deprecated
  distutils, and will install a python 'egg' directory.

This requires the following actions:

* Before installation, remove all traces of any previous versions
  from your python site-packages directory. Then just make; sudo
  make install; sudo make clean.

* In existing applications, the import statements must be modified
  to refer to the new package names.


Thanks to Marc Lavallée for both suggesting the name change and
testing the new releases.


Ciao,

-- 
FA



Re: [LAD] Csound question

2022-03-06 Thread Fons Adriaensen
Hi Jeanette,

> if (lastcycle() == 1) then

Thanks very much for all the suggestions !

Not having used Csound for at least 15 years, there's
a lot I will have to learn (again).

Ciao,

-- 
FA



[LAD] Csound question

2022-03-06 Thread Fons Adriaensen
Hello all,

See below...

What I want to achieve is to take some action when instr 85 ends.
I naively tried using 'gidur' as p3 for instr 85 in the score, but
that doesn't work.

So how do I trigger instr 86 at the right time ?

<CsoundSynthesizer>
<CsInstruments>

instr 84  
gidur   filelen $INPFILE
print   gidur
endin
 

instr 85; set its duration from the value found in instrument 84
p3 = gidur
; process input from $INPFILE
endin


instr 86
; Should do something when instr 85 ends.
endin


</CsInstruments>
<CsScore>

i84 0  0.1 
i85 +  1   ; p3 is just a dummy
</CsScore>
</CsoundSynthesizer>

TIA,

-- 
FA



[LAD] Jack Transport Question

2022-02-04 Thread Fons Adriaensen
Hello all,

Looking at the Jack Transport state machine on 
, there is only
one 'reposition' transition, and it goes to the 'Starting' state
which then sooner or later will go to 'Rolling'.

Q1: Does this mean it is impossible to reposition without starting ?

Or is there just a transition missing in the diagram from 'Stopped'
to itself ?

Q2: Is there any way to find out, while 'Stopped', if all clients
are ready to start immediately without actually starting ?

I'd say at least one more state would be required.

Ciao,

-- 
FA









[LAD] Any csound experts here ?

2022-02-01 Thread Fons Adriaensen
Hello all,

I'm trying to help someone (OSX user trying out Linux) use Csound
with Jack. What I'd need to know is which are the Csound command
line options to 

* run Csound with Jack,
* using Ninp input ports and Nout output ports,
* not autoconnecting any ports,

if that is possible at all...

TIA,

-- 
FA



Re: [LAD] pipewire

2022-01-20 Thread Fons Adriaensen
Hello, Wim,

> Sorry, git for now. I just started to implement the last bits to make a
> session manager optional.

OK, I'll wait until this is available via Arch (don't want to mix up
two potential problems, build/install and configure...)
 

> All alsa devices are wrapped in an adapter. This contains:
> 
>-> channelmix -> resample -> convert -> alsa-device
> 
> channelmix and resample are disabled when the graph sample rate
> and  device channels all match the adapter ports, you would configure
> the same number of channels on the output as the alsa-device channels
> and set the graph rate to something the hw supports.

In the example config, node.param.Portconfig.format.channels is
hardcoded. Is there a way to obtain the number of channels the
device supports (to make sure the values match) ? ALSA does
provide this info once the device is opened...

What will happen if the configured number of channels does not
match ? What sort of channelmix will I get ? 

> The channelmix is mostly to support making a 5.1 sink that can downmix
> to dolby or some other tricks.

This can be a very devious thing. I remember an occcasion some
years ago (when I was in Parma) when some students were doing
measurements in a rented anechoic room during an entire week.
Later, when the measurements were processed, they discovered
that all of them were useless because their (Windows) system
had been trying to be clever and had applied gain changes and
channel mixing without them being aware. Now work out the cost
of renting an anechoic room for 60 hours. Plus, if they hadn't
been students and 'free labour', the consultancy fees for the
same period.

For any serious work, there are things that need to be
disabled without any chance of them ever being re-enabled by
accident. The required result when something does not match
is to fail and report, not to try and be clever and 'fix'
things behind the user's back.

The idea of having the daemon do the 'plumbing' and the
session manager define 'policies' is a very good one.
But to take that to its logical consequence, the defaults
for any optional processing (channel gains and mixing,
resampling,...) should be off and disabled. If any other
defaults make sense for the 'average user', they should
be defaults defined by the session manager, not by the
plumbing daemon. 
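For illustration only, here is the kind of drop-in I would expect a
session manager or user to provide (a sketch based on my reading of
the PipeWire docs; the file location and exact property names should
be checked against the installed version):

```ini
# Hypothetical drop-in, e.g. ~/.config/pipewire/client.conf.d/no-dsp.conf
# Disable optional stream processing: pass audio through unmodified
# rather than silently converting behind the user's back.
stream.properties = {
    channelmix.disable   = true
    resample.disable     = true
    channelmix.normalize = false
}
```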


There is one feature that would be very desirable and
for which I would even be prepared to write an ad-hoc
session manager if that is the only place it can be
done: if a sound card becomes unavailable while in use,
substitute a dummy device with the same sample rate,
period, and number of channels, so the entire processing
graph remains intact and running. Then, when the device
becomes available again, allow the user to reconnect
to it (this must NOT be automatic). This is to minimise
'down time' when someone accidentally pulls a cable
during a concert or recording (a classical orchestra
is orders of magnitude more expensive than an anechoic
room).

Ciao,

-- 
FA







  



Re: [LAD] pipewire

2022-01-19 Thread Fons Adriaensen
On Tue, Jan 18, 2022 at 07:16:39PM +0100, Wim Taymans wrote:

> As a bare minimum you would need pipewire (the daemon) and
> pipewire-jack (the libjack.so client implementation). With a custom
> config file you can make this work exactly like jack (see below).

Thanks, will try this, but many questions remain (see below).
 
> All the system integration (dbus, systemd and the automatic stuff)
> happens in the session manager. You don't need to run this.

Aha, that is good news. 
 
> You'll need the pipewire git version

Will things work with the current Arch packages and do I
need git just because it has the minimal.conf file ?
If yes I'd prefer to use the Arch packages for now.

Questions:

* A lot of lines in the minimal.conf are commented. Can one
  assume that these correspond to the defaults ? If not, what
  are the defaults for all these parameters ? 

  What concerns me here is things like 

#channelmix.normalize  = true
  
  I certainly do not want any normalisation, so do I need
  to set this to false explicitly ?


* The sample rate (48000) is in many places, most of them
  commented out. What is the relation between all of these ?
  Why, for example, is 'default.clock.rate' commented ?

* 'quantum', 'period-size' and 'period-num' are commented
  out everywhere (except in 'vm.override'). So where is
  the period size defined ?

* If things like sample rate, period size, etc. are set
  to some fixed values in the config, can they still be
  modified by e.g. pw-metadata ? I hope not...


Ciao,

-- 
FA



[LAD] pipewire

2022-01-17 Thread Fons Adriaensen
Hello all,

I'd like to test pipewire as a replacement for Jack (on Arch),
and have been reading most (I think) of the available docs.

What is clear is that I will need to install the pipewire
and pipewire-jack packages.

And then ?

How do I tell pipewire to use e.g. hw:3,0 and make all of
its 64 channels appear as capture/playback ports in qjackctl ?

Note: I do not have anything PulseAudio (like pavucontrol)
installed and don't want to either. If that would be a
requirement then I'll just forget about using pipewire.

TIA,

-- 
FA



Re: [LAD] Quantise MIDI note/frequency to musical scale: algorithm?

2021-12-31 Thread Fons Adriaensen
On Fri, Dec 31, 2021 at 12:58:31AM +0100, Jeanette C. wrote:

> OK, the project I'm working on is a monophonic step sequencer. You will
> find similar functionality in some master control keyboards, softsynths
> and other DAWs. It's mostly for convenience's sake or to help people
> with less knowledge

In that case, one option would be to just disable the unwanted notes.
This provides immediate feedback, so people will actually learn the
set of allowed notes, and I guess that will happen quite fast.
If instead you replace the unwanted ones, the user will learn
either nothing, or the wrong things, e.g. that C# is a valid note
in a C-major scale.

> Initial thoughts on the MIDI note case included creating a 127 element
> array and fill it with notes only in the scale and then use it as a
> lookup table. So element 60 (middle C) would map to 60, whereas element
> 61 (C3) might map to 62 (D). Such a table could relatively easily be
> defined from some kind of scale definition and root note number. Though
the process did seem inelegant.

It isn't. I don't think you could find a general-purpose algorithmic
approach taking less than 127 bytes to code it.
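A minimal sketch of that table approach (hypothetical code, names made
up; tie-breaking upward so note 61 maps to 62, as in the example above):

```python
# Build a 128-entry lookup table quantising any MIDI note to the
# nearest note of a scale, given as pitch classes relative to a root.

def make_quantise_table(root, pitch_classes):
    # all MIDI notes whose pitch class is in the scale
    allowed = [n for n in range(128) if (n - root) % 12 in pitch_classes]
    # for each note pick the nearest allowed one, ties resolving upward
    return [min(allowed, key=lambda a: (abs(a - n), -a)) for n in range(128)]

MAJOR = {0, 2, 4, 5, 7, 9, 11}
table = make_quantise_table(60, MAJOR)   # C major, root = middle C
# table[60] -> 60 (C stays C), table[61] -> 62 (C# becomes D)
```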

> I think I once wrote a quantiser that did quantise any frequency to the
> nearest note in the western chromatic scale, which wasn't too difficult,
> but I can't see a way to perform the same feat with any kind of diatonic
scale, even though finding the relevant frequencies in that scale is
> almost as easy as setting up the MIDI scales above.

There is code doing this in zita-at1 (the autotuner). It has some 
refinements such as an optional preference for the previous note.
I will look this up and isolate it - it may be difficult to find
as it is integrated with other functionality.

Ciao,

-- 
FA




Re: [LAD] Quantise MIDI note/frequency to musical scale: algorithm?

2021-12-30 Thread Fons Adriaensen
On Thu, Dec 30, 2021 at 02:49:45PM -0800, Yuri wrote:

> Mapping is strictly logarithmic, i.e. in log(F) the notes are equally
> distributed. One note ends and the next begins in the middle of such an interval.
> The rest is simple math.

That would be true for 'equal temperament', which is more or less
the standard for electronic instruments. But there are hundreds of
other temperaments. 

In equal temperament a musical fifth would be the ratio 2^(7/12)
= 1.498307 and a third would be 2^(4/12) = 1.259921.

In just ('natural') or Pythagorean tuning the fifth would be
exactly 3/2, and the just third exactly 5/4. But of course such
simple ratios are possible only
in a limited set of keys.  

Which is why a lot of other temperaments exist. All of them are
some compromise between exact musical intervals and the ability
to play in any key. Organs for example are almost never tuned
in equal temperament.
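The size of those compromises is easiest to see in cents (hundredths
of an equal-tempered semitone); a quick check of the numbers above:

```python
import math

# Compare the just intervals with their equal-tempered approximations,
# expressed in cents: 1200 * log2(ratio).

def cents(ratio):
    return 1200.0 * math.log2(ratio)

print(round(cents(3 / 2) - cents(2 ** (7 / 12)), 2))  # 1.96: just fifth is wider
print(round(cents(5 / 4) - cents(2 ** (4 / 12)), 2))  # -13.69: just third is narrower
```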

-- 
FA










Re: [LAD] Programming LV2 plugin from scratch tutorial video series

2021-10-20 Thread Fons Adriaensen
On Wed, Oct 20, 2021 at 10:26:41AM -0400, David Robillard wrote:
 
> > That C isn't trying to describe the entire world.
> 
> This is a glaring straw man.  I'm not sure what you're arguing against,
> but it certainly isn't LV2.

No, it isn't. Sorry if I gave that impression. It's more likely what
you referred to when writing

> and the W3C and much of the semantic web community deserves a ton
> of criticism

Ciao,

-- 
FA




Re: [LAD] Programming LV2 plugin from scratch tutorial video series

2021-10-20 Thread Fons Adriaensen
On Tue, Oct 19, 2021 at 05:32:59PM -0400, David Robillard wrote:

> life is hard.

And complex, having real and imaginary parts.

> The only reason you can understand a C header that defines a struct
> with filter coefficients or whatever is the same.

True.

> What's the difference?

That C isn't trying to describe the entire world.

> If there's an impression here that somehow there is a machine-readable
> description of everything down to first principles that could be used
> to construct an implementation or whatever, that is /not at all/ the
> idea or intent.

Nor would it work (IMHO). But the impression is given that it would,
in particular when discussing the alleged advantages. 

When some new plugin feature is defined, then in the end some human
programmer has to understand it using his/her domain-specific knowledge.
And if that is the intent, things could be a bit simpler. Even the very
restricted C vocabulary (and some common sense) would be enough.

As a programmer I like to avoid dependencies and keep things as simple
as possible, which probably explains why my brain goes in self-defence
mode whenever I try to get any closer to LV2... Please don't take that
personally, I'm very well aware of how useful LV2 has been to the 
Linux Audio world.

I just can't help having the impression that most of the 'semantic
web' stuff is just a load of hype. And that is something I try to
keep my distance from.

Ciao,

-- 
FA



Re: [LAD] Programming LV2 plugin from scratch tutorial video series

2021-10-19 Thread Fons Adriaensen
On Tue, Oct 19, 2021 at 01:00:24PM -0400, David Robillard wrote:

First of all, thanks to all who responded !

Hi David, long time no see...

> The reason you can't "just" use the short one everywhere is that they
> are not globally unique (being the whole point). 

But why should it be globally unique ? Is a 'my-plugin.ttl' supposed
to have any meaning outside the context it is normally used in ?

I can't help having the impression (which may be completely wrong)
that all these ontologies and the way they refer to each other are
somehow supposed to create 'meaning' out of nothing. Which I think is
an illusion - and a far more dangerous one than the one I referred to
earlier.

Reading all the ontologies that relate to e.g. LV2, the only reason
why I can understand and use them is because, being an audio engineer
and a programmer, I know what a 'plugin', 'host', 'port', etc. are.
Without that knowledge, there would be no meaning. And of course 
one could add more and more and maybe even be able to somehow define
'plugin' while only referring to much more general concepts. But at
the lowest level one would always have to refer to something that
can be understood just by itself.

Ciao,

-- 
FA





Re: [LAD] Programming LV2 plugin from scratch tutorial video series

2021-10-19 Thread Fons Adriaensen
On Tue, Oct 19, 2021 at 12:13:17AM +0200, Robin Gareus wrote:

> https://www.w3.org/TR/turtle/ to the rescue :)

Been there of course...

> Instead of e.g.   http://lv2plug.in/ns/lv2core#ControlPort
> you can just write   lv2:ControlPort

That I understand. But:

1. The logic that allows this is hard coded in the LV2 host,
   it is not the result of 'including' the @prefix. Checking
   that the @prefix is present does not mean that whatever
   is hard coded corresponds to what the @prefix is supposed
   to imply. This is what I mean when saying that all this
   just provides 'an illusion of conformity'.

2. If the intention is that people use the short form, why
   bother with the long one at all ? The code that reads 
   the ttl files can simply accept the short one without
   even being aware of the equivalence to the long one.
   Which then has no reason to exist at all. So to me
this looks like a solution in search of a problem.


Ciao,

-- 
FA





Re: [LAD] Programming LV2 plugin from scratch tutorial video series

2021-10-18 Thread Fons Adriaensen
Sorry to _low_show - this was meant for the list.

On Mon, Oct 11, 2021 at 03:19:45PM +0200, _low_show wrote:

> Looks great, will make some time to try it out! Thanks for making this!
 
I somehow deleted the original post, but refer to

> https://youtu.be/51eHCA4oCEI
> https://lv2plug.in/book

I never got to grips with turtle. In particular not with
things like:

@prefix doap:   <http://usefulinc.com/ns/doap#> .
@prefix lv2:    <http://lv2plug.in/ns/lv2core#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix units:  <http://lv2plug.in/ns/extensions/units#> .

All docs and tutorials I found mention that the URLs do NOT
mean that an application reading a file that contains them
would actually need to read them from the web (which would
be an unacceptable security risk anyway).

But that means that whatever is defined by those URLs
must actually be hard-coded in any LV2 host that reads
the 'manifest.ttl' or 'my-plugin.ttl' files.

Which raises the question why those @prefix lines are 
required at all. They could be used in theory to check
that what is hard-coded corresponds to what is defined
in those URLs. But to do that the application would 
need to access them.

So all that these lines seem to provide is some illusion
of conformity which isn't enforced or checked at all. 

So the conclusion is that this isn't any better than any
ad-hoc way of encoding the plugin metadata.

Or am I missing something essential ?

TIA for any reply that would enlighten me...

-- 
FA


Re: [LAD] zita-a2j and zita-j2a zombie processes

2021-09-24 Thread Fons Adriaensen
On Thu, Sep 23, 2021 at 02:45:34PM -0700, Ethan Funk wrote:

> I am using ubuntustudio-controls (and thus autojack) to manage multi-
> USB audio interface scenarios as I continue to test my radio automation
> software port/upgrade.

Can't answer your original question, as I'm not at all familiar with
autojack.

As an alternative way of implementing this sort of thing, you could
have a look at zita-jacktools. It contains a player (with resampling
if necessary), mixer, equaliser, limiter, and some other tools.

All are a Jack client wrapped in a Python class, so you can control
them easily from a Python program. There would be no need at all to
start new processes etc. for each track.

Jacktools was originally meant to automate complex audio measurements,
but I see no reason why it couldn't do broadcast automation either.

And if you would need additional modules, just drop me a line...

Ciao,

-- 
FA




Re: [LAD] Phase rotation

2021-09-07 Thread Fons Adriaensen
On Tue, Sep 07, 2021 at 04:02:19AM +0200, Robin Gareus wrote:

> I did some listening tests, both on individual samples as well as using
> the plugin on the master-bus with various performances. In many cases it
> is audibly transparent.

Then the next question is of course: in those cases where it is audibly
transparent, what reduction in peak level does it provide ?

And second question: why should peaks be a problem ? Just reduce the
level. The listener can compensate by increasing his/her volume :-)

Ciao,

-- 
FA




Re: [LAD] Phase rotation

2021-08-31 Thread Fons Adriaensen
On Tue, Aug 31, 2021 at 04:24:43AM +0200, Robin Gareus wrote:
 
> > Now don't believe that phase shifting a signal will always result
> > in a waveform with a lower peak/RMS ratio. It could very well
> > have the opposite effect.
> 
> Well, there is a minimum. So far I just brute force detect it, trying
> all angles in 1 deg steps on a file.

Brute force indeed...

Now there is something else to consider. Using this method of course 
makes complete fools of those listeners who have spent k$ on e.g.
speakers with a good transient response, or the recording engineers
who are using expensive mics for the same reason. In other words,
this really kills whatever snappy transient response you may have
had. And in some cases you *can* hear it quite clearly. Like
everything else in the loudness wars, it kills quality.

Ciao,

-- 
FA




 


Re: [LAD] Phase rotation

2021-08-31 Thread Fons Adriaensen
On Tue, Aug 31, 2021 at 04:24:43AM +0200, Robin Gareus wrote:

> I hoped to sidestep that because the phase-angle should be a sweepable
> parameter. I can probably make this work by cross-fading the computed
> FIR when the parameter changes.

Provided that what you want is the same phase angle on all frequencies,
you can easily make it 'sweepable' without recomputing the IR.

The N-point hilbert IR will give you 90 degrees plus a delay of N/2
samples [1]. So in a second channel make a delay of N/2 samples.
Then by combining both in the right proportions you can make any 
phase angle you want. For A degrees, just do

out = cos (A) * D + sin (A) * H, where D and H are the delay and
hilbert convolution outputs respectively.

[1] It's not possible to do the phase shift without additional
delay. It's more or less the opposite of a linear phase filter:
for 90 degrees the IR must be anti-symmetric. The length of the
hilbert IR determines its bandwidth; the 3 dB points will be
near FS / N and FS / 2 - FS / N. So ideally instead of a delay
for the second channel you should use a FIR with the same 
magnitude response as the Hilbert IR.
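A rough numerical sketch of that scheme (plain Python, a simple
Blackman-windowed design; the tap count and window are arbitrary
choices here, and a production implementation would be more careful):

```python
import math

# Sweepable phase shifter: an N-tap Hilbert FIR gives 90 degrees plus
# N//2 samples of delay; combined with a plain N//2 sample delay any
# angle A is obtained as  out = cos(A) * delayed + sin(A) * hilbert.

def hilbert_fir(n):
    """Odd-length Hilbert transformer IR, Blackman-windowed."""
    m = n // 2
    h = []
    for i in range(n):
        k = i - m
        c = 2.0 / (math.pi * k) if k % 2 else 0.0   # 2/(pi*k) on odd taps
        w = (0.42 - 0.5 * math.cos(2 * math.pi * i / (n - 1))
             + 0.08 * math.cos(4 * math.pi * i / (n - 1)))
        h.append(c * w)
    return h

def phase_shift(x, angle_deg, n=255):
    m = n // 2
    h = hilbert_fir(n)
    ca, sa = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    y = []
    for i in range(len(x)):
        # H: convolution with the Hilbert IR;  D: plain N//2 sample delay
        acc = sum(h[k] * x[i - k] for k in range(n) if 0 <= i - k < len(x))
        d = x[i - m] if i >= m else 0.0
        y.append(ca * d + sa * acc)
    return y
```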

Ciao,

-- 
FA



Re: [LAD] Phase rotation

2021-08-30 Thread Fons Adriaensen
On Sun, Aug 29, 2021 at 10:03:20PM +0200, Robin Gareus wrote:
 
> This works well, except for the first FFT bin: 0 Hz, DC offset. If the
> phase-shift changes the average DC level of the signal there is a
> discontinuity.

To understand why you can't change the phase of DC at an even more
fundamental level, imagine the complex plane, and a vector starting
at the origin and rotating anti-clockwise with frequency F.

So its angle will be

A(t) = 2 * pi * F * t + P for some F and P, where P (or sometimes
A(t), depending on context) is called the phase.

Your 'signal' is the endpoint of the vector, clearly a complex value.

To get a real-valued signal you need a second vector, the mirror image
of the first one w.r.t. the real axis, and so rotating clockwise, i.e.
with a negative frequency.

The sum of the two vectors is then always purely real.

That's why it is said that real-valued signals always contain both
positive and negative frequencies with equal magnitude.

The angle of the second vector is -A(t). So its phase is -P.

So the condition for having a real-valued signal is that the phase
for the negative frequency is minus the phase of the positive one,
and both have the same amplitude. The only way to satisfy this
condition for 0 Hz is that the phase must be zero.

-- 
FA
 


Re: [LAD] Phase rotation

2021-08-30 Thread Fons Adriaensen
On Sun, Aug 29, 2021 at 10:03:20PM +0200, Robin Gareus wrote:
 
> During the last days, I looked into phase-rotation: components of a signal
> are delayed differently depending on their frequency.
 
> This works well, except for the first FFT bin: 0 Hz, DC offset. If the
> phase-shift changes the average DC level of the signal there is a
> discontinuity.

If you are working with real-valued signals, you can't change the phase
of DC or the Nyquist frequency (FS/2).
You can if you are using complex-valued signals, as often used in SDR.

To understand this, look at the real FFT. Assume the input is 256 samples.
The result is also 256 real values, which are in fact 129 complex ones.
Two of those, for DC and FS/2 will be purely real, with the imaginary
part zero. They have to be, as both sin (i * 0) and sin (i * pi) are zero
for all integer values of i. So we get in fact 256 and not 258  
independent real values. This must be so as the FFT has an exact
inverse, so information can neither be lost nor added.
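That counting argument is easy to verify numerically. A toy DFT (not
an FFT, and purely illustrative) showing that the DC and FS/2 bins of
a real-valued signal have zero imaginary part:

```python
import cmath
import math

# Naive DFT of a real-valued 16-sample signal: bins 0 (DC) and 8 (FS/2)
# come out purely real, as argued above.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

x = [0.5 + math.sin(2 * math.pi * 3 * t / 16) for t in range(16)]
bins = dft(x)
print(abs(bins[0].imag) < 1e-9, abs(bins[8].imag) < 1e-9)  # True True
```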

To implement this you need more than just FFT and IFFT. Using only
those on each block would amount to circular convolution, while
what you need is linear convolution.  Changing only the phase of
a signal is just a special case of filtering, which means the output
will be longer than the input.

You could use jconvolver to do this. Define the phase shift in the
frequency domain (i.e. as if it were the result of an FFT), do the
inverse FFT and use the result as the IR for the convolver.

In fact jconvolver can generate 90 degree phase shifters for you,
see the 'hilbert' command in the README. Combined with an equivalent
delay this can be used to obtain any phase shift you want.

Now don't believe that phase shifting a signal will always result
in a waveform with a lower peak/RMS ratio. It could very well
have the opposite effect. Any phase shift can be undone by
just another one. If one of those decreases the peak/RMS
ratio, then the other will increase it... There is a bit more
to it that they won't tell you in ads.


Greetings to all from sunny Crete.

-- 
FA



[LAD] web expert advice wanted

2021-08-16 Thread Fons Adriaensen
Hello all,

I need some advice from a web protocols expert...

I want to record the _audio_ part of 



for a long time without wasting bandwidth on the video part,
and avoiding being 'timed out' or blocked by 'please whitelist
our ads' popups.

Anyone has an idea of how to do this ?

TIA,

-- 
FA



[LAD] update of zita-jacktools

2021-07-25 Thread Fons Adriaensen
Hello all,

zita-jacktools-1.5.3 is now available at the usual place:



Changes:

Added missing method set_rotation() to JackAmbrot.


Ciao,

-- 
FA



  1   2   3   4   5   6   7   8   9   10   >