Re: [LAD] AVB not so dead after all

2015-07-21 Thread Thomas Vecchione
Related to this topic, I would recommend reading through this...

https://groups.google.com/forum/#!topic/theatre-sound-list/WbysqMHs6iw

AVB isn't dead, no, but it certainly isn't close to dominant at this point,
at least on my side of the pond.  It may be a different situation on the
other side, no idea.  That said, it faces a very uphill battle to displace
Dante here and see decent professional usage.

Then again, if AES67 interoperability comes into play, it may be a moot
point, as ideally you would be able to communicate between the two protocols.

Seablade

On Mon, Jun 15, 2015 at 7:05 PM, Len Ovens  wrote:

>
>
> Looking at the MOTU AVB endpoints, I see MIDI ports on them. None of the
> AVB docs I have read (yet) show MIDI transport. Is this then just RTP-MIDI
> on the same network? It almost seems that the MIDI is visible to the USB
> part only.
>
> MOTU recommends connecting one of the AVB boxes to the computer via USB or
> Thunderbolt and streaming all AVB channels through that connection. So this
> would mean that the box closest to the computer is the audio interface.
> With Thunderbolt the maximum channel count is 128, with any mix of i/o
> (for example 64/64 i/o).
>
> Connection to the computer via AVB:
>
> http://www.motu.com/avb/using-your-motu-avb-device-as-a-mac-audio-interface-over-avb-ethernet/
>
> shows some limitations:
>  - The sample rate can be 48k and its multiples, but not 44.1k and its
> multiples
>  - The Mac will insist on being the master clock
>  - The Mac locks each enabled AVB device for exclusive access.
> (The Mac can talk to more than one AVB device, but they can't
> talk to each other or be connected to each other while the Mac
> has control.)
>  - The maximum channel count is still 128, at least on a late 2013 Mac
> Pro; earlier models should not expect more than 32 total channels (mix of i/o)
>  - MOTU AVB devices set all streams to 8 channels; no 2-channel streams
> are allowed.
>  - Because the AVB network driver on the Mac looks like a sound card, audio
> software needs to be stopped before changing channel counts (adding or
> removing interface boxes).
>
> I think that a Linux driver has the potential to do better in at least
> some cases. I personally would be quite happy with a 48k sample rate only,
> but I am sure someone will make it better. Linux does not have to be the
> master clock unless it must sync to an internal card that only has some
> kind of sync out but can't lock to anything (like some of the onboard AIs
> that have an S/PDIF out). In the Linux case, the AVB AI may well be the
> only AI in use, and the internal AI can't be synced to anyway. With JACK,
> channels can come and go with no ill effect except that a connection
> vanishes. Channels can be added and removed even within a JACK client.
> This _should_ (logically) be possible in a JACK backend, but maybe not
> wise. A sync-only backend that takes its media clock from the AVB clock
> may be better, as this would add stability in case of an AVB box being
> disconnected. I do not know if JACK backends can deal with zero or more
> channels whose number changes, but a client dying because its remote AI
> vanished would not crash JACK. The problem with using clients for the AI
> is that auto-connecting apps look for system/playback_1 and _2. Even more
> JACK-aware apps like Ardour would have you looking in "other" for more
> inputs.
>
> Anyway, getting AVB working with Linux is first (even two channels).
>
> --
> Len Ovens
> www.ovenwerks.net
>


Re: [LAD] Details about Mackie Control or Mackie Human User Interface

2015-07-21 Thread Len Ovens

On Tue, 21 Jul 2015, Takashi Sakamoto wrote:


MCP and HUI both consist of "usual MIDI messages".

There are a couple of sysex messages used for hand-shaking +
discovery; everything else is just normal CC and note messages.


I also know that MCP and HUI are a combination of MIDI messages. What I
am concerned about is the sequence. If the sequence requires device
drivers to keep state (i.e. the current message has a different meaning
depending on previous messages), I will have much more work to do. That
is the sense in which I use the word 'rule'.


The MCP surface is generally pretty "dumb". Each button sends a note on 
and a note off (note on with velocity 0) for each press. Each LED takes 
the same note number, with 127 for on, 1 for flash, and 0 for off. The 
pitch bends are just what they seem and take the same info back in to 
operate the motorized faders. The surface does not really keep any state 
information at all. The encoders send direction and delta, not a CC value; 
the encoder display should only be sent as 4 bits, and the CC number is 
offset by 0x20 (it looks like).
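
For illustration, here is a rough sketch in C of how those raw messages 
might be classified. The layout (buttons as notes, faders as pitch bend 
per MIDI channel, encoders as CC 0x10-0x17 with the sign in bit 6) is my 
reading of the Logic Control manual linked below, so treat it as an 
assumption rather than a verified implementation:

    #include <stdio.h>
    #include <stdint.h>

    static void mcp_decode(const uint8_t msg[3])
    {
        uint8_t status = msg[0] & 0xf0;
        uint8_t chan   = msg[0] & 0x0f;

        if (status == 0x90) {
            /* button: note on, velocity 0 means release */
            printf("button 0x%02x %s\n", msg[1], msg[2] ? "down" : "up");
        } else if (status == 0xe0) {
            /* fader: one pitch bend per MIDI channel, 14-bit position */
            int pos = ((msg[2] << 7) | msg[1]) - 8192;
            printf("fader %u -> %d\n", chan, pos);
        } else if (status == 0xb0 && msg[1] >= 0x10 && msg[1] <= 0x17) {
            /* encoder: tick count in the low bits, direction in bit 6 */
            int delta = msg[2] & 0x3f;
            if (msg[2] & 0x40)
                delta = -delta;
            printf("encoder %u delta %d\n", msg[1] - 0x10, delta);
        }
    }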


There are buttons labelled bank up/down, but they are really just MIDI 
messages and expect the software to figure out any banking scheme (or 
none). Each unit needs a separate MIDI port... this is true even for units 
that have 16 faders... they are really two units with two MIDI ports.


Here is a link. Yes, the manual is old, but the spec is still valid. I 
would judge that if this manual did not exist, MCP surfaces would have 
gone the way of the Mackie C4, which could have been a nice box... but 
open protocols are a must for adoption, even in the Windows/OSX world.


http://stash.reaper.fm/2063/LogicControl_EN.pdf

There are some reasons not to use MCP to control an audio card:
 - If I spend over $1k on a surface, I will not be using it for the audio 
card; it will be for the DAW. Switching a surface from one application to 
another in the middle of a session is just bad.
 - There are only 8 channels, ever, so banking becomes a must. Including 
banking in an audio interface control is a pain for any coder who wants 
to write software to control the AI (that is, everyone). Many common AIs 
have 18 or more channels in and out... 36 or more faders required.
 - DAWs do not include audio interface control (levels etc.) anyway, 
because they are all different, and the AI channel being used for any one 
DAW channel may be shared or changed during the session, making a mess 
unless the AI control is a separate window... in which case a separate app 
is easier.


I think one MIDI CC per gain (use NRPN if you must, but really 127 
divisions is enough if mapped correctly and smoothed). One note on/off 
per switchable control. All assigned sequentially from 0 up (starting at 
1 may make things easier; there is some poorly written code that does not 
see note 0... maybe that was mine :) ).


While it would seem possible to use note off messages as additional 
switches, be aware that some software internally stores note on with 
velocity 0 as a note off event (this is not wrong, nor a bug).



--
Len Ovens
www.ovenwerks.net



Re: [LAD] Details about Mackie Control or Mackie Human User Interface

2015-07-21 Thread Paul Davis
On Tue, Jul 21, 2015 at 9:53 AM, Takashi Sakamoto
 wrote:

> I also know that MCP and HUI are a combination of MIDI messages. What I
> am concerned about is the sequence. If the sequence requires device
> drivers to keep state (i.e. the current message has a different meaning
> depending on previous messages), I will have much more work to do. That
> is the sense in which I use the word 'rule'.

First of all, building support for *interpreting* incoming MIDI into a
device driver is probably a bad idea on a *nix-like OS. It is just the
wrong place for it. If there's a desire to have something act on
incoming MCP or HUI messages, that should be a user-space daemon
receiving data from the driver.

This means that the driver doesn't care about anything other than
receiving a stream of data from the hardware and passing it on, in the
same order as it was received, to any processes that are reading from
the device. The device driver does not "keep state" with respect to
incoming data, only the state of the hardware.
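
A minimal sketch of that split, assuming ALSA's rawmidi API and a
placeholder device name ("hw:1,0"): the driver only hands over bytes,
and all the MCP/HUI interpretation lives in this user-space process.

    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_rawmidi_t *in = NULL;
        unsigned char buf[256];
        ssize_t n;

        /* "hw:1,0" is a placeholder; use the surface's actual port */
        if (snd_rawmidi_open(&in, NULL, "hw:1,0", 0) < 0)
            return 1;

        /* The driver keeps no protocol state: it delivers bytes in the
         * order received; interpreting MCP/HUI happens here. */
        while ((n = snd_rawmidi_read(in, buf, sizeof buf)) > 0) {
            /* feed buf[0..n-1] into an MCP/HUI parser */
        }

        snd_rawmidi_close(in);
        return 0;
    }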

>
> Well, when DAWs and devices successfully establish the 'hand-shaking',
> must they maintain state, as with TCP?

Discovery in MCP and HUI is a much simpler system. In many
implementations, there is no hand-shake or discovery at all: the
device just powers up and can start receiving and transmitting
immediately. There is no state to maintain, no keep-alive protocol.
About the only thing that can sometimes be gained from the handshake
is determining the type/name of the device, but this is actually
rarely delivered.

> Currently, ALSA middleware has no framework for Open Sound Control.

Let's hope it remains that way.


Re: [LAD] Details about Mackie Control or Mackie Human User Interface

2015-07-21 Thread Takashi Sakamoto
Hi Paul and Len,

Thanks for your replies.

On Jul 20 2015 22:41, Paul Davis wrote:
> On Mon, Jul 20, 2015 at 9:33 AM, Takashi Sakamoto
>  wrote:
> 
>> Well, are there developers who have enough knowledge of the MIDI
>> messaging rules for Mackie Control or the Mackie Human User Interface (HUI)?
> 
> not sure what you mean by "rule". I'm intimately familiar with both MCP and 
> HUI.

Great.

>> As far as I know, for these models there are three types of
>> converter: usual MIDI messages such as Control Change (CC), Mackie
>> Control, and Mackie Human User Interface; I have little knowledge of
>> the latter two types.
> 
> MCP and HUI both consist of "usual MIDI messages".
> 
> There are a couple of sysex messages used for hand-shaking +
> discovery; everything else is just normal CC and note messages.

I also know that MCP and HUI are a combination of MIDI messages. What I
am concerned about is the sequence. If the sequence requires device
drivers to keep state (i.e. the current message has a different meaning
depending on previous messages), I will have much more work to do. That
is the sense in which I use the word 'rule'.

Well, when DAWs and devices successfully establish the 'hand-shaking',
must they maintain state, as with TCP? And in the 'discovery', must
devices retrieve information from DAWs? Furthermore, in the 'rule', are
transactions (a set of requests/responses) used?

On Jul 21 2015 01:25, Len Ovens wrote:
> It is (as Paul has said) straight MIDI. The best guide I know of is the
> "Logic Control User's Manual" from 2002. The MIDI implementation starts
> on page 105. The only thing that is maybe a bit odd is that there are
> encoders that use CC increment and decrement instead of absolute values,
> but any software written for these surfaces is aware of it.

It's nice, thanks. But the metering is one of my headaches...

On Jul 21 2015 01:25, Len Ovens wrote:
> You will note the use of pitch bend for levels. CC has only 127 values,
> which can give "zipper" artifacts. If using CC, the values need to be
> mapped to dB per tick and/or have software smoothing. The top 50 dB of
> the range are most important.

I think you mean the rough approximation formula in acoustical
engineering for human loudness perception (i.e. ISO 226:2003).

On Jul 21 2015 01:25, Len Ovens wrote:
> What it finally comes down to is that you get to make up your own MIDI map.
> OSC might be a better option, as the values can be floats and there is no
> limit to the number of controls (MIDI has only 127 CCs, and some of those
> are reserved).

Currently, ALSA middleware has no framework for Open Sound Control; it
just has implementations for MIDI-like messages. For now, I use the
rawmidi interface for my purpose. The MIDI messages will also be
available for userspace applications to read via the ALSA sequencer
functionality.


Thanks

Takashi Sakamoto