[linux-audio-dev] Re: Audio synchronization, MIDI API

2004-09-13 Thread John Lazzaro
On Sep 13, 2004, at 9:01 AM, Eric Rz wrote:
So at what level in the tcp/ip stack does a collision get detected? From
what I understand, if there is a collision on a network segment each end
will back off for a randomly chosen time and then retransmit. Is this at
the ethernet, IP, or TCP level?
If you're designing the interface between Layer-2 (Ethernet,
Wi-Fi, what have you ...) and IP, as a rule, the right thing to
do is to pass through packet loss rates in the 1-2% range
to the IP layer.  If the Layer-2 sees loss rates significantly
above that on a regular basis, IP applications are known
to not cope well, and so the right thing to do is to make
the Layer-2 appear to have a 1-2% packet loss rate, by
using techniques like retransmission or FEC.
Modern Ethernet (what you buy new from Linksys or Netgear
or Cisco in 2004) is switched, not shared.  It achieves 1-2% loss
rates extremely easily.  So, stacks usually pass through
the tiny loss rates of switched Ethernet up to the IP layer.
This means that yes, occasionally you will see lost packets
if you run a UDP application (UDP is a thin layer on top of IP,
one IP packet to the OS == one UDP packet to an app)
on a local switched Ethernet.  I've seen it with real hardware.
Usually, the network is having a burst of traffic, and
something -- probably the receiving network stack --
gives up and throws away a packet.  But, it's very
rare -- 0.1% or less, if I had to put a number on it.
But if that 0.1% was a NoteOff sent to a Hammond
organ patch, you care :-).  Thus, the recovery journal
technology in RTP MIDI.
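To make the idea concrete, here is a rough sketch of how a receiver
might react to loss -- this is only the concept, with made-up names
(journal_repair() is hypothetical), not the actual RTP MIDI recovery
journal wire format:

    /* Conceptual sketch only; see the RTP MIDI I-D for the real journal
     * format.  The receiver notices loss via the RTP sequence number and
     * repairs state from the journal in the next packet that arrives,
     * instead of waiting for a retransmission. */
    #include <stdint.h>

    void journal_repair(const uint8_t *journal, int len);  /* hypothetical */

    struct rtp_midi_rx {
        uint16_t expected_seq;   /* next RTP sequence number we expect */
        int      synced;         /* have we seen any packet yet?       */
    };

    void on_packet(struct rtp_midi_rx *rx, uint16_t seq,
                   const uint8_t *journal, int journal_len)
    {
        if (rx->synced && seq != rx->expected_seq) {
            /* one or more packets lost: rebuild channel state (pending
               NoteOffs, sustain pedal, etc.) from this packet's journal */
            journal_repair(journal, journal_len);
        }
        rx->expected_seq = (uint16_t)(seq + 1);   /* wraps mod 2^16 */
        rx->synced = 1;
        /* ... then execute the MIDI list carried in this packet ... */
    }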
Shared media wired Ethernet technology got us through the
80's.  Which was a good thing :-).  But it really is a technology
for the history books now ... it's really good history, it's good
to know about it because it was such a classic design, but
it's not what people mean anymore when they say wired
Ethernet.  All that is left from that era is the bit-field -- the
pattern of bits in the packet -- and the semantics of the bits.
Modern wired Ethernet is switched Ethernet.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


[linux-audio-dev] Re: linux-audio-dev Digest, Vol 11, Issue 35

2004-08-20 Thread John Lazzaro
On Aug 20, 2004, at 6:43 PM, Paul Davis wrote:
there may be people who are sight-impaired who manage to make a living
as an audio engineer, but i would guess that i could count them all on
the fingers of one hand.
Sound on Sound (or maybe it was Mix ...) did a feature article on
two blind engineers who built their own studio to work out of ... it
was a really interesting article, but I can't seem to locate it on
the web at the moment ... maybe someone can find it and post the link
(it was a free article, I'm not a subscriber to either).

---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


Re: [linux-audio-dev] Audio synchronization, MIDI API

2004-08-18 Thread John Lazzaro
On Aug 18, 2004, at 2:15 AM, Paul Davis wrote:
and in fact, jlc and i have done some tentative experiments with
*live network audio* using jackd and ices w/jack support using only
our DSL connectivity. the model that ices uses is more or less
perfect, i think. just a client with some ports that happen to send
audio to the rest of the world. only the user knows that, other apps
just think it's a regular client. jack doesn't care either, so everyone
who has a different idea of how to actually connect the endpoints can
do their own thing and everyone can coexist.
I'd really suggest considering the pros of integrating IETF tools
(SIP, RTSP, RTP) into this scheme.  You could still use jack
as your application layer, but instead of engineering your own
transport layers for session management (SIP, RTSP) and media
(RTP), you'd use IETF protocols -- just like you use TCP instead of
re-inventing it for each app that needs a reliable bytestream.
We're seeing the IETF stack used this way more and more in the
commercial world -- the wireless audio servers (Apple Airport
Express, etc) use RTSP and RTP.
Good reasons to do this:
  -- You may think you're trying to solve a small well-defined problem,
  but if Jack is a success, people are going to extend it to work in
  all sorts of domains.  The IETF toolset has been stretched in lots
  of ways by now -- interactive and content-streaming, unicast and
  multicast, LAN and WAN, lossy and lossless networks, etc -- and
  it's known to adapt well.  Traditional go-it-alone companies, like
  Apple, use it all over the place -- iChat AV and Quicktime both use
  RTP, iChat AV uses SIP, Quicktime uses RTSP.

  -- Modern live-on-stage applications use video, and RTP has a
  collection of video codecs ready to go.  Ditto for whatever other
  sort of uncompressed or compressed media flow you need.
  -- There are tools for synchronization (RTCP mappings of NTP
  and RTP timestamps), tools for security (SRTP), tools for
  all sorts of things someone might need to do someday.
  -- The IPR situation is relatively transparent -- you can go to the
  IETF website and look at IPR filings people have made on each
  protocol, and at least see the non-submarine IPR of the people
  who actually developed the protocols -- you can't be a WG member
  and keep submarine patents hidden from the IETF.

  -- Most of the smart people who work on media networking in all of
  its forms do not subscribe to LAD.  The easiest way to tap into
  their knowledge is to use their protocols.  And likewise, the smart
  people here can take their results and turn them into standards-track
  IETF working group items, and help make all media apps work better.

---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


Re: [linux-audio-dev] Audio synchronization, MIDI API

2004-08-18 Thread John Lazzaro
On Aug 18, 2004, at 12:38 PM, [EMAIL PROTECTED] wrote:
  -- There are tools for synchronization (RTCP mappings of NTP
  and RTP timestamps), tools for security (SRTP), tools for
  all sorts of things someone might need to do someday.
this does seem very useful. there's no way to transport time between 2
jackd instances right now, and it would be wise to reuse existing
technology whenever this is added. otoh, it has to be a bit more
extensive since we need music and/or smpte time rather than just
wallclock time.
One way to do this is to have a multi-stream session, with one of
the streams being RTP MIDI, using System Exclusive commands to
do MTC, or MIDI sequencer messages, or MMC, for your timing.  So,
this would recreate the current hardwired world, but using RTP MIDI
to do pseudo-wire emulation of the MIDI cable carrying MTC ... the
RTP MIDI stream's RTCP would carry NTP timestamps, as would the RTCP
of all the audio RTP streams, and the receiver uses these common NTP
timestamps to derive cross-sync between the MTC sync information in
RTP MIDI and the RTP timestamps on the audio streams.
Of course, this only works as well as your NTP sync ... in an ideal
world, a single server generates these streams off of a single NTP
clock, or at least you have a very good NTP daemon keeping things in
sync.
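As a concrete illustration of the arithmetic (a minimal sketch based on
the RFC 3550 sender-report mechanism, with made-up struct and function
names, not code from any real implementation):

    /* Each RTCP sender report pairs an NTP wallclock time with the RTP
     * timestamp of the same instant; that pair maps any later RTP
     * timestamp in that stream onto the common NTP timeline. */
    #include <stdint.h>

    struct sr_mapping {
        double   ntp_secs;   /* NTP time from the last sender report       */
        uint32_t rtp_ts;     /* RTP timestamp sampled at that same instant */
        double   clock_hz;   /* nominal media clock rate of this stream    */
    };

    double to_ntp(const struct sr_mapping *m, uint32_t rtp_ts)
    {
        uint32_t delta = rtp_ts - m->rtp_ts;  /* unsigned math handles wrap */
        return m->ntp_secs + (double)delta / m->clock_hz;
    }

Two streams (say, the RTP MIDI stream carrying MTC and an audio stream)
line up by comparing their to_ntp() values -- which is exactly why the
quality of the senders' NTP clocks sets the quality of the cross-sync.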
Roger Dannenberg gave a good talk on these issues at the OSC-fest
here at Berkeley last month ...

---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


Re: [linux-audio-dev] Audio synchronization, MIDI API

2004-08-16 Thread John Lazzaro
On Aug 16, 2004, at 12:58 AM, [EMAIL PROTECTED] wrote:
Juan Linietsky [EMAIL PROTECTED] writes:
I tried this myself, on a 100mbit ethernet switch ... while for single
instruments it seems okay, and latency is fine, playing full complex
midi pieces in realtime had a lot of jittering...
Small playout buffers help a lot ... it doesn't take many milliseconds
of buffering (small single-digit) to make a big difference.
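For illustration, a minimal sketch of what a fixed playout buffer means
here (names and the 3 ms figure are just placeholders, not from any
particular implementation):

    /* A received command is not played on arrival; it is scheduled for
     * its send time plus a fixed playout delay, so any network jitter
     * smaller than that delay disappears (at the cost of a few ms of
     * extra, but constant, latency). */
    #define PLAYOUT_DELAY_MS 3.0       /* "small single-digit" ms */

    struct midi_event {
        double send_time_ms;           /* timestamp recovered from the packet */
        unsigned char cmd[3];
    };

    double playout_time_ms(const struct midi_event *e)
    {
        return e->send_time_ms + PLAYOUT_DELAY_MS;
    }

The audio/MIDI callback then executes each queued event whose playout
time has been reached, rather than rendering commands the moment
packets arrive.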

Oh, time for the obligatory RTP MIDI plug:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-rtp-midi.txt
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt
Coming closer to Last Call, Dominique Fober recently reviewed
the documents for AVT, and I'm in the process of revising -05.txt
to take his advice into account. That revision might actually be the
Last Call, we shall see ... subscribe to [EMAIL PROTECTED] if you
want to follow along.
Also, our AES presentation got in:
http://www.aes.org/events/117/papers/E.cfm
So we'll be talking in San Francisco in October, if anyone is in
the neighborhood ... AES only comes to San Francisco once
every 5 years, and so there are a lot of fun things going on at
the conference --
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


[linux-audio-dev] Re: Audio over Ethernet / Livewire

2004-06-23 Thread John Lazzaro
From: Steve Harris [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] Audio over Ethernet / Livewire
[...]  RTP-MIDI [...]
Thanks, Steve.
A lot of work has gone into RTP MIDI, to get consensus across
a wide range of interests -- the IETF, MPEG, the MMA, and the
computer music community.  I should note we're not there yet
(i.e. we haven't started the vetting process that ends, if successful,
in RFCs), but we're getting ever closer to Last Call (the start of the
process).  The most recent documents are here:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-rtp-midi.txt
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt
Also note that the RTP-MIDI network stack in sfront:
http://www.cs.berkeley.edu/~lazzaro/sa/index.html
has been re-licensed under the BSD license; the stack
resides in sfront/src/lib/nsys/.  I also have the stack in a
tar file that only contains BSD-licensed files -- let me know
if you need a copy and I'll send it along.  The stack is more
for reading than for using -- I think the best RTP MIDI
implementations will start with a clean slate.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


[linux-audio-dev] Re: Audio over Ethernet

2004-04-14 Thread John Lazzaro
On Apr 14, 2004, Anders Torger wrote:
Thus, I think it is necessary to implement something operating on the
ethernet level to get best performance in terms of throughput and
latency.
/Anders Torger
If what you mean by operating at the ethernet level is
no Cobra-like hardware to help, but putting data directly
into Ethernet frames w/o IP/RTP headers, then it's unclear to me that
working at the RTP/IP level is going to hurt you much.  The
simplest implementation would have RTP/IP header overhead,
but there are nice header compression schemes that get rid of it:
http://www.ietf.org/rfc/rfc2508.txt

and its improved versions.  By using RTP, you get a lot of
protocol design work you might otherwise need to do yourself,
both within RTP (like RTP MIDI) and surrounding it
(session management, etc).
One big thing you need to worry about is clocks -- unlike
a protocol like AES/EBU or S/PDIF, packet-based media is
not sending an implicit clock along with the data.  So, the
nominal sender sampling rate can't be precisely linked to
the nominal receiver sampling rate in a simple way.  The
consequence is that either too much data piles up at the receiver,
or not enough.  One solution to this problem is to continuously
run a sample-rate converter at the receiver in software,
to keep the two sampling rates locked.  See:
http://www1.ietf.org/mail-archive/working-groups/avt/current/msg00569.html

and use Thread Next to cycle through the discussion; it
goes on a ways and lots of interesting folks drop in with info.
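A minimal sketch of the receiver-side idea (illustrative only, with
made-up names and a deliberately naive gain; a real implementation
would filter the estimate much more carefully):

    /* Nudge the software sample-rate converter's ratio so that the
     * receive buffer hovers around a target fill level.  The ratio
     * differences involved are tiny (parts per million), which is what
     * keeps the correction inaudible. */
    double update_ratio(double ratio, double buffer_fill_frames,
                        double target_fill_frames)
    {
        double error = buffer_fill_frames - target_fill_frames;
        return ratio * (1.0 + 1e-6 * error);
        /* ratio > 1.0: consume the incoming stream slightly faster
           (the buffer was filling up); ratio < 1.0: slightly slower. */
    }

Run once per callback against a jitter buffer's fill level, this keeps
a free-running sender clock and a free-running receiver clock locked on
average, without either side ever touching its hardware sample rate.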
A separate issue for your many streams case is synchronizing
the streams to each other, in the case where not all share the
same nominal clock.  RTP has tools for this, based on
associating NTP timestamps from a common clock to each
independent stream, that get used for audio/video lipsync,
and can be repurposed here as well.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


[linux-audio-dev] Re: Audio over Ethernet

2004-04-13 Thread John Lazzaro
 Anders Torger [EMAIL PROTECTED] writes:
Is there any work done for transporting digital audio over ethernet?
For example a library, an open standard or something?

http://www.ietf.org/html.charters/avt-charter.html

---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


[linux-audio-dev] Re: is this, or this not, RTP

2004-02-26 Thread John Lazzaro
 can
be safely frozen, saving both processing power and disk bandwidth.
sure, but that's a subset of all busses. it's not a bus per se.

When I finish comping the vocals for a chorus, I want to be left with
1 fader, and 1 editable audio track, for the chorus.  If I need to
make one of the voices softer, I can bring up the underlying tracks
within a second (which is *at least* how long it usually takes me to
find a single fader in a 48-channel mix).  While I'm making
adjustments, Tinara will read all the separate chorus tracks off the
disk, mixing them in RT.  When I move back one layer in the mix
hierarchy (thereby indicating that I'm finished adjusting things),
Tinara will begin re-rendering the submix in the background whenever
the transport is idle.
have you actually experienced how long it takes to re-render?
steve's suggestion is an interesting one (use regular playback to
render), but it seems to assume that the user will play the session
from start to finish. if you're mixing, the chances are that you will
be playing bits and pieces of the session. so when do you get a
chance to re-render? are you going to tie up disk bandwidth and CPU
cycles while the user thinks they are just editing? OK, so you do it
when the transport is idle - my experience is that you won't be done
rendering for a long time, and you're also going to create a surprising
experience for the user at some point - CPU utilization will vary
notably over time, in ways that the user can't predict.
you also seem to assume that the transport being stopped implies no
audio streaming by the program. in ardour (and most other DAWs), this
simply isn't true. ardour's CPU utilization doesn't vary very much
whether the transport is idle or not, unless you have a lot of track
automation, in which case it will go up a bit when rolling.
The basic idea is to turn mixing into a process of simplification.
When I'm finishing up a mix, I don't want to deal with a mess of
tracks and buses, with CPU power and disk bandwidth being given to
things I haven't changed in days.  I want to be able to focus on the
particular element or submix that I'm fine-tuning - and have as much
DSP power to throw at it as possible.
the focusing part seems great, but seems to be more of a GUI issue
than a fundamental backend one. it would be quite easy in ardour, for
example, to have a way to easily toggle track+strip views rather than
display them all.
the DSP power part seems like a good idea, but i think it's much, much
more difficult than you are anticipating. i've been wrong many times
before though.
and btw, the reason Ardour looks a lot like PT is that it makes it
accessible to many existing users. whether or not ardour's internal
design looks like PT, i don't know. i would hope that ardour's
development process has allowed us to end up with a much more powerful
and flexible set of internal objects that can allow many different
models for editing, mixing and so forth to be constructed. the backend
isn't particularly closely connected in any sense, including the
object level, to the GUI.
--p




---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


[linux-audio-dev] RTP + Sorry about last posting ...

2004-02-26 Thread John Lazzaro
Hi everyone,

Sorry about sending random ASCII to the list, I
was editing an is this, or is this not, RTP reply and I
clicked the wrong button (oops).  Basically, I should
note that:
[1] Low-latency itself is not a problem with RTP --
if one runs RTP over a transport with low latency,
it can fully exploit that low latency, see:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf

[2] RTP came out of the multicast WAN videoconferencing
world (early 1990s) and then slowly migrated into other
domains (like content-streaming, etc).  The Peak folks, like
the MIDI Show Control folks, came to the problem from the
LAN direction.  That said, any LAN that is running IP can
set up a link-local multicast group and run RTP on top of it,
and access the broadcast nature of the LAN environment.
[3] The problem Peak solves in hardware is akin to the
problem S/PDIF and AES/EBU solve in hardware -- if the
sender and receiver have free-running clocks, sooner or
later underflow or overflow occurs, and so if your goal is
to never click, you have to address the issue somehow.
Note this isn't an issue with non-continuous-media RTP
payload formats like RTP MIDI, as the command nature
of MIDI lets you slip time without artifacts.  Neither is it an
issue for voice codecs used in conversational speech,
because you can resync at the start of each voice-spurt
(packets don't get sent for the side not talking -- this is part
of the efficiency advantage of VoIP over switched-circuit telephony).
For continuous-stream audio over RTP, the state of the art
to avoid this problem is a software sample-rate converter on
the receiver end, which speeds up or slows down the
sample rate of the sent stream by tiny amounts to null out
the tiny differences from nominal in the sender and receiver
sample-rate clocks.  Quicktime does this, according to a thread
on the AVT mailing list a few years ago that discussed this issue.
Note this method isn't modifying the actual sender's sample rate;
on the contrary, it's modifying the receiver's actual sample rate
to match the intentions of the sender.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


Re: [linux-audio-dev] .mid files playing in Linux games

2004-02-01 Thread John Lazzaro
On Feb 1, 2004, at 4:01 AM, [EMAIL PROTECTED] wrote:
Ryan Underwood [EMAIL PROTECTED] wrote:

Module files are usually a reasonable compromise between quality and
size for soundtracks.  The disadvantage of tracker files compared to
MIDI is that they are larger since they contain the samples.  The
advantage is that you know they will sound identical no matter where
they are played, whether or not the end user has MIDI hardware.
MPEG-4 Structured Audio (SA) was designed to solve this problem, in
that it's normative (sounds the same everywhere), but if you have
algorithmic synthesis techniques you like, there's no need for samples,
and so the files can be very small.  SA has been through a few
Corrigenda (i.e. bug-fixes to the standard), so it's a pretty stable
standard now.  See:

http://www.cs.berkeley.edu/~lazzaro/sa/index.html

Note: I'm not an MPEG member, the paragraph below is my own
personal opinion, and doesn't reflect MPEG's view on the topic:

   I'm starting to think that what could help SA find its way into
   applications is a strict sub-setting of the language -- pick a
   simple-to-implement subset of keywords and opcodes that solves a lot
   of useful problems, and code up interpreters and compilers that
   accept only the subset.  The content would be upward-compatible
   with full SA decoders (like sfront), but if the subset was well
   chosen, the complexity of implementing SA would shrink to the point
   where a motivated undergrad could do it as a senior project.  The
   hope would be that once there was momentum, the people-power to
   do full implementations would appear, or the will to standardize
   the subset in MPEG would appear.

---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


[linux-audio-dev] OSC, service discovery ...

2004-01-26 Thread John Lazzaro
Hi everyone,

Just wanted to put in a plug for IETF-based solutions
to doing service discovery for OSC -- basically, this would
entail using SIP:
http://www.ietf.org/rfc/rfc3261.txt

	for session management, and SDP:

http://www.ietf.org/internet-drafts/draft-ietf-mmusic-sdp-new-15.txt

to describe the sessions themselves.  To specify OSC
in SDP, you'd use UDP or TCP as a transport specifier on
the media line, and make up (and eventually, register) a fmt
parameter for OSC.
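Purely for illustration (the media line, port, and X-osc fmt token
below are all made up -- as noted above, a real fmt parameter would
eventually need to be registered), an SDP description along these
lines might look like:

    v=0
    o=- 0 0 IN IP4 192.0.2.1
    s=hypothetical OSC session
    c=IN IP4 192.0.2.1
    t=0 0
    m=application 57120 udp X-osc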
One of the things I'm planning to do now that RTP MIDI
is finally nearing Last Call is to look at session management
frameworks for RTP MIDI that would use SIP + SDP.  If OSC
went this route too, applications could use the Offer/Answer
protocol:
http://www.ietf.org/rfc/rfc3264.txt

to negotiate to use MIDI (for baseline support) or OSC
(to do more sophisticated things).  This was one of the
motivations behind RTP MIDI -- to offer a migration path
from MIDI, without requiring backward-compatibility between
control languages ...
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---


Re: [linux-audio-dev] GMPI?

2003-09-29 Thread John Lazzaro

 Steve Harris [EMAIL PROTECTED] writes:

 Standards processes that I've been exposed to generally mandate regular
 weekly meetings (teleconference and/or irc), with less frequent
 face-to-face meetings, and most business is sorted out during the meetings.
 Email is only used for tying up loose ends and exchanging text, minutes,
 etc.

The IETF is a successful counter-example -- email is the only way
any decision can be made, meetings are optional, and no decision
made at any meeting is binding until consensus occurs on the mailing
list to confirm it.

I'm hesitant to comment further about GMPI and its chances for
success or failure, because I've been too busy trying to finish
RTP MIDI to keep a close eye on it.  My only worry stems from a
common IETF belief -- that the standards process is a great way
to polish and reach consensus on a substantially complete design,
but using the standards process as the vehicle to do the design
is a much harder row to hoe.  A good example of this is 802.11,
which was an incredibly long and painful experience because many
parties brought bits and pieces of wireless Ethernet to the IEEE
table.  Only the inherent goodness of the core idea (packet radio)
kept everyone at the table to eventually produce a standard that
could be interoperably deployed (802.11b, aka Wi-Fi, and its
lettered follow-ons).

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


Re: [linux-audio-dev] softsynth SDK

2003-08-19 Thread John Lazzaro

 Frank Barknecht [EMAIL PROTECTED] writes:

 For a compiled language SFront might be used, where in the end the
 synth description is actually compiled with a C compiler into a synth. 

Yes, unfortunately no jack support yet ... 

More generally:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/wemp01.pdf

sketches out the tricks sfront uses to do the compilation; most
of these would be applicable to specification languages other
than Structured Audio.  Doing audio engine compilation for an
API or language that has critical mass would be a good project
for someone to do ... 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


[linux-audio-dev] RTP MIDI update ...

2003-06-28 Thread John Lazzaro

Hi everyone,

Just sent in an updated version of the RTP MIDI normative I-D
off to [EMAIL PROTECTED] You can download a copy now from:

  http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-rtp-midi.txt

In a hopeful sign, the Change Log is sufficiently short to
reproduce in full below:

  Chapter M (Appendix A.9) has been redesigned, to follow the semantic
  design of Chapters C and E.  Several definitions in Appendix A.1 have
  been changed to reflect this change, as have the chapter inclusion
  semantics for Chapter M in Appendix C.1.3.

  Many small editorial changes throughout the document, to correct
  grammatical errors and improve phrasing.

I'm actually starting to re-code sfront networking to be compliant
with the I-D (it has fallen out of date since the AVT RTP MIDI effort
began), in the hopes of gcc catching bugs that peer-review may miss.
Once that coding is complete, Last Call is probably not too far away
... so if you've been planning to spend a few hours to reading over
the I-D and sending along comments, now would be a good time to do it.
You might also want to download the non-normative Implementation Guide
for RTP MIDI:

  http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


[linux-audio-dev] non-standard uses of MIDI reserved system opcodes

2003-03-27 Thread John Lazzaro

Hi everyone,

There are 2 MIDI System Common opcodes (0xF4 and 0xF5) and 2 MIDI
System Realtime opcodes (0xF9 and 0xFD) that the official MIDI
standard reserves for future use.

I'm currently collecting examples of non-standard uses of these
opcodes in hardware and software (on MIDI 1.0 DIN cables and in other
hardware and software contexts). Examples I've collected so far are:

 -- 0xF9 as MIDI Tick

 -- 0xF5 for both MIDI endpoint and virtual cable selection

I'm doing this as part of the process of finishing:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-mwpp.txt
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt

The packetization uses the semantics of these commands as part of its
resiliency scheme, and so knowledge of non-standard uses of the reserved
commands is needed to help craft resiliency logic for the opcodes.  Probably
best to send the info directly to [EMAIL PROTECTED], the topic
is too mundane for the list ... thanks in advance!

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


[linux-audio-dev] MWPP status for SF IETF meeting ...

2003-03-09 Thread John Lazzaro

Hi everyone,

The pages fly off the calendar, and it's time for
another IETF meeting -- MWPP is getting pretty close to
Last Call.  I posted a few outstanding timing issues to:

http://www1.ietf.org/mail-archive/working-groups/avt/current/msg02221.html

that might be of interest to some LAD folks ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


Re: [linux-audio-dev] MIDI sync and ALSA sequencer

2003-03-08 Thread John Lazzaro

  Arthur Peters [EMAIL PROTECTED]

 MIDI Clock mainly but MIDI Tick would be cool too

MIDI Tick is a rogue use of an undefined System Realtime command;
see:  

http://www.midi.org/about-midi/table1.shtml

and notice that the System Real-Time Messages table has an undefined
opcode where the MIDI Tick opcode should be.  The MMA liaison
to MWPP (Jim Wright) spotted this a few weeks ago; the next
rev of the MWPP document will excise MIDI Tick completely.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


[linux-audio-dev] MWPP I-Ds ...

2003-03-03 Thread John Lazzaro

Hi everyone,

The months fly off the calendar, and the time for the
next IETF meeting arrives (in San Francisco later this month).
Pick up the latest version of the MWPP I-D's at:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-mwpp.txt
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt

Back at the end of the last meeting, I had predicted these
documents would be candidates for Last Call (the end of the writing
process and the beginning of the vetting process ...), but alas,
it was not meant to be. But I really think we're only a month
or so away ... the things left to do are doable and not overwhelming.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-24 Thread John Lazzaro
 we don't care about non-power-of-two periods, i think. however, i do
 consider periods defined in units of times and not frames to be broken
 hardware design. it forces inefficiencies into the software that are
 totally unnecessary.

For what it's worth, the SAOL standard has to deal with this problem
too, since the user gets to specify an integer k-rate and an
integer a-rate. Eric went for this solution:


5.8.5.2.2  krate parameter

global parameter - krate int;

The krate global parameter specifies the control rate of the
orchestra.  [...]

The krate parameter shall be an integer value between 1 and the
sampling rate inclusive, specifying the control rate in Hz.  [...]

If the control rate as determined by the previous paragraph is not an
even divisor of the sampling rate, then the control rate is the next
larger integer that does evenly divide the sampling rate.  The control
period of the orchestra is the number of samples, or amount of time
represented by these samples, in one control cycle.


---

This has been controversial, since it limits the ability to use
SAOL to emulate existing coding standards that have non-integer
relationships between frames and the sample rate. The win has
been decoder implementation simplicity.
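Stated as code, the rounding rule quoted above amounts to something
like this (my own restatement, assuming integer rates; not text from
the standard):

    /* 5.8.5.2.2: if the requested control rate does not evenly divide
     * the sampling rate, use the next larger integer that does. */
    int effective_krate(int krate, int srate)
    {
        int k = krate;
        while (srate % k != 0)
            k++;            /* always terminates: srate divides srate */
        return k;
    }

    /* effective_krate(441, 44100) == 441  (100 samples per kcycle)
       effective_krate(440, 44100) == 441                           */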

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-


[linux-audio-dev] MWPP implementation guide ...

2003-01-14 Thread John Lazzaro


Hi everyone,

I sent the draft of the complete MWPP implementation guide off
to [EMAIL PROTECTED] today. You can download it now from:

  http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt

See the abstract below for details, as well as the I-D change
log.  Comments are welcome. I'll turn the document around one more
time before the March 3 San Francisco cutoff date, and can incorporate
your feedback into the revision.

Writing this 77-page (!) document added a few more open issues
to draft-ietf-avt-mwpp-midi-rtp-05.txt.  Next, I'll spend a few days
writing the RTP over TCP I-D, and then I'll start working through the
open issue list for draft-ietf-avt-mwpp-midi-rtp-05.txt. I expect to
submit an -06.txt in time for the March 3 deadline.

---

INTERNET-DRAFT  John Lazzaro
January 15, 2003  John Wawrzynek
Expires: July 15, 2003   UC Berkeley


 An Implementation Guide to the MIDI Wire Protocol Packetization (MWPP)

   draft-lazzaro-avt-mwpp-coding-guidelines-01.txt

Abstract

 This memo offers non-normative implementation guidance for the MIDI
 Wire Protocol Packetization (MWPP), an RTP packetization for the
 MIDI command language. In the main body of the memo, we discuss one
 MWPP application in detail: an interactive, two-party, single-
 stream session over unicast UDP transport that uses RTCP. In the
 Appendices, we discuss specialized implementation issues: MWPP
 without RTCP, MWPP with TCP, multi-stream sessions, multi-party
 sessions, and content streaming.

---

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-






Re: [linux-audio-dev] Blockless processing

2002-12-13 Thread John Lazzaro
 Steve Harris [EMAIL PROTECTED] writes:

 SAOL is still block based AFAIK. 

See:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/wemp01.pdf

Sfront does no block-based optimizations. And for many
purposes, sfront is fast enough to do the job. 

It may very well be that sfront could go even faster
with blocking, although the analysis is quite subtle --
in a machine with a large cache, and a moderate-sized
SAOL program, you're running your code and your data 
in the cache most of the time.   

Remember, blocking doesn't save you any operations; it
only improves memory access and overhead costs. If those
costs are minimal for a given decoder implementation,
there is not as much to gain.
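A schematic contrast (not sfront code; names are made up) of what is
and isn't at stake:

    /* Either way, a unit generator does exactly nframes multiply-adds;
     * blocking only amortizes per-call overhead and changes the
     * memory-access pattern, so its payoff depends on how much those
     * costs matter in a given decoder. */
    typedef struct { float gain; } ugen;

    float tick(ugen *u, float in)      /* per-sample style: one call per sample */
    {
        return u->gain * in;
    }

    void run_block(ugen *u, float *out, const float *in, int nframes)
    {
        for (int i = 0; i < nframes; i++)  /* block style: one call per buffer */
            out[i] = u->gain * in[i];
    }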

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Plugin APIs (again)

2002-12-10 Thread John Lazzaro
 Steve Harris [EMAIL PROTECTED] writes:

 Yeah, please do, that would be damn useful. For rapid prototyping if
 nothing else
 
FYI, making sfront produce code suitable for .so's is at the
top of the list of things to do these days, because AudioUnits
support awaits it. But, that's the sfront enhancements list
of things to do, which is kind of subordinate to the get MWPP
to Last Call in the IETF list of things to do ... so it may
take a while. 

Basically, many Logic users would like to use SAOL as a scripting
language for their own plugins ... thus, AudioUnits support.
This could actually be a catalyst for SAOL becoming more popular
generally, if it works out ... 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Plugin APIs (again)

2002-12-07 Thread John Lazzaro
 David Olofson writes:

 The point I think you're missing is that a control change event is 
 *exactly* the same thing as a voice start event on the bits and 
 bytes level. 

Lossy MIDI filters will prune away two MIDI Control Change commands
in a row for the same controller number with the same data value,
apart from controller numbers (like All Notes Off) whose semantics
have meaning in this case. And the assumption underlying the behavior
of these filters is present in subtle ways in other MIDI gear and
usages too. For example, a programming language that presents an
array with 128 members, holding the last-received (or default) value
of each MIDI controller, presents an API that implicitly does this
filtering, no matter how frequently the program samples the array.
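A tiny sketch of that 128-entry model (illustrative only;
handle_channel_mode() is a hypothetical hook, and the channel-mode
cutoff at controller 120 is my addition):

    /* A program that only reads ctrl[] can never tell that the same
     * value arrived twice, so back-to-back identical Control Changes
     * are implicitly filtered -- except for channel mode controllers
     * (120-127, e.g. All Notes Off), where the arrival itself is the
     * event and must be acted on. */
    void handle_channel_mode(unsigned char number, unsigned char value);

    unsigned char ctrl[128];     /* last-received value for CC 0..127 */

    void control_change(unsigned char number, unsigned char value)
    {
        if (number >= 120)
            handle_channel_mode(number, value);
        ctrl[number] = value;
    }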

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Re: image problem [was Re: [Alsa-devel] help for a levelmeter]

2002-11-09 Thread John Lazzaro
 Paul Davis [EMAIL PROTECTED] writes:

 OS X is a major challenge to the linux audio religious faithful.

It's an opportunity, too, though -- there's a segment of the Mac
population that can barely justify the cost premium for Mac hardware,
because they use the hardware for recreation or avocation, not as
a tool to make more money. If the Linux folks:

 -- Port the popular apps to OS X, and improve them so that the
honest and budget-conscious Mac folks adopt the free Linux
apps rather than pirate commercial apps.

 -- Keep those apps running just as well (or better) under Linux,
and evangelize the hardware cost differential.

Linux audio could probably carve out the budget-conscious Mac 
subset over a period of 3-5 years, which (random guess) is 100,000
users or so. 

That's not my motivation for putting sfront on OS X -- that has
more to do with just reaching more users to popularize the underlying
standards -- but it might be a motivation for the mainstream Linux
GUI audio apps to start a serious OS X porting effort.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Interesting USB MIDI master keyboards, do they

2002-11-03 Thread John Lazzaro
 work with Linux ?

 Tim Goetze [EMAIL PROTECTED] writes:

 there's a difference though: the usb 1 ms is jitter, there's no way
 to reconstruct the original timing. this in contrast to midi, where 
 you can extrapolate the exact event time quite well (provided you're
 looking at a stream of mostly isolated events). oh well, most people
 think 1 ms is below human perception limits anyway.

People who are into this topic might want to take a few minutes
to review Appendix C.1 (pp 66-70) of the MWPP draft:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-mwpp.txt

It basically describes the three methods MWPP provides for coding
timestamps onto MIDI commands when packetizing a MIDI stream. The
goal is that whatever your MIDI source is (a MIDI 1.0 DIN jack,
an algorithmic composition program, etc), one of these three
methods will be the right one to capture the timing, as well as it
can be captured, into an RTP packet. Feedback is welcome on the topic;
we're starting to close in on Last Call for the document, but
there are still a few months left ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] I think that what he means is even more important than recognition...

2002-10-31 Thread John Lazzaro
 Paul Davis [EMAIL PROTECTED] writes:

 its finally dawning on some people that whatever
 benefits specific plugin APIs bring to particular products and
 hardware, the proliferation of them is actually *more* harmful. i am
 skeptical that anyone in the commercial side of things really has the
 will to do anything about this, even though they may say they do.

Watching the Apple Emagic acquisition, followed by news that Logic
would transition to AudioUnits, followed by a mass influx of new
AudioUnits developers into the coreaudio-api list, was very
enlightening. The economics of commercial plug-in development
is such that once Apple owned Logic and made its intentions clear,
developers could not help but be interested in supporting the
plug-in architecture.

There's a natural follow-on move here -- Microsoft buying one or
more of the PC flagship applications, and moving them all to 
support one new or existing standard, that Microsoft licenses
freely to all comers (with an anti-GPL poison pill). Then we're
back to familiar territory, Microsoft owning one way to
do it, Apple owning a second way to do it, and everyone else  
supporting one or both. The natural reason to expect this not
to happen is the small absolute size of the audio content market
to a company like Microsoft. However, strategic issues may come
into play in Redmond to make this happen --

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] I think that what he means is even more important than recognition...

2002-10-31 Thread John Lazzaro
 Paul Davis [EMAIL PROTECTED]

 at what point do you expect Digidesign TDM plugins
 to fall by the wayside?

DigiDesign may be the special case here, assuming that
Avid stays independent -- the ProTools business model
offers them refuge from consolidation forces, being
hardware-restricted and embraced by the high end.

Avid's market cap is $370M -- an order of magnitude
higher than what Apple paid for Emagic, but still
digestible for the likely suitors (Microsoft and
Adobe). I use likely as a relative term -- the
hardware-centric nature of Avid makes it an
unnatural fit for both of those companies ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



[linux-audio-dev] MWPP implementation guide ...

2002-10-27 Thread John Lazzaro


Hi everyone,

Just sent off the first draft of the Implementation Guide
for MWPP, in time for the deadline for the next IETF meeting. You
can download it as:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-guide.txt

See below for the abstract. A few sections in the main body,
as well as the Appendices, remain to be written, but I decided to send
it off because the completed parts are in pretty good shape, and early
feedback on the direction the document is taking (as the non-normative
companion to the main MWPP document) would be helpful.

The current plan is to make some minor changes to the
normative MWPP document, in response to comments received since its
September 22 submission, and resubmit it in time for the meeting
deadline for updated documents.  So if you're holding onto any
comments on this document, now is a good time to send them along --
this document can be downloaded as:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-mwpp.txt

---

INTERNET-DRAFT  John Lazzaro
October 27, 2002  John Wawrzynek
Expires: April 27, 2003  UC Berkeley

 An Implementation Guide to the MIDI Wire Protocol Packetization (MWPP)

   draft-lazzaro-avt-mwpp-coding-guidelines-00.txt

Abstract

 This memo offers non-normative implementation guidance for the MIDI
 Wire Protocol Packetization (MWPP), an RTP packetization for the
 MIDI command language.  The memo provides a detailed description of
 a sample MWPP application: an interactive MIDI session between two
 parties that send and receive RTP and RTCP flows over unicast UDP
 transport. The Appendices focus on special issues that arise in
 other types of applications: content-streaming applications, multi-
 party applications, applications that use reliable transport such
 as TCP, applications that do not use RTCP, and applications that
 send several MWPP RTP streams in a single session.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-





Re: [linux-audio-dev] Re: The Image (usablity) problem from a Musicians point of view

2002-10-22 Thread John Lazzaro
 Lea Anthony [EMAIL PROTECTED] writes

 Wishing for people to write native apps for a system with no market is
 like wishing Windows would die. It might happen, but it's not bloody
 likely.

However, if you port a novel Linux application to Windows or OS X,
the users on those platforms are quite happy to add the free tool
to their workflow if it helps them do their work better. This is
how the GNU project got its start, after all -- free, usable software
that ran on popular commercial UNIX platforms. Sfront has taken
this route -- most of my users are non-Linux users now. 

I think there's a migration path to Linux that could be based on
this strategy -- if the free software community comes up with a 
set of audio content-creation tools that Windows or OS X users 
are willing to use as a complete workflow, the case for switching
over to Linux to run the workflow more efficiently (or to avoid
OS license upgrade fees, etc) is easier to make. Certainly on the
CLI side, many people started out as Cygwin users to run emacs
and gcc and TeX under Windows, and then decided to add a dual-boot
option for Linux to get the real thing.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread John Lazzaro
 Paul Davis [EMAIL PROTECTED] writes

 switching between contexts is massively more
 expensive under windows and macos (at least pre-X),

As a data point, I ran two different sa.c files (the audio
engines sfront produces) set up as softsynths using different
patches under OS X (using CoreAudio + CoreMIDI, not the
AudioUnits API), and it worked -- two patches doubling together,
both looking at the same MIDI stream from my keyboard, both
creating different audio outputs into CoreAudio that were
mixed together by the HAL. So, for N=2 at least, OS X seems
to handle N low-latency softsynth apps in different processes
OK ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



[linux-audio-dev] sfront 0.85 10/13/02 released

2002-10-14 Thread John Lazzaro



Pick up sfront 0.85 -- 10/13/02 at:

  http://www.cs.berkeley.edu/~lazzaro/sa/index.html

   [1] Mac OS X support for
   real-time MIDI control, 
   using the -cin coremidi
   control driver. Up to four
   external MIDI sources are
   recognized. Virtual sources
   are ignored; expect virtual
   source support in a future 
   release.

   [2] Mac OS X memory locking
   now works in normal user 
   processes, and is no longer 
   limited to root.

-

All the changes in 0.85 are OS X specific, but I thought I'd post this
here in case people are curious about OS X porting ...

With this release, all of the real-time examples in the sfront
distribution run under Mac OS X. Specifically, it's now possible
to use OS X as a Structured Audio softsynth -- I've been running my
PowerBook this way with 2ms CoreAudio buffers, with MIDI input from my
controller via an Edirol UM-1S USB MIDI interface, and audio output
via the headphone jack on the Powerbook, and things work glitch-free.

Also, because audio and MIDI are both virtualized under OS X, it's
possible to run multiple ./sa softsynths in parallel (i.e. from
different Terminal windows) and get usable layering ... although in
most cases, you'd be better off doing your layering inside a single SA
engine.

To see the -cin coremidi control driver in action, run the
sfront/examples/rtime/linbuzz softsynth; it will find external MIDI
sources (up to 4, no virtual source support ...) and use them to drive
the SA program in real-time. In the linbuzz example, the pitch wheel
(set up to do vibrato), mod wheel (spectral envelope), and channel
volume controllers are all active -- you can look at the linbuzz.saol
SAOL program to see how they are used.

The actual CoreMIDI code is in:

sfront/src/lib/csys/coremidi.c

The most interesting aspect of this code is that a single
AF_UNIX SOCK_DGRAM socketpair pipe (named csysi_readproc_pipepair) is
used for communication between an arbitrary number of CoreMIDI
readprocs (one for each active source) and the SA sound engine (which
runs inside the CoreAudio callback -- the actual main thread sleeps
and does nothing). Writing the pipe is blocking (but should rarely
block, and never for significant time), but reading the pipe is
non-blocking.

The semantics of AF_UNIX SOCK_DGRAM (AF_UNIX is reliable,
SOCK_DGRAM guarantees the messages from the CoreMIDI readprocs don't
mix) make it a good choice for doing the multi-source MIDI merge. The
actual messages sent in the pipe consist of a preamble to identify
the readproc, and the (error-checked for SA semantics) MIDI commands
in each MIDIPacket.
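For readers who haven't used this pattern, here is a generic sketch of
the socketpair setup described above -- this is not the sfront source
(see sfront/src/lib/csys/coremidi.c for the real thing), and the names
are made up:

    #include <sys/socket.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int pipepair[2]; /* [0]: written by readprocs, [1]: read by engine */

    int make_midi_pipe(void)
    {
        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, pipepair) < 0)
            return -1;
        /* engine side is non-blocking: the audio callback must never stall */
        return fcntl(pipepair[1], F_SETFL, O_NONBLOCK);
    }

    /* a CoreMIDI readproc sends one datagram per (preamble + MIDI) message;
       the write is blocking, but should essentially never block in practice */
    void readproc_send(const unsigned char *msg, size_t len)
    {
        write(pipepair[0], msg, len);
    }

    /* the engine drains whatever has arrived, without ever waiting */
    ssize_t engine_poll(unsigned char *buf, size_t maxlen)
    {
        return read(pipepair[1], buf, maxlen); /* < 0 w/ EAGAIN => nothing yet */
    }

SOCK_DGRAM is what makes the multi-source merge safe: each readproc's
message arrives as one intact datagram, so commands from different
sources never interleave mid-message.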

At this point, the Linux and OS X real-time implementations
support all of the same features (audio input, audio output, MIDI In,
RTP networking) ... I'm not sure if AudioUnits support makes sense for
sfront, I'll probably take a closer look at the issue soon ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-





RE: [linux-audio-dev] Needing advice on implementing MIDI in my app (RTMix)

2002-10-11 Thread John Lazzaro

 2) Can I use DMIDI in combination with TCP/IP (I saw you mentioned UDP
 on your site)?

If you're sending over IP, you should also look at MWPP, an RTP
packetization for MIDI under development in the IETF. See:

http://www.cs.berkeley.edu/~lazzaro/sa/pubs/txt/current-mwpp.txt

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



[linux-audio-dev] MWPP I-D -05.txt released

2002-09-24 Thread John Lazzaro


Hi everyone,

Just released the latest revision of the IETF Internet-Draft
to standardize the RTP packetization for MIDI. The document reflects
the comments submitted on the last document revision by several
LAD-folks (see the document change log for details ...).

While this draft isn't the Last Call document, we're
probably only a few revisions away from Last Call, so if anyone has
been holding off on reviewing the memo and sending comments, now
is a good time to do so ... below is the abstract and document
download info:

---

From: [EMAIL PROTECTED]
Subject: I-D ACTION:draft-ietf-avt-mwpp-midi-rtp-05.txt
Date: Tue, 24 Sep 2002 07:52:50 -0400

A New Internet-Draft is available from the on-line Internet-Drafts
directories.  This draft is a work item of the Audio/Video Transport
Working Group of the IETF.

Title   : The MIDI Wire Protocol Packetization (MWPP)
Author(s)   : J. Lazzaro, J. Wawrzynek
Filename: draft-ietf-avt-mwpp-midi-rtp-05.txt
Pages   : 94
Date: 2002-9-23

The MIDI Wire Protocol Packetization (MWPP) is a general-purpose
RTP packetization for the MIDI command language. MWPP is suitable
for use in both interactive applications (such as the remote
operation of musical instruments) and content-delivery applications
(such as MIDI file streaming). MWPP is suitable for use over
unicast and multicast UDP, and defines tools that support the
graceful recovery from packet loss. MWPP may also be used over
reliable transport such as TCP. The SDP parameters defined for MWPP
support the customization of stream behavior (including the MIDI
rendering method) during session setup. MWPP is compatible with the
MPEG-4 generic RTP payload format, to support the MPEG 4 Audio
object types for General MIDI, DLS2, and Structured Audio.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-ietf-avt-mwpp-midi-rtp-05.txt

---

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Re: SysEx interface for softsynths [was Re: [linux-audio-dev] LADSPA and TiMidity++]

2002-09-19 Thread John Lazzaro

 I didn't see a way to contact them via email, otherwise I would inquire
 as to how to proceed in getting an ID.

http://www.midi.org/about-mma/mfr_id_app.pdf

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] LADSPA and TiMidity++

2002-09-17 Thread John Lazzaro

 [EMAIL PROTECTED] writes:
 you can't ask units about things, unless you have two cables - and
 that's not really part of the standard

Actually, it is. The System-Exclusive Universal ID command
space is general-purpose functionality that, in many cases,
assumes units talking to each other via cable pairs. See:

http://crystal.apana.org.au/~ghansper/midi_introduction/midi_sysex_universal.html

for details. The simplest example is the Generic Handshaking
instructions, which do flow-control for big Sample Dumps via
a set of commands implementing ACK, NAK, WAIT, EOF, etc.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] LADSPA and TiMidity++

2002-09-17 Thread John Lazzaro

 [EMAIL PROTECTED] writes:
 I was thinking about CCs, Program Change and stuff. Is there a
 standard way of asking a synth for the names of its patches, or
 which CCs are assigned to what, for example?

This does seem like the sort of device-independent info that
could be a part of the General Information Universal ID
(0x06 Sub-ID), but I've never seen it -- maybe someone with
the MMA document handy could check for sure. This sort of
request-reply transaction is the sort of thing the GI
UIDs do, such as the Identity Request/Identity Reply:

http://www.borg.com/~jglatt/tech/midispec/identity.htm

Identity Request/Identity Reply returns enough information
for a patch editor to know what Manufacturer SysEx commands
the device understands. Although a major undertaking, it is
probably in the realm of the doable to:

 -- Look at a bunch of synth manuals, and see the general
device-dependent commands it uses to send back items
like the name of the patch.

 -- Make a small seed database of SysEx commands for a
few types, and write code that reads the database
format.

 -- Make a networked system for motivated users to send 
you back updates for the database with their own synth.

In other words, distribute the legwork of finding out all
SysEx command variants for every type of synth, to the 
vast userbase :-). Sort of like CDDB, but on a smaller
scale ...
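For orientation (my own summary of the Universal Non-Real-Time message
referenced above, not a normative description), the Identity Request
is a six-byte SysEx message:

    /* 0x7E = Universal Non-Real-Time, 0x7F = "all call" device ID,
       0x06 0x01 = General Information / Identity Request */
    unsigned char identity_request[] = { 0xF0, 0x7E, 0x7F, 0x06, 0x01, 0xF7 };

    /* The Identity Reply (sub-ID 0x06 0x02) carries the manufacturer ID
       plus family, model, and version bytes -- enough for a patch editor
       to pick the right entry in the kind of SysEx database sketched in
       the list above. */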

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-
___
linux-audio-dev mailing list
[EMAIL PROTECTED]
http://music.columbia.edu/mailman/listinfo/linux-audio-dev



Re: [linux-audio-dev] emagic (logic) drops VST support under OS X

2002-09-01 Thread John Lazzaro

 if anyone has any more information on this (john l?

Doug Wyatt posted something on Friday to coreaudio-api that
seemed to indicate that a candidate build of sample code for
audio units exists, and should be out the door in a week if
things go well. See this thread:
http://lists.apple.com/mhonarc/coreaudio-api/msg02136.html

He replies a few levels into it. You need to validate with
archives/archives to read it ...

Personally, I've been doing the OS X port bottom up for sfront:
first the CoreAudio HAL, now CoreMIDI is in progress. So I 
actually haven't looked at AudioUnits myself yet ... I had 
assumed it was ready for sfront to use, but maybe I'm wrong.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] saol question: adjusting delay line time?

2002-08-21 Thread John Lazzaro


 Will Benton writes
 How can I get around this? 

To do adjustable delays, you want to use an interpolated delay
line structure: you create the delay line in the ipass,
make it large enough to cover the reasonable range of delays,
and then pick the tap on the delay line that matches the
current delay you want (or, if you want the smoothest changes
in delays, do interpolation).

One way to do this is the fracdelay core opcode:

http://www.cs.berkeley.edu/~lazzaro/sa/book/opcodes/filter/index.html#frac

Although honestly, I wouldn't use it; instead I would
build my own custom delay line opcode, and get a nicer
API than the pseudo-OO fracdelay API ... it will also be
easier to benchmark and tune the delay line structure to
run efficiently, since you'll be able to make micro changes
in the SAOL and see the result in the C code in the sa.c
file.
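In C rather than SAOL (just to show the shape of the thing -- the names
and the power-of-two buffer size are my own choices; a custom SAOL
opcode would follow the same structure), a linearly interpolated tap
looks like:

    #define DMAX 65536               /* power of two >= longest delay needed */

    typedef struct {
        float buf[DMAX];             /* zero-initialize before first use */
        unsigned w;                  /* write index, wraps via masking   */
    } delay;

    float delay_tick(delay *d, float in, float delay_samples)
    {
        d->buf[d->w & (DMAX - 1)] = in;
        unsigned di = (unsigned)delay_samples;        /* integer part    */
        float    fr = delay_samples - (float)di;      /* fractional part */
        unsigned r0 = (d->w - di)     & (DMAX - 1);
        unsigned r1 = (d->w - di - 1) & (DMAX - 1);
        d->w++;
        /* linear interpolation between the two nearest taps */
        return (1.0f - fr) * d->buf[r0] + fr * d->buf[r1];
    }

Changing delay_samples from kcycle to kcycle just moves the read taps;
the line itself is allocated once at its maximum length and never
re-created.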

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] saol question: adjusting delay line time?

2002-08-21 Thread John Lazzaro

 Here's my basic architecture (criticism welcome):

One comment: remember that effects instruments can't have
their state updated at the k-rate via labelled control
statements, because there is no label on the effects
instr! Instead, for maximum portability, what you want
to do is have a ksig global variable, have your control
driver write to it, and have your effects instrs import
it. Also, if you're using MIDI, the MIDI Master Channel's
standard name appears in the effects instr. And, you might
consider using the params[] standard name too:

imports exports ksig params[128]

that your control driver can update each kcycle, as described:

http://www.cs.berkeley.edu/~lazzaro/sa/sfman/devel/cdriver/data/index.html#bifs

although this has portability issues. 

 Right, I guess my question is, if I have an effects instrument dly
 that is in route and send statements, will I lose that routing if I
 release and then re-instantiate it, or is there a way to ensure that
 a new dly takes the place of the old one on the bus?  I thought that
 the only way to instantiate effects instruments was in send statements.

No, effects instruments get instantiated once, at startup, and the
bussing topology is fixed. As I mentioned in the earlier reply, for
your specific problem, the solution is a tapped delay line that
is instantiated once at its maximum length, and then tapped at the
appropriate place to get the dynamic delay you want at the moment.

In general, though, you might find yourself in a situation where you
have a series of send/route statements that set up an audio signal
processing chain from input_bus to output_bus, but you want to tap
off that audio into an arbitrary instrument in the system. In this
case, the best thing to do is to either use a global wavetable or
a global ksig array as a buffer region; the basic architecture would
look something like this:

globals {
ksig lastkcyc[100]; // 100 audio samples per control period
krate 441;
srate 44100;
send(dispatch; ; input_bus);
sequence(dispatch, receiver);
}

instr dispatch ()

{
  ksig holding[100];
  ksig kidx;
  asig aidx;
  imports exports lastkcyc[100];

  // push last cycle into globals

  kidx = 0;
  while (kidx < 100)
   {
 lastkcyc[kidx] = holding[kidx];
 kidx = kidx + 1;
   }

  // at a-rate, buffer next cycle up

  holding[aidx] = input[0];  // mono input_bus assumed here
  aidx = (aidx == 99) ? 0 : aidx + 1;

}

instr receiver ()

{
   ksig imports lastkcyc[100];

   // at each kcycle, read in lastkcyc and prepare for
   // the a-rate; no need to make a local copy

   // note sequence() statement ensures ordering
   // is correct for this to work
}

This was all written on the fly and not tested or optimized.
But it's rare that you'll need to use a technique like this --
only in situations where you want your SAOL code to behave
like a patch bay for effects that you can plug and unplug
on the fly, and even then you might be better off just using
a single effects instr, and doing the patch bay inside of
it, using user-defined aopcodes to do the different
effects. 

Also, note the subtle way the code above is written, with
respect to the sequence order and such. The SAOL execution
ordering is quite explicit about what happens when, and
how, when it comes to ksig imports and exports; this was
all purposely done so that compilers can do block coding
on the a-rate section without worrying about the effects
of inter-instrument communication. See:

http://www.cs.berkeley.edu/~lazzaro/sa/book/append/rules/index.html

all of the links under Decoder Execution Order, to see
the logic behind this.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] analysis/resynthesis environment: python?

2002-08-17 Thread John Lazzaro

 Will Benton writes:

 [...] especially since sfront supports multiple targets.

Speaking of which, sfront 0.83:

http://www.cs.berkeley.edu/~lazzaro/sa/sfman/user/install/index.html#download

which came out last week, supports CoreAudio directly under
OS X, and I'm currently starting on adding CoreMIDI and then
AudioUnits support -- 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] App intercomunication issues, some views. GSOUND ????

2002-08-02 Thread John Lazzaro

 Joshua Haberman writes:
 
 Building modular, reusable software is
 a noble goal, but it's extremely difficult.  Things that may seem like
 they should be logically separable often require tighter coupling than
 you would like if they are to be efficient and usable.

The Propellerhead folks have been doing interviews lately promoting
the latest Reason, and do a good job of stating the case for this 
viewpoint -- unfortunately I can't seem to find the link to the 
interview, maybe someone else can post it, it was worth a read ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] App intercomunication issues, some views.

2002-07-22 Thread John Lazzaro

 Bob Ham writes:

 Unfortunately, audio work
 isn't one of them.  There is no such framework to provide similar
 functionality to audio programs, yet.

The IETF multimedia stack:

http://www.ietf.org/html.charters/avt-charter.html
http://www.ietf.org/html.charters/mmusic-charter.html
http://www.ietf.org/html.charters/sip-charter.html

As Phil noted, MIDI is now in development:

http://www.ietf.org/internet-drafts/draft-ietf-avt-mwpp-midi-rtp-04.txt

Scroll down the list of RTP I-D's and RFCs for RTP:

http://www.ietf.org/html.charters/avt-charter.html

Look at the breadth of coverage, and the number of experts 
contributing in every speciality, from SMPTE video through
DTMF tones. Buy into RTP and SIP and many, many things come
for free ... 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Gibson MaGIC

2002-07-03 Thread John Lazzaro

 There was some discussion about the legal situation a while back,
 but is it cool to use it in opensource products?

If your goal is to interoperate with a MaGIC product, then you
may have to address this issue ... but if you're just looking
for real-time protocols for media over IP, an open standard like
RTP seems a better match to an open source product:

http://www.cs.columbia.edu/~hgs/rtp/faq.html

To get started ... then see:

http://www.ietf.org/html.charters/avt-charter.html

--jl



RE: [linux-audio-dev] digigram's ethersound

2002-06-21 Thread John Lazzaro

 From: Men Muheim [EMAIL PROTECTED]

  I wrote
  http://www.cs.columbia.edu/~hgs/rtp/history.html

 this link seems a bit out of date! Six years is eternity in this
 business...

 -- men

Yes, but the expired patent mentioned is still expired:

April 1977 
J. Flanagan (of BTL) applied for a patent on Packet Transmission of Speech. 

July 1978 
US patent 4,100,377 granted to J. Flanagan.

And this classic patent is still unexpired:

May 1988 

US patent 4,748,620, Time stamp and packet virtual sequence numbering
for reconstructing information signals from packets granted to Harry
W. Adelmann and James D. Tomcik

I haven't read either of these, but I'd guess these are the baseline
intellectual property for packetizing audio, and newer patents build
on them -- if you're worried about IP on this issue, start worrying
with these, not Gibson or Digigram ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] digigram's ethersound

2002-06-20 Thread John Lazzaro

 Digigram claims its patented. Wonder what they are claiming patent on? 
 USPTO database searches come up with only 4 patents listed to digigram, 
 and nothing regarding audio over ethernet.

A useful reference for the history of patents and prior art on
packetized audio:

http://www.cs.columbia.edu/~hgs/rtp/history.html

--jl



Re: [linux-audio-dev] Audio stream setup

2002-06-15 Thread John Lazzaro

 What are your experiences and which setup would you recommend?

Here's how jwz does it for the DNA lounge:

http://www.dnalounge.com/backstage/src/icecast/

Perhaps it's different in Eastern Europe, but in the US today
multicast is rarely an option on the commercial Internet, so
the practical approach is to put up a server that sends out
as many unicast streams as you have the bandwidth to provide.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread John Lazzaro

 Joern Nettingsmeier writes

 not *hearing_oneself_in_time* is a completely different thing.

Yes, I agree it's a completely different thing, but ...

 if i try to groove on a softsynth, 10
 ms response time feels ugly on the verge of unusable (provided my
 calculations and measurements on latency are correct), and i'm not even
 very good.

This doesn't match my own personal experience -- I can play with 10ms
of constant latency in the loop, and not be terribly bothered by it.
I notice it, but I can play through it, and I didn't grow up playing
instruments with big delays ... just the normal collection of 
Wurlitzers, acoustic and electro-acoustic ...

But that's just me ... maybe I'm an outlier ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread John Lazzaro

 You certainly can't play an instrument with 10ms latency.

See:

http://ccrma-www.stanford.edu/groups/soundwire/delay_p.html
http://www-ccrma.stanford.edu/groups/soundwire/delay.html
http://www-ccrma.stanford.edu/groups/soundwire/WAN_aug17.html

These experiments show the limits of musical latency compensation --
the full story is complicated but in general compensating for 10ms
is quite doable ...

Also, to address another part of this thread, I'd really recommend
using RTP as an underlying technology -- don't underestimate the
amount of thought and work that has been put into these standards
over the past decade, start by reading:

http://www.ietf.org/internet-drafts/draft-ietf-avt-rtp-new-11.ps

and then browse the I-D's and RFC's at:

http://www.ietf.org/html.charters/avt-charter.html 

--jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Does someone knows about RTP ??

2002-05-28 Thread John Lazzaro

Hi Nicolas,

There are advantages to writing your own RTP library
customized to the application -- sfront does this in its 
implementation of MWPP, the MIDI RTP packetization we're 
standardizing through the IETF. You can get a sense of the
complexity involved in writing a custom RTP library, by 
looking as the sfront/src/lib/nsys/ directory in the latest
sfront distribution:

http://www.cs.berkeley.edu/~lazzaro/sa

This implements both SIP and RTP, customized to sfront;
most of the complexity is in the MIDI packetization payload, not
the RTP header processing. 
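
For a sense of scale, the RTP header work is mostly the 12-byte
fixed header from the RTP spec. A rough C sketch (struct and field
names are my own, and real code would also handle CSRC lists and
header extensions):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohs(), ntohl() */

typedef struct {
  unsigned version;      /* should be 2                      */
  unsigned padding;      /* P bit                            */
  unsigned extension;    /* X bit                            */
  unsigned csrc_count;   /* CC field                         */
  unsigned marker;       /* M bit                            */
  unsigned payload;      /* payload type (e.g. dynamic MIDI) */
  uint16_t seq;          /* sequence number                  */
  uint32_t tstamp;       /* media timestamp                  */
  uint32_t ssrc;         /* synchronization source           */
} rtp_header;

/* Returns the payload offset, or -1 if the packet is too short. */

static int rtp_parse(const uint8_t *pkt, size_t len, rtp_header *h)
{
  if (len < 12)
    return -1;

  h->version    = (pkt[0] >> 6) & 0x3;
  h->padding    = (pkt[0] >> 5) & 0x1;
  h->extension  = (pkt[0] >> 4) & 0x1;
  h->csrc_count =  pkt[0]       & 0xF;
  h->marker     = (pkt[1] >> 7) & 0x1;
  h->payload    =  pkt[1]       & 0x7F;

  memcpy(&(h->seq),    pkt + 2, 2);  h->seq    = ntohs(h->seq);
  memcpy(&(h->tstamp), pkt + 4, 4);  h->tstamp = ntohl(h->tstamp);
  memcpy(&(h->ssrc),   pkt + 8, 4);  h->ssrc   = ntohl(h->ssrc);

  return 12 + 4*h->csrc_count;       /* payload starts after CSRCs */
}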

Also, you might want to consider adding support for the
MIDI RTP packetization as part of your project, if so see:

http://www.ietf.org/internet-drafts/draft-ietf-avt-mwpp-midi-rtp-03.txt

for the latest version, although you probably would want
to join the IETF AVT working group mailing list:

http://www.ietf.org/html.charters/avt-charter.html

and track the changes in MWPP as it heads towards 
standardization ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] developer issues in sfront 0.80 ...

2002-05-09 Thread John Lazzaro

 Maarten de Boer writes:

 Question: should sfront support both 0.5 and 0.9, or should we
 force 0.9? In case of supporting both, which should be the default,

I'm happy to include whatever you decide is best into sfront. It
is important to remember that some users, given the choice of
updating their OS to ALSA 0.9 or not using a tool that only
supports 0.9, will choose to not use the 0.9-only tool, out of
inertia ... on the other hand, I understand how much work it is to
support multiple incompatible versions of an API, and if there's
only time to support one version of ALSA, 0.9 is the right version
to keep supporting ... 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] SuperClonider

2002-04-27 Thread John Lazzaro


 --- james mccartney writes

 But things have a way of changing in life, so you might sit tight and 
 see what happens.

Taking the long view, it's more interesting to submit the SuperCollider
language to be an open standard, and to permit royalty-free licensing
for any patents you may have related to the language, than it is to
GPL your own implementation. You can do this acting independently, 
like Adobe with PDF or Pixar with the Renderman language, or you can
hook up with an independent standards body of some sort (in both 
Adobe and Pixar cases, though, I don't know what actions they took
in terms of patents). 

Without taking this step, you're condemning the SC language to a 
lifetime limited to the practical lifetime of your implementation(s).
Whereas, Grace Hopper is dead, but COBOL lives on :-). 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] SuperClonider

2002-04-27 Thread John Lazzaro

 james mccartney writes

 Which of these languages has this been done for: Python Ruby Perl?
 I think that there is only one code tree for each of these languages. 
 Are they condemned?

Yes. 

Larry Rosler makes the case for Perl standardization:

http://www.perl.com/pub/a/2000/06/rosler.html

Without a standards document that precisely defines the semantics of
a language, there's no way to know what the language is. Even _you_
don't really know without the document -- every change to the existing
codebase is a decision made on the fly, as to what is normative and
what is not, without a documentation trail to back it up. 

Note that this argument is true even if you're planning to copyright
the language and sue anyone who makes a compatible implementation --
the standards document acts as a contract between the language 
designer and the programmers in the language, a contract that ensures
that code written to the standard will act the same from revision to
revision of the compiler, as well as from compiler to compiler. 

Without this semantic guarantee, the only way to make a system with
confidence written in the language is to lock down the system to 
run with a particular binary version of the compiler forever. Do
you want to be riding on a train if the train braking system runs
in a system built that way?

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] SuperClonider

2002-04-20 Thread John Lazzaro

Hi Paul,

 John, if you're already working on one, or know anybody else who is,
 give a holler real quick so I don't waste my whole afternoon. :)

No, I've been holding off on JACK until I update my OS from the 2.2
series ... and no one else I know is actively working on it either.
So, your JACK driver is the only one ... thanks for working on it!

I'm actually working on adding the March 2002 MPEG Corrigenda changes
to sfront at the moment ... 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] SuperClonider

2002-04-20 Thread John Lazzaro


On Sat, Apr 20, 2002 at 11:21:19PM +0200, Peter Hanappe wrote:

 MusicN/Csound/SAOL is also not [...]

One could have taken a look at the MPEG 2 Audio standard when
it came out, played with the reference standard encoder and
decoder, and come to the conclusion that an application like
iTunes was impossible because the reference software looked
nothing like it :-). It took a decade to get to iTunes -- but
underlying it all, the bitstreams look like the MPEG 2 Audio
standard. 

SAOL is the same way -- the language is fundamentally capable
of acting as the interchange layer for everything you want to
do. The applications don't exist today, but when MPEG 2 Audio
came out iTunes didn't appear two years later either. What
you see today with SAOL application software is basically 
existence proofs, for example:

On Sat, Apr 20, 2002 at 11:21:19PM +0200, Peter Hanappe wrote:

 MusicN/Csound/SAOL is also not an interactive music system

Sfront is pretty interactive ... it supports, under Linux:

  -- low latency audio input and output
  -- low-latency MIDI input from local sources
  -- low-latency networked MIDI input via RTP

It has control driver APIs to add SASL low-latency control as
well. Internal projects I have going add an RTP SASL driver,
so that you can write (for example) a GSM encoder and decoder
in SAOL, and use it for VoIP over a WAN. You could take the
sfront control driver API:

http://www.cs.berkeley.edu/~lazzaro/sa/sfman/devel/cdriver/intro/index.html

write an interactive controller for any device you have, and
inject SASL or MIDI into the SAOL engine every control cycle.
SASL has commands for both instantiating new instruments
and dumping in audio or control table data. So, I think SAOL itself
is built to handle interaction pretty well, and sfront acts
as a proof of concept that it is a usable standard in practice.
As a command-line tool, it may not have the GUI you'd want to
use with it, but that's not sfront's purpose in life ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] SuperClonider

2002-04-19 Thread John Lazzaro

 So rather than steal all the ideas from SuperCollider, why don't you invent
 something better?! Design a new language and create a new musical paradigm!

No -- come work on Structured Audio instead. We're happy to have multiple
implementations of the language, as an MPEG standard for audio synthesis,
the _goal_ is to have multiple interoperable implementations. See:

http://www.cs.berkeley.edu/~lazzaro/sa/index.html

to get started, and for the actual standards documents see:

http://www.cs.berkeley.edu/~lazzaro/sa/book/append/index.html

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] SuperClonider

2002-04-19 Thread John Lazzaro

 This is obviously missing in MP4-SA. Is there work being done on 
 creating an SAOL interpreter?

There's a really slow interpreter that's the reference implementation
(saolc, by Eric Scheirer):

http://web.media.mit.edu/~eds/mpeg4-old/sa-decoder.html

And there's a VM interpreter project out of EPFL, which runs much
faster:

http://profs.sci.univr.it/~dafx/Final-Papers/pdf/Zoia_final_vDSP_dafx00.pdf
http://citeseer.nj.nec.com/476015.html

But at the moment, I believe the only downloadable software from
this project runs under Windows, as a technology demo:

http://lsiwww.epfl.ch/saog/proj_saint/demo.html

All of these projects (including sfront) live in a world where the
program is presented as a complete SAOL program to the decoder, and
only SASL and MIDI control streams are presentable dynamically. You
could use Structured Audio as a starting point for something that
dynamically added and subtracted SAOL instrs on the fly, but no one
has gone down that path yet ... 

--jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



[linux-audio-dev] New MWPP Internet-Draft

2002-04-15 Thread John Lazzaro



  [EMAIL PROTECTED] writes:


Title   : The MIDI Wire Protocol Packetization (MWPP)
Author(s)   : J. Lazzaro, J. Wawrzynek
Filename: draft-ietf-avt-mwpp-midi-rtp-03.txt
Pages   : 41
Date: 12-Apr-02

The MIDI Wire Protocol Packetization (MWPP) is a general-purpose
RTP packetization for the MIDI command language. MWPP is suitable
for use in both interactive applications (such as pseudo-wire
emulation of MIDI cables) and content-delivery applications (such
as MIDI file streaming). MWPP is designed for use over unicast and
multicast UDP, and defines MIDI-specific resiliency tools for the
graceful recovery from packet loss. A lightweight configuration of
MWPP supports efficient use over TCP.  MWPP is compatible with the
MPEG-4 generic RTP payload format, to support MPEG 4 Audio codecs
that accept MIDI control input.

 A URL for this Internet-Draft is:

http://www.ietf.org/internet-drafts/draft-ietf-avt-mwpp-midi-rtp-03.txt

The change log is a full page, but to summarize:

  -- We believe the packetization is feature-complete.
  -- Most sections rewritten, to reflect the general-purpose refocus of MWPP.


We anticipate the following trajectory:

  [1] About 30% of the packetization in -03 is new or reworked: all of
  the MIDI Systems journaling is new, SDP format parameters for 
  timestamp semantics and journal customization are new, etc. So,
  we're expecting feedback on -03 on the changes, with a -04 that
  fine-tunes the new features as a result.

  [2] After -04, three parallel streams:

  -- Recoding sfront's MWPP implementation to reflect the changes
 in the I-D in the past 6 months, to reality-test the I-D.
  
  -- Widening the circle of review and comment beyond the current
 mailing lists (AVT, linux-audio-developers, and saol-dev) to
 get comments from the broader MIDI community.

  -- Work begins on ancillary informative I-D's to explain how to
 write senders and receivers for MWPP.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-




Re: [linux-audio-dev] Cubic and n-points interpolation

2002-03-22 Thread John Lazzaro

 Nasca Octavian Paul writes

 Where can I find algorithms to make cubic, n-point and other types of  
 interpolations?

If you want to see these in an audio context, the Structured Audio
wavetable generators are heavy on these interpolators:

http://www.cs.berkeley.edu/~lazzaro/sa/book/saol/wave/index.html#env

--jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Watchdogs?

2002-03-19 Thread John Lazzaro


 So, what do we do? Watchdog? In hardware or software?

Sfront uses two software timers: one to catch infinite loops, and
a second to monitor the MIDI In buffer during overruns -- sfront
only senses the /dev/midi input at k-cycle boundaries, and so an
audio buffer overrun might have the secondary effect of overfilling
the MIDI In buffer the device driver keeps (depends on the size of
buffer chosen by the driver). We added this because some users were
driving sfront via a sequencer attached to the MIDI In jack, and
pushing through dense sequences that drove MIDI at the line rate ...

--jl



Re: [linux-audio-dev] Searching for midi-specification

2002-03-05 Thread John Lazzaro

   I'm searching for a free of charge midi-specification 

As Paul noted, the official spec only comes from the MMA, the 
closest you can get on-line is to look over these websites:

http://www.borg.com/~jglatt/tech/miditech.htm
http://www.hinton.demon.co.uk/midi/promidi.html

If you're only looking for the answer to a specific question,
you might find it at one of those URLs.

--jl



[linux-audio-dev] FWD: MWPP status report for IETF53

2002-03-05 Thread John Lazzaro


[LAD folks -- here's the status report for MWPP I just posted to the
IETF AVT group, complete with a URL for the new MWPP document --jl]

---

Hi everyone,

The latest revision of MWPP is out:


Title   : The MIDI Wire Protocol Packetization (MWPP)
Author(s)   : J. Lazzaro, J. Wawrzynek
Filename: draft-ietf-avt-mwpp-midi-rtp-02.txt

 A URL for this Internet-Draft is:
 http://www.ietf.org/internet-drafts/draft-ietf-avt-mwpp-midi-rtp-02.txt

New items include resiliency support for all MIDI Control Change
controllers (including RPN and NRPN), reset semantics for the recovery
journal system, and clarified delta-time semantics for pseudo-wire
emulation.

Unfortunately, I'm not going to be making the trip to Minneapolis;
below is the status report I would have presented ... comments welcome.

---

In Salt Lake, audience concerns were summed up as two questions:

Q1: What is normative and what is not?
Q2: What is MPEG-specific and what is not?

The versions of draft-ietf-avt-mwpp-midi-rtp released since Salt Lake
provide these answers:

Q1: What is normative and what is not?

A1: The format of the bits on the wire, and the optional adherence to
a sending policy that guarantees graceful recovery from packet
loss, are the essential normative parts of MWPP -- these
elements remain from the document presented at Salt Lake.

The rest of the Salt Lake document -- most notably, the detailed
sending and receiving algorithms -- does not need to be normative.
These were stripped out of the draft-ietf-avt-mwpp-midi-rtp series.

Q2: What is MPEG-specific and what is not?

A2: MPEG's requirement for mpeg4-generic support is the only
MPEG-specific text left in draft-ietf-avt-mwpp-midi-rtp.
The MPEG 4 Structured Audio document is no longer a normative
reference.

After releasing the first post-Salt-Lake document, developers from
several application areas contacted me who thought an IETF-approved
MIDI protocol would be potentially interesting for:

  *  MIDI pseudo-wire emulation, to augment MIDI cables with Cat 5 on
 stage or in a studio.

  *  Resiliently streaming standard MIDI files, rather than the
 current practice of downloading the whole performance first.

But MWPP was only a starting point, and needed additional functionality
to work in these contexts. Some of these features were added into
draft-ietf-avt-mwpp-midi-rtp versions, including:

  *  A payload format that can carry many commands per packet, with
 a delta time system suitable for use in both pseudo-wire emulation
 and streaming MIDI file application domains.

  *  A payload section that can (non-resiliently) code the entire MIDI
 wire protocol, including support for embedded System Realtime
 commands, arbitrarily-large System Exclusive commands, and
 System Exclusive commands that code information in the relative
 timing of internal data octets.

  *  Resiliency support for all 128 MIDI Control Change numbers,
 including the MIDI RPN and NRPN systems, and semantics for
 the entire resiliency system in the case of MIDI reset
 events.

The main focus going forward is to complete the list of additions MWPP
needs to be a general MIDI transport. Work items include:

  *  Resiliency support for all MIDI System commands (including MIDI
 System Exclusive commands) where it makes sense to do so -- i.e.,
 commands sufficiently short and real-time oriented that the
 recovery journal mechanism makes sense.

  *  For bulk-data MIDI Systems commands, where the recovery
 journal mechanisms are not appropriate, define an alternative
 mechanism (as simple as protect messages X, Y, and Z by other means,
 or perhaps something more detailed).


-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Choosing a distribution

2002-03-02 Thread John Lazzaro

 Does anyone have a success/horror stories from using Debian? []
 The software that I want [] sfront,

Enrique Robledo Arnucio has done a really great job with the official
sfront Debian package:

http://packages.debian.org/testing/sound/sfront.html

I'll probably stop doing my own alien-generated deb's soon, and just
rely on his releases for Debian support.

--jl



Re: [linux-audio-dev] Choosing a distribution

2002-03-02 Thread John Lazzaro

 Is there something different that he is doing? 

My (alien-created, on a Redhat machine) deb's don't get the
dependencies right, and don't put things in the directories
that good Debian packages are supposed to put things in. Whereas
his deb's are correct in both ways.

 Also, just to mention here, from the tests that I have done with the
 Intel-C compiler (icc), I have been getting about a 3/4 speed increase
 over using gcc!

Not surprising, sfront does its best to generate embarrassingly
parallel code on many types of SAOL code, and it could be that icc
can vectorize these in a way that gcc can't ... I've never tested
icc myself, though, so I don't know what it's doing.

--jl



[linux-audio-dev] MWPP-01.txt notes ...

2002-02-21 Thread John Lazzaro


Hi lad-folk,

The latest rev of the MWPP spec is out:

http://www.ietf.org/internet-drafts/draft-ietf-avt-mwpp-midi-rtp-01.txt


A quick synopsis of the changes:

  o  The MIDI command section now encodes all legal MIDI
 commands, including all MIDI Systems commands.

  o  The MIDI command section header has new features to
 support the efficient streaming of pre-generated MIDI
 performances.

  o  A new SDP parameter (pwe) indicates that a stream
 is suitable for use in pseudo-wire emulation.

  o  Changes in response to Dominique Fober's AVT postings.

Basically, at this point, for applications where the recovery
journal isn't needed (TCP transport) MWPP is done --
if you see any MIDI functionality which is unencodable in
the MIDI Command Section of MWPP, it's a bug we need to fix.

Major remaining work items include:

  o  A redesign of Chapter C of the recovery journal, to 
 handle the semantics of all 128 controllers of the
 MIDI Control Change command. Will probably include
 the creation of a new recovery journal chapter for
 Registered/Non-Registered Parameter Numbers, since
 the Chapter C format is unsuitable for resiliency
 for this feature.

  o  Resiliency guidelines for MIDI Systems. A significant
 subset is amenable to recovery journal techniques;
 the rest requires the hooks for an "other means"
 resiliency, which for unicast could be as simple as
 a separate TCP MWPP link dedicated to the bulk transport
 aspects of MIDI Systems which are unsuitable for UDP
 streaming.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



[linux-audio-dev] new MIDI RTP packetization I-D released ...

2002-02-12 Thread John Lazzaro


Hi everyone,

New version of MWPP (MIDI Wire Protocol Packetization) is out,
here's the IETF blurb:




A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Audio/Video Transport Working Group of
the IETF.

Title   : The MIDI Wire Protocol Packetization (MWPP)
Author(s)   : J. Lazzaro, J. Wawrzynek
Filename: draft-ietf-avt-mwpp-midi-rtp-00.txt
Pages   : 21
Date: 11-Feb-02

This memo describes the MIDI Wire Protocol Packetization (MWPP).
MWPP is a resilient RTP packetization for the MIDI wire protocol.
MWPP defines a multicast-compatible recovery journal format, to
support the graceful recovery from lost packets during a MIDI
session. MWPP is compatible with the MPEG-4 generic RTP payload
format, to support MPEG 4 Audio codecs that accept MIDI control
input.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-ietf-avt-mwpp-midi-rtp-00.txt

---

A few things to note:

[1] We're an official working group item now!

[2] The document has been rewritten, to take into account both the
IETF feedback at the meeting at Salt Lake in December, and the
feedback from MIDI transport developers in the Linux-Audio-Developers
group. These changes are too numerous to list here -- see section 0 of
the draft for details on what's new and what's left to do.

[3] As always, the best way to send in comments is to subscribe to the
IETF Audio Video Transport working group mailing list -- send mail to
[EMAIL PROTECTED] with subscribe in the body.


-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Re: tutorial

2002-01-19 Thread John Lazzaro

 Are there any online documents or tutorial or Master/PhD thesis available
 describing audio compression for MP3, AAC and CELP?

Follow the links off of:

http://www.mpeg.org/MPEG/starting-points.html

Some are non-technical, but many are highly-detailed and at the level
you're looking for ...

For MP3, though, David Pan's tutorial in ACM Multimedia is the best
I've seen; when I teach MP3 lectures I teach off of it ... it's worth
a trip to your local academic library to make a copy of it ... 

--jl



Re: [linux-audio-dev] Introducing DMIDI

2001-12-20 Thread John Lazzaro

 martijn sipkema writes

 A (variable length?) delta time would enable the receiver to schedule the
 events in the packet, but adding this to MWPP isn't trivial, at least the
 MIDI payload would have to be parsed to know where a MIDI message
 ends, it can't be just sent. This is not a problem I think.

My plans at this point is to read over the RTESP-Wedel.pdf document 
carefully, and then take a shot at a redesign of the MIDI Command Payload
encoding to provide relative timing information for latency variation
compensation to work for multi-command packets. There are various tricks
other RTP payloads have used for this sort of thing, and in addition
the MIDI File encoding has its own delta representation which might be
worth borrowing ...  
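
For reference, the MIDI File delta representation mentioned above is
the usual variable-length quantity: 7 data bits per octet, high bit
set on every octet except the last. A small C sketch of the encoder
(illustrative only, not text from the MWPP I-D):

#include <stdint.h>

/* Encode a delta time as a Standard MIDI File variable-length
   quantity. Returns the number of octets written (1 to 4 covers
   28-bit values).                                               */

static int vlq_encode(uint32_t delta, uint8_t *out)
{
  uint8_t tmp[4];
  int n = 0, i;

  do {
    tmp[n++] = delta & 0x7F;
    delta >>= 7;
  } while (delta && (n < 4));

  for (i = 0; i < n; i++) {
    out[i] = tmp[n - 1 - i];         /* most-significant octet first */
    if (i < n - 1)
      out[i] |= 0x80;                /* continuation bit             */
  }
  return n;
}

So a delta of 0x80 encodes as the two octets 0x81 0x00, and a delta
of zero as the single octet 0x00.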

 MWPP as it is now has the possibility of combining several MIDI events
 into a packet. MIDI events rarely occur at the same time exactly, unless
 they are produced by a sequencer application. So for someone playing
 a MIDI keyboard this functionality doesn't make that much sense, or
 I at least don't get it.

One example is startup: you might want the first packet sent to have
a bunch of MIDI Program Change and MIDI Control Change Bank Select
commands, to set up presets. The loss of relative timing information
doesn't matter in that case ... 

 One last comment about the RTESP-Wedel.pdf. Millisecond timestamps
 are used in it.

MWPP uses the audio sampling rate (set in the Session Description 
Protocol descriptor that creates the session), coded as a 32-bit
integer ... any delta-t encoding would work off of that timebase.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Introducing DMIDI

2001-12-17 Thread John Lazzaro

 But from what I understand of RTP the same
 thing would/could happen if the protocols are switched. 

Yes, using RTP isn't about getting QoS for free -- 

BTW, some LAD-folk may not be aware that sfront networking:

http://www.cs.berkeley.edu/~lazzaro/nmp/index.html

uses RTP for MIDI, we presented our Internet-Draft at IETF
52 in Salt Lake a few weeks ago:

http://www.ietf.org/internet-drafts/draft-lazzaro-avt-mwpp-midi-nmp-00.txt

and it received a good reception -- the odds are good that
it will become a working-group item for AVT. The bulk of
this I-D describes how to use the information in RTP to 
handle lost and late packets gracefully in the context of
network musical performance using MIDI ... 

--jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Still I cannot understand why...

2001-12-17 Thread John Lazzaro

 When I first came here (1997?), the best soundfile editors I
 could find were DAP, MiXViews, and Snd. IIRC, *all* of those were
 developed on SGI or some other non-linux system.

I wrote sfront on a HPUX workstation, and the first audio driver
was for its hardware -- UCB CS were given hundreds of these machines
when our new building opened by HP, and so for a few years a lot
of the experimental systems in the building first ran on HP's.

Those machines have mostly left their original owners now, and
the desks they were sitting on are mostly populated by PC hardware
running a mix of NT and Linux, although there are still many Sun's
on chip-design desks, and SGI on graphics desks ... the server
rooms are incredibly diverse, though, one of everything and many
custom things too ... 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



[linux-audio-dev] a few CoreAudio-related pointers

2001-12-12 Thread John Lazzaro


Hi everyone,

With CoreAudio coming up on the list lately, I thought
I'd post two links that might clarify what (if anything) LAD
folks can learn from the implementation details. This brand
new document:

http://developer.apple.com/techpubs/macosx/Darwin/IOKit/DeviceDrivers/WritingAudioDrivers/WritingAudioDrivers.pdf

tells hardware folks how to write a device driver that
works for OS X. In the process, it shows how CoreAudio interconnects
with the kernel from the kernel side -- if you combine this document
with the earlier CoreAudio userland document, it's pretty clear how
the whole system works.

A second difference under OS X is how it handles the equivalent
of our SCHED_FIFO. Basically, the highest priorities in OS X are
reserved for isochronous priorities -- you figure out the Mach
thread associated with your process or your Pthread, and you then
set the mach thread policy to be THREAD_TIME_CONSTRAINT_POLICY -- 
this policy is parameterized, you tell the OS about the periodicity
of your app and such, and the scheduler does its best to meet your
deadlines. The documentation for this is kinda murky right now;
if you look on the darwin-developer mailing list archives for a
set of posts starting on 8 Oct 2001 by Benjamin Golinvaux, and
followups by David A. Gatwood, and a more recent thread on 
Coreaudio about mpg123 porting by David A. Gatwood, you can piece
together the Mach approach to real-time scheduling as implemented
in OS X. 
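
A rough C sketch of the mechanism, pieced together from those posts --
the helper name and the parameter values are mine, and the nanosecond
conversion via mach_timebase_info() is my reading of the API, so treat
this as a starting point rather than as working sfront code:

#include <stdint.h>
#include <pthread.h>
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>

/* Ask the OS X scheduler for a periodic real-time constraint on the
   calling thread: we run every period_ns nanoseconds, expect to use
   compute_ns of CPU each period, and must finish within constraint_ns. */

static int set_time_constraint(uint64_t period_ns, uint64_t compute_ns,
                               uint64_t constraint_ns)
{
  mach_timebase_info_data_t tb;
  thread_time_constraint_policy_data_t policy;
  mach_port_t thread = pthread_mach_thread_np(pthread_self());

  mach_timebase_info(&tb);  /* convert ns to Mach absolute time units */

  policy.period      = (uint32_t)(period_ns     * tb.denom / tb.numer);
  policy.computation = (uint32_t)(compute_ns    * tb.denom / tb.numer);
  policy.constraint  = (uint32_t)(constraint_ns * tb.denom / tb.numer);
  policy.preemptible = 1;

  return thread_policy_set(thread, THREAD_TIME_CONSTRAINT_POLICY,
                           (thread_policy_t) &policy,
                           THREAD_TIME_CONSTRAINT_POLICY_COUNT);
}

For a 44.1 kHz stream with 128-frame buffers, the period would be
about 2.9 ms, with the computation and constraint values set to some
fraction of that period.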

This is posted just for comparative systems purposes --
it's not clear to me that these mechanisms are suitable for Linux
to use, but its always good to see how other people try to solve
a problem ... 

--jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] SCHED_FIFO versus SCHED_RR

2001-11-26 Thread John Lazzaro

  sufficient for what?

 Sufficient to not lock people out of their machine.

Sfront is more paranoid than this -- it uses signals to implement a 
watchdog timer which, if it goes off, relinquishes SCHED_FIFO. The
timer gets reset every time sfront returns from blocking I/O. This
catches the case of your program computing audio just fast enough
to avoid overruns, but not fast enough to cause blocking on audio
reads or writes. It also catches naive SAOL programmers who write
infinite loops ... 

See the code in sfront/src/lib/asys/linux.c for more details, search
for the SCHED_FIFO function primitives to see what's going on ...
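
The general shape of the watchdog, as a stripped-down C sketch (the
names and the timeout are illustrative, not the actual sfront code):

#include <signal.h>
#include <sched.h>
#include <sys/time.h>

/* If this many seconds pass without the audio loop blocking on I/O,
   assume we are starving the machine and give up SCHED_FIFO.        */

#define WATCHDOG_SECS 5

static void watchdog_fired(int sig)
{
  struct sched_param p;

  p.sched_priority = 0;
  sched_setscheduler(0, SCHED_OTHER, &p);  /* back to normal scheduling */
}

static void watchdog_arm(void)
{
  struct itimerval t;

  signal(SIGALRM, watchdog_fired);
  t.it_value.tv_sec     = WATCHDOG_SECS;   /* one-shot timer */
  t.it_value.tv_usec    = 0;
  t.it_interval.tv_sec  = 0;
  t.it_interval.tv_usec = 0;
  setitimer(ITIMER_REAL, &t, NULL);
}

/* Call after every return from a blocking read() or write(): re-arming
   the one-shot timer pushes the watchdog deadline back.               */

static void watchdog_reset(void)
{
  watchdog_arm();
}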

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] Audio streaming over network

2001-10-23 Thread John Lazzaro

 On Tue, 23 Oct 2001, Ryan Mitchley wrote:

 Hi all

 Does anyone here know of a library or API for streaming audio over a
 network?

 M. Edward (Ed) Borasky writes

 Check out sfront at

 http://www.cs.berkeley.edu/~lazzaro/sa/


Soon, but not quite yet -- at the moment, sfront networking does
MIDI resiliently for low-latency situations, and while you could
conceivably hack this to do audio (sending samples encoded as
MIDI control-change events, and reassembling on the other side),
you really don't want to go there.

I'm actively working on SASL networking for sfront at the moment --
SASL is the more general-purpose control language for Structured
Audio, provided as a companion to MIDI control. SASL is well 
suited for writing custom audio codecs in Structured Audio, so
once the SASL packetization is ready, sfront should be a viable
platform for these sorts of experiments. But not for another few
months ... 

--jl



Re: [linux-audio-dev] single-instance LADSPA plugins useful?

2001-10-15 Thread John Lazzaro

 Paul Winkler [EMAIL PROTECTED] wrote

 On Sat, Oct 13, 2001 at 06:24:24PM -0700, John Lazzaro wrote:
 Let me know, if there's no user interest in
 a single-instanced LADSPA, I won't spend cycles 
 doing one ... there are enough audio and control
 drivers in sfront that no one uses already :-).

 Well, as the guy who asked for this (at least most recently), I will
 say this: It's not ideal, certainly. I'd prefer plugins that I can run
 multiple instances of. But:

 1) we could do some pretty cool stuff anyway.

OK, good enough, as long as someone will actually be using it I'll
look into it further ... 

--jl



[linux-audio-dev] single-instance LADSPA plugins useful?

2001-10-13 Thread John Lazzaro


Hi everyone,

Finally got a chance to look at ladspa.h, to
evaluate the feasibility of modifying sfront to produce
sa.c files that would function as LADSPA plugins.

The easily-doable approach would let sfront
create plugins that could only create a single 
instance -- that is, the first call to:

LADSPA_Handle (*instantiate)

would return a non-NULL value, but all 
subsequent calls would return a NULL value; the
void (*activate) and void (*deactivate) methods
would be set to NULL.
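
In ladspa.h terms, the single-instance guard would look roughly like
this -- a hypothetical sketch of what sfront might emit, not actual
sfront output:

#include <stddef.h>
#include <ladspa.h>

/* All sa.c state lives in global variables, so only one live
   instance can exist at a time; track it with a flag.         */

static int sa_instance_active = 0;

static LADSPA_Handle sa_instantiate(const LADSPA_Descriptor *desc,
                                    unsigned long srate)
{
  if (sa_instance_active)
    return NULL;                /* second instance: refuse         */

  sa_instance_active = 1;
  /* ... initialize the global sa.c engine state here ...          */
  return (LADSPA_Handle) &sa_instance_active;  /* non-NULL token   */
}

static void sa_cleanup(LADSPA_Handle instance)
{
  sa_instance_active = 0;       /* allow a fresh instantiate       */
}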

This restriction is necessary because at 
present, the sa.c files sfront creates manipulate
state stored as global variables; this was done  
purposely, to remove one layer of indirection on
memory accesses. There's no technical hurdle to
optionally encapsulating this state in a dynamic
form, but supporting this option would be a 
multi-month effort, and at present no other sfront
application domain requires it. So, I doubt it will
happen in the near term ... 

So, the question I have for LADSPA users is,
is an sfront option that creates a plugin that can
only be instantiated once a useful thing? Or is the
consensus "don't bother, it's too limited"? The 
general idea is, a user would build up a signal 
processing chain in SAOL, then use sfront to create
a plugin for the entire chain, and then instantiate
once in the application.

Let me know, if there's no user interest in
a single-instanced LADSPA, I won't spend cycles 
doing one ... there are enough audio and control
drivers in sfront that no one uses already :-).


--jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



[linux-audio-dev] sfront 0.766 9/23/01

2001-09-23 Thread John Lazzaro


Hi la{d,u}-folk,


New sfront release (0.76) [see change log and pickup info
below]; the main item of interest to LAD folk is that
the audio driver API in sfront has been enhanced -- it's
now relatively straightforward to use sfront with
low-level audio API's that use a callback approach.
See the active driver section of the sfront reference
manual chapter on audio drivers:

http://www.cs.berkeley.edu/~lazzaro/sa/sfman/devel/adriver/index.html#active

for details. I would think this API should be sufficient
for a LAAGA driver ... 

To test out the active driver API, I added PortAudio-based
support for Windows to sfront (via both WMME and DSound, 
see www.portaudio.com for more details on PortAudio). This
went pretty smoothly, I was able to do a complete test of
PortAudio integration w/o ever touching a Windows machine,
by using the PortAudio Linux driver as a stand-in during
testing ... and reports from users doing alpha-testing on 
various Windows platforms were positive.

--jl

-

Pick up sfront 0.76 9/23/01, at:

  http://www.cs.berkeley.edu/~lazzaro/sa/index.html


Change log message:


[1] Enhanced audio
driver API to support
soundcard APIs that 
require callback functions. 

[2] New ps_win_wmme
and ps_win_ds audio
drivers, providing PortAudio 
audio drivers for Windows
WMME and DirectSound APIs.
Thanks to the PortAudio
team, and testers Richard
Dobson, Peter Maas, Kees van
Prooijen, and Tim Thompson.

[3] Audio drivers now control 
default time option selection.

[4] Many bugfixes in
SAOL audio bus system.
Thanks to Robert Sherry.

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] what's wrong with glame

2001-07-26 Thread John Lazzaro

 who said a shallow learning curve was a goal?

 In a word - users! 

I don't think this is realistic for professional media tools.
If it were, there wouldn't be complete course tracks in 
commercial art school for learning how to use commercial 
software packages -- Maya, Photoshop, etc. These folks are
getting trained to spend 40-60 hours a week in front of
monitors, with the goal of being maximally productive. A
semester spent learning how to use the tool is a good 
tradeoff. 

And in fact, I just noticed that SFSU (San Francisco State
University, which specializes in media education here in
the Bay Area), teaches courses like this for Pro Tools now
too. If I had a spare $1600 lying around, I'd take it to
see if it was really worth a Grammy :-). 

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] user lowlatency kernel experience

2001-06-13 Thread John Lazzaro

 Joakim Verona writes:

 my experience was that it became very easy to hang the system. If csound
 got too many notes, it would render forever and never return, thus
 effectively hanging my machine.

Sfront compares the total number of bytes written to the D/A before and
after each write() to the soundcard (via the SNDCTL_DSP_GETOPTR ioctl),
and uses this as a heuristic to detect if the write() blocked or not.
One would think a simple select() would be a better way to determine
this, but I can't recall if I didn't use select() purposely, or if
there was a bug in a driver or some secondary issue. 
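
Roughly, the heuristic looks like this in C -- a simplified sketch,
and whether one buffer's worth is exactly the right threshold is a
judgment call (the real bookkeeping in sfront's linux.c is more
involved):

#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* If the soundcard's output byte counter barely advances while we sit
   inside write(), the write() returned without blocking, i.e. we are
   only just keeping up with the D/A converter.                        */

static int write_blocked(int dsp_fd, const void *buf, size_t len)
{
  struct count_info before, after;

  ioctl(dsp_fd, SNDCTL_DSP_GETOPTR, &before);
  write(dsp_fd, buf, len);
  ioctl(dsp_fd, SNDCTL_DSP_GETOPTR, &after);

  /* If at least a buffer's worth of audio played out during the
     write(), we must have slept waiting for space.               */

  return ((after.bytes - before.bytes) >= (int) len);
}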

At any rate, if it notices that too long of a time period has passed
without a write() block, it lets go of SCHED_FIFO, under the assumption
that it is computing just fast enough so that a block doesn't happen for
an extended period of time, thereby locking up the screen to the user.

In addition, a watchdog timer is set up to catch SAOL infinite loops
(the write() check above would not suffice in this case), and the
priority level (sched_get_priority_max(SCHED_FIFO) - 1) is chosen so
that an emergency application can use sched_get_priority_max(SCHED_FIFO)
[although I agree with Benno that even sched_get_priority_max(SCHED_FIFO)
- 1 is too high].

However, it's unclear to me that, even with all of these precautions,
running sfront (and by sfront in this email, I always mean the sa.c
file sfront creates) with another SCHED_FIFO application is a safe
thing to do -- I never tested it in this way, since there has never
been a second application people have asked me to test. I would
strongly recommend not using a production machine if you try to use
sfront with another SCHED_FIFO application, back up your disks, etc --
the worst could happen, I almost lost my disk contents a few times
when I was developing the linux driver for sfront, although since we
keep all important files on our NetApp boxes here, there's little
long-term risk from losing a local disk (just the hassle of a reinstall).

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] CSL-0.1.2 Release

2001-06-11 Thread John Lazzaro


 John Lazzaro writes

 
I have the Audio and MIDI on Mac OS X document sitting in my list
of things to read stack, but its a pretty tall stack these days --
have any LAD-folks considered the pros and cons of simply adopting
this spec for Linux, and doing a compatible implementation? If there


 Paul Davis [EMAIL PROTECTED] writes

 I did. But:

1) its playback only (!!!)

3) there's no support for data sharing between applications

I see ... clearly not suitable for LAD's needs ...

--jl