RE: [linux-audio-dev] Hard-drives and soundcard support

2001-12-19 Thread STEFFL, ERIK *Internet* (SBCSI)

 -Original Message-
 From: Wa Ditti [mailto:[EMAIL PROTECTED]]
...
 is an obstacle for some PIII motherboard BIOSes, which can only handle up
 to 32 Gigs. I don't know if there's a way to go straight to the operating
 system without the BIOS being involved; if there is, I'd like to know
 about it, as I have 16 Gigs of storage languishing in my machine.

  You can disable the HD in the BIOS, but then you cannot boot from it. So
you might want to get one more HD that the BIOS does recognize (a smaller
one; I guess you can get those very cheaply) and use the bigger HD as a
non-boot drive.
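
  For illustration only (device name and mount point below are made up), the
big drive then just gets mounted as a plain data disk from /etc/fstab:

    # /etc/fstab entry: big, BIOS-unfriendly drive used purely for data
    /dev/hdb1   /data   ext2   defaults   0 2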

  I had the same problem with a 60GB Maxtor (but they said it straight away:
if you have problems, make the disk appear smaller). Luckily I still had the
original 2GB HD, which I used for booting.

erik



RE: [linux-audio-dev] Still I cannot understand why...

2001-12-19 Thread STEFFL, ERIK *Internet* (SBCSI)

 -Original Message-
 From: Ivica Bukvic [mailto:[EMAIL PROTECTED]]
 
  that's very shortsighted. also selfish (in contrast to the ideas of
  sharing that are behind linux).

 I do not see it being that way, since there is no way that I could, for
 instance, write a good Alsa port for crapple architecture when I do not
 even own one. I see nothing selfish or shortsighted about that, nor do I
 think it contrasts with the idea behind Linux. Linux never stood for "a
 selected few should write a driver for everything and everyone, while the
 rest of us should then just use it and complain about how the
 implementation sucked." It stands for "here's my code that works for me;
 hack away and port it so that it works for you and others who use an
 architecture like yours, and while doing so, ask them to help you, just as
 people who have an architecture like mine have helped me."

  that's not relevant to what I wrote. I don't think I wrote (or implied)
that you should write a good Alsa port for crapple, or that linux stands for
"a selected few should write a driver for everything."

  so I am quite confused about why you have chosen to answer in this way.
Please think instead about how other systems are important for linux; I am
pretty sure you can come up with the reasons easily.

erik



Re: [linux-audio-dev] Introducing DMIDI

2001-12-19 Thread Dominique Fober

There are some interesting ideas in these works and I would like to add a
contribution to the discussion:
from my point of view, preserving the scheduling of the transmitted events
with maximum accuracy is important, as this scheduling is part of the musical
information itself. However, unless the protocol is intended to run on a
fast, dedicated local network, the transport latency will introduce a
significant time distortion. Therefore, a mechanism to compensate for the
latency variation seems necessary to me.
Another point is the efficient use of the transmitted packets: sending one
packet per event is probably not the best solution. In that case, because of
the overhead of the underlying protocols, the useful information may amount
to less than 10% of the packet size. Moreover, hardware layers such as
Ethernet often require a minimum packet size to operate correctly.
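
A minimal sketch in C of that kind of bundling (all names and field sizes
here are hypothetical, just to illustrate amortising the fixed per-packet
overhead over several timestamped events):

    /* hypothetical layout: one UDP packet carrying several MIDI events */
    struct bundled_event {
        unsigned short delta_ms;     /* scheduling offset from the bundle time */
        unsigned char  len;          /* 1..3 bytes for a channel message       */
        unsigned char  data[3];      /* the MIDI message itself                */
    };

    struct midi_bundle {
        unsigned int   timestamp;    /* reference time for the whole bundle    */
        unsigned char  node, device; /* addressing: which node and device      */
        unsigned char  count;        /* number of events that follow           */
        struct bundled_event events[32];
    };
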
There are solutions to these problems: I recently presented a paper at
WedelMusic 2001 which takes account of efficiency, scheduling and clock skew.
Maybe combining the different approaches would result in an improved solution.
You can temporarily get the paper at http://www.grame.fr/~fober/RTESP-Wedel.pdf
-df

--
Dominique Fober   [EMAIL PROTECTED]
--
GRAME - Centre National de Creation Musicale -
9 rue du Garet  69001 Lyon France
tel:+33 (0)4 720 737 06 fax:+33 (0)4 720 737 01
http://www.grame.fr



 But from what I understand of RTP the same
 thing would/could happen if the protocols are switched. 

Yes, using RTP isn't about getting QoS for free -- 

BTW, some LAD-folk may not be aware that sfront networking:

http://www.cs.berkeley.edu/~lazzaro/nmp/index.html

uses RTP for MIDI. We presented our Internet-Draft at IETF
52 in Salt Lake City a few weeks ago:

http://www.ietf.org/internet-drafts/draft-lazzaro-avt-mwpp-midi-nmp-00.txt

and it was well received -- the odds are good that it will
become a working-group item for AVT. The bulk of this I-D
describes how to use the information in RTP to handle lost
and late packets gracefully in the context of network
musical performance using MIDI ...

   --jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-






Re: [linux-audio-dev] Introducing DMIDI

2001-12-19 Thread Dominique Fober

 from my point of view, preserving the scheduling of the transmitted events
 with maximum accuracy is important, as this scheduling is part of the
 musical information itself. However, unless the protocol is intended to run
 on a fast, dedicated local network, the transport latency will introduce a
 significant time distortion. Therefore, a mechanism to compensate for the
 latency variation seems necessary to me.

RTP timestamps are used for this.

Sure, it may be used for this, but it isn't: MIDI events are played at
reception time. In MWPP, for example, the timestamp is only used to
determine whether a packet is too late or not.


 Another point is the efficient use of the transmitted packets: sending one
 packet per event is probably not the best solution. In that case, because
 of the overhead of the underlying protocols, the useful information may
 amount to less than 10% of the packet size. Moreover, hardware layers such
 as Ethernet often require a minimum packet size to operate correctly.

I think a protocol for realtime MIDI over UDP will always have significant
protocol overhead; I don't see this as a problem, however.

Considering a 44-byte overhead (IP + UDP) plus the 4 DMIDI header bytes used
to address a specific node and device, sending a full MIDI data flow (about
1000 3-byte events per second) requires nearly 400 kbit/s, while the MIDI
wire rate is 31.25 kbit/s. That's not a problem as long as the corresponding
bandwidth is available to you. But if you plan to address several devices on
the same node (for example using a multiport interface), you should be able
to provide each device with an equivalent full MIDI data flow, and then the
problem grows seriously with the number of devices.
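
The arithmetic behind that figure, using the numbers above, as a small
illustrative C snippet (the 44-byte IP + UDP overhead is taken as given):

    #include <stdio.h>

    int main(void)
    {
        const double overhead = 44.0;   /* IP + UDP bytes, as quoted above */
        const double header   = 4.0;    /* DMIDI node/device header        */
        const double payload  = 3.0;    /* one 3-byte MIDI event           */
        const double events   = 1000.0; /* events per second               */

        double kbits = (overhead + header + payload) * 8.0 * events / 1000.0;
        printf("%.0f kbit/s on the wire vs. 31.25 kbit/s raw MIDI\n", kbits);
        return 0;                       /* prints 408 kbit/s               */
    }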

--df

--
Dominique Fober   [EMAIL PROTECTED]
--
GRAME - Centre National de Creation Musicale -
9 rue du Garet  69001 Lyon France
tel:+33 (0)4 720 737 06 fax:+33 (0)4 720 737 01
http://www.grame.fr





Re: [linux-audio-dev] So, what to do about it? (was: Still I cannot understand why...)

2001-12-19 Thread Nick Bailey

Steve Harris wrote:


 Compressors and reverbs are hard. I plan to take a shot at a compressor
 (after I have a gate that works smoothly - for practice), but (classic)
 reverbs are a whole area in themselves that I don't really want to get
 into. Juhana (who wrote gverb) was working on a new version, but I haven't
 heard from him for a while.


Amen to reverbs being hard!

I wrote a compressor for Sox.  It wasn't that difficult, but I hesitate even
to mention it in present company in case you laugh at me...

http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/sox/sox/compand.c?rev=1.10&content-type=text/vnd.viewcvs-markup
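
For anyone curious, the core of such a compressor really is small; a minimal,
self-contained sketch of a basic feed-forward design (this is not the
compand.c code, just the general idea, with made-up parameter names):

    #include <math.h>

    typedef struct {
        float attack, release;   /* envelope smoothing coefficients, 0..1 */
        float threshold;         /* linear amplitude threshold            */
        float ratio;             /* e.g. 4.0 for 4:1 compression          */
        float env;               /* running envelope estimate             */
    } compressor;

    static float compress_sample(compressor *c, float x)
    {
        float level = fabsf(x);
        /* simple attack/release envelope follower */
        float coef = (level > c->env) ? c->attack : c->release;
        c->env += coef * (level - c->env);

        /* reduce gain above the threshold, by the given ratio */
        float gain = 1.0f;
        if (c->env > c->threshold)
            gain = powf(c->env / c->threshold, 1.0f / c->ratio - 1.0f);
        return x * gain;
    }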

Nick/





Re: [linux-audio-dev] Still I cannot understand why...

2001-12-19 Thread Jussi Laako

Dan Hollis wrote:
 
  I don't have any problems with my Delta 1010 using OSS. I've been using 
 
 And in what ways are you using it with OSS?

Recording 2 or 8 channels of audio at a 44.1 or 96 kHz sample rate and 16- or
24-bit resolution. Usually playing just 2 channels via the DAC or S/PDIF.
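
For reference, the OSS side of such a setup is just a few ioctls; a minimal
sketch, assuming the stereo 16-bit case and the default /dev/dsp device
(real device names and 24-bit formats vary by driver):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int open_capture(void)
    {
        int fd = open("/dev/dsp", O_RDONLY);
        int fmt = AFMT_S16_LE, channels = 2, rate = 44100;

        if (fd < 0)
            return -1;
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);        /* sample format     */
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels); /* channel count     */
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);        /* sample rate in Hz */
        return fd;                                 /* then read() audio */
    }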

I still use Windows for making music because there is no suitable software
available.


 - Jussi Laako

-- 
PGP key fingerprint: 161D 6FED 6A92 39E2 EB5B  39DD A4DE 63EB C216 1E4B
Available at PGP keyservers



RE: [linux-audio-dev] Hard-drives and soundcard support

2001-12-19 Thread Joe Pfeiffer

I haven't run into the problem you're describing, but would the
standard ``put a small boot partition at the beginning of your disk''
fix work?
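
For illustration only (device names are made up), that fix amounts to a
layout like:

    # small /boot at the very start of the disk, within what the BIOS can see
    /dev/hda1   /boot   ext2   defaults   0 2
    # everything else, beyond the BIOS limit, handled directly by the kernel
    /dev/hda2   /       ext2   defaults   0 1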
-- 
Joseph J. Pfeiffer, Jr., Ph.D.   Phone -- (505) 646-1605
Department of Computer Science   FAX   -- (505) 646-1002
New Mexico State University  http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair:  http://www.nmsu.edu/~scifair



Re: [linux-audio-dev] Introducing DMIDI

2001-12-19 Thread martijn sipkema

 RTP timestamps are used for this.

 Sure, it may be used for this, but it isn't: MIDI events are played at
reception time. In MWPP, for example, the timestamp is only used to
determine whether a packet is too late or not.

oh, well, it should be used for that; it probably doesn't need to be
specified in the protocol. As long as there is a timestamp, the receiving
end can use it to avoid jitter.
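
A minimal sketch of what the receiving end could do with the timestamp
(names and the fixed playout delay below are made up for illustration):

    /* schedule each event at a fixed offset after its sender timestamp,
       instead of playing it at whatever moment it happens to arrive */
    typedef struct {
        double offset;        /* local_clock - sender_timestamp, from 1st packet */
        double playout_delay; /* fixed latency budget, e.g. 0.010 seconds        */
        int    have_offset;
    } playout_state;

    /* returns the local time at which the event should be played,
       or a negative value if it arrived too late */
    static double playout_time(playout_state *s, double timestamp, double now)
    {
        if (!s->have_offset) {
            s->offset = now - timestamp;
            s->have_offset = 1;
        }
        double when = timestamp + s->offset + s->playout_delay;
        return (when < now) ? -1.0 : when;
    }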

 I think a protocol for realtime MIDI over UDP will always have significant
 protocol overhead; I don't see this as a problem, however.

 Considering a 44-byte overhead (IP + UDP) plus the 4 DMIDI header bytes used
to address a specific node and device, sending a full MIDI data flow (about
1000 3-byte events per second) requires nearly 400 kbit/s, while the MIDI
wire rate is 31.25 kbit/s. That's not a problem as long as the corresponding
bandwidth is available to you. But if you plan to address several devices on
the same node (for example using a multiport interface), you should be able
to provide each device with an equivalent full MIDI data flow, and then the
problem grows seriously with the number of devices.

Normally the full MIDI bandwidth isn't used, except when doing a sysex dump.
If events are to be scheduled at exactly the same time, they could probably
go in the same packet; if not, a single RTP timestamp cannot cover them all.
When transmitting in realtime one doesn't know the events ahead of time, so
they cannot be combined anyway. For very low bandwidth links, compression
could be used on top of the protocol.

--martijn




Re: [linux-audio-dev] Still I cannot understand why...

2001-12-19 Thread David Gerard Matthews Jr.

Jussi Laako wrote:

 I still use Windows for making music because there is no suitable software
 available.
 
  - Jussi Laako

Obviously not, if you insist on using OSS.
-dgm



Re: [linux-audio-dev] So, what to do about it? (was: Still I cannot understand why...)

2001-12-19 Thread Miha Tomi

Hello!

On Wed, 19 Dec 2001, Nick Bailey wrote:
  Compressors and reverbs are hard. I plan to take a shot at a compressor
  (after I have a gate that works smoothly - for practice), but (classic)
  reverbs are a whole area in themselves that I don't really want to get
  into. Juhana (who wrote gverb) was working on a new version, but I haven't
  heard from him for a while.
 Amen to reverbs being hard!

What reverb effect is included in snd 4.1? That one sounds quite nice...

Take care,

Miha...

 - Miha Tomi --- C. na postajo 55 -- SI-1351 Brezovica pri Lj. --- SLOVENIA -




Re: [linux-audio-dev] Still I cannot understand why...

2001-12-19 Thread Paul Davis

The only latency-sensitive program I've made is rtEq, where the biggest
latency source is the FFT size (usually 75% overlap and an 8192-point FFT).

i don't believe it's common practice to use FFTs for real-time EQ. it's
perfectly possible to use delay lines to accomplish high quality EQ with
much lower latency than an FFT. that's certainly what 90%+ of all
VST/DirectX EQ plugins use (there was a discussion about this on
vst-plugins recently).
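
for illustration, a single peaking-EQ section of the kind such plugins are
typically built from; the coefficient formulas below follow the widely
circulated RBJ "audio EQ cookbook", the rest is just a sketch:

    #include <math.h>

    /* one peaking-EQ biquad: boost/cut gain_db around f0, bandwidth set by q */
    typedef struct { float b0, b1, b2, a1, a2, x1, x2, y1, y2; } biquad;

    static void peak_eq_init(biquad *f, float fs, float f0, float gain_db, float q)
    {
        float A     = powf(10.0f, gain_db / 40.0f);
        float w0    = 2.0f * 3.14159265f * f0 / fs;
        float alpha = sinf(w0) / (2.0f * q);
        float a0    = 1.0f + alpha / A;

        f->b0 = (1.0f + alpha * A) / a0;
        f->b1 = -2.0f * cosf(w0)   / a0;
        f->b2 = (1.0f - alpha * A) / a0;
        f->a1 = -2.0f * cosf(w0)   / a0;
        f->a2 = (1.0f - alpha / A) / a0;
        f->x1 = f->x2 = f->y1 = f->y2 = 0.0f;
    }

    static float peak_eq_run(biquad *f, float x)
    {
        float y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
                            - f->a1 * f->y1 - f->a2 * f->y2;
        f->x2 = f->x1; f->x1 = x;
        f->y2 = f->y1; f->y1 = y;
        return y;
    }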

--p



RE: [linux-audio-dev] Still I cannot understand why...

2001-12-19 Thread Stuart Allie

Hi All,

Just curious, but could somebody explain *how* delay lines can be used to
implement EQ? I have a strong maths background, but no DSP experience, if
that helps.

Stuart

 -Original Message-
 From: Paul Davis [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, 20 December 2001 8:51 am
 To: [EMAIL PROTECTED]
 Subject: Re: [linux-audio-dev] Still I cannot understand why... 
 
 
 The only latency-sensitive program I've made is rtEq, where the biggest
 latency source is the FFT size (usually 75% overlap and an 8192-point FFT).
 
 i don't believe it's common practice to use FFTs for real-time EQ. it's
 perfectly possible to use delay lines to accomplish high quality EQ with
 much lower latency than an FFT. that's certainly what 90%+ of all
 VST/DirectX EQ plugins use (there was a discussion about this on
 vst-plugins recently).
 
 --p
 

--
Dr Stuart Allie
Technical Programmer
Resource Analysis Group
Consulting Division
Hydro Electric Corporation 
4 Elizabeth St, Hobart, Tasmania, 7004
Ph (03) 6230 5760 
Fax (03) 6230 5363 
Email : [EMAIL PROTECTED] 
--
 




Re: [linux-audio-dev] Still I cannot understand why...

2001-12-19 Thread Paul Davis

Just curious, but could somebody explain *how* delay lines can be used to
implement EQ? I have a strong maths background, but no DSP experience, if
that helps.

i'm not a dsp programmer, but it's really quite simple. if you mix in a copy
of the signal delayed by just 1 sample, and attenuate both the current and
the delayed sample by 0.5:

y[n] = (0.5 * x[n]) + (0.5 * x[n-1])

you've just averaged the two values, which effectively smooths out
jags in the input signal. you can vary the attenuation coefficients
and the delay length and the number of delay lines to alter the kind
of smoothing, which is of course directly equivalent to filtering
certain frequencies out of the signal. 
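
as a minimal sketch in C, that two-point average is just a one-sample delay
line (the function name is made up):

    /* two-point moving average: y[n] = 0.5*x[n] + 0.5*x[n-1] */
    static float smooth(float x, float *prev)
    {
        float y = 0.5f * x + 0.5f * (*prev); /* average current and delayed sample */
        *prev = x;                           /* this is the one-sample delay line  */
        return y;
    }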

the actual details are extremely hairy though - there is a lot of
sophisticated math that goes into really good filter design, plus a lot of
subjective, non-double-blind-tested opinion :)

--p