>I recently showed a Mac user interested in using Linux
>for live stage work (no gui needed) the ALSA site so they
>could get some idea of progress... they just laughed and
>said there was _nothing_ there to help them evaluate whether
>ALSA (therefore Linux) was going to be useful to them... well,
>Hi,
>
>My ALSA sound driver uses an MPU401 for MIDI support,
>and the OSS emulation module snd-seq-oss works well
>with the snd-mpu401-uart module. However, there are no
>native ALSA sequencer devices:
>
>[root@wittsend src]# cat /proc/asound/seq/clients
>Client info
> cur clients : 3
> peak c
>
> Finally, if you're going to turn everything over to the Wiki
>kiddies then count me out! I believe in quality over quantity.
>
isn't there a way to limit wiki access? what i like about wiki is the
ease of updating it every time something occurs to me. no CVS. it
"feels right". just my
>On Tue, 19 Feb 2002, Patrick Shirkey wrote:
>
>> I have just uploaded a template for the idea PD had to make a complete
>> howto for each card and link it to the sound card matrix or something
>> similar.
>[...]
>> http://www.boosthardware.com/LAU/alsa/template2.html
looks OK. a few nitpicks:
>i don't know, is my question off-topic?
it's more of a general audio programming question than something
ALSA-specific.
>Where can i get some help?
>Would it help to mail the sources?
>Is it an obvious mistake i make?
>I really tried to describe the problem in detail, are there
>any important p
>Ok I can give the command to load the soundcore module instead.
>
>e.g. modprobe soundcore
modprobe -n soundcore will test for its existence without doing anything.
>Well it is the most popular unix editor. It is just an example for new
>users who may not even know how to edit a file.
pointing
>I disagree. I just went through the process of teaching a friend how to use
>linux and he needed every piece of information I could give him. It is my
>opinion that repetition and hand-holding is the key to success for all
>newbies. (Based on teaching ESL for the past two years and teaching mys
>There was some cruft at the end of the file so I may have fixed it now.
yep, that worked.
--p
>I wanted to use two RME hammerfalls to record 48 channels of audio via adat,
>do some signal processing and play them back again. But I experienced a very
>poor performance of my computer (dual pentium III 1.3GHz, 512MB RAM). It
>wasn't even able to record 48 channels without xruns.
>
>I modified
>if I set the latency timer to 0x30, I am able to stream 48 channels of data
>to a speed-optimized raid system. However this is not the point. The arecord
>command was just an example, the computer has severe problems to record and
>play data with 48 channels in realtime if I don't increase the la
>if I set the latency timer to 0x30, I am able to stream 48 channels of data
>to a speed-optimized raid system. However this is not the point. The arecord
i would also mention that you may have potential problems doing this
with word-clock-synced interfaces. it's likely that they will generate
int
>I do not mean to be hammering this issue into the ground, but Linux OS
>as an audio workstation solution has been around for 3 years now, yet
>the only soundcard I am aware of that is capable of doing hardware
>mixing is SBLive!, and even that one is due to fact that Creative had
>their hands in
>So what do think I should use instead? I need all 48 channels in perfect
>sync (sample-accurate). Do you think that's impossible? The CPU-load of my
I think it might be very difficult if it involves two separate
cards. It would be easy on one card, for sure.
>application is (with signal process
i just got info on the internals of this interface
you're all gonna love it! even if it is expensive. peak and rms meters
in hardware, full matrix mixing with a range of -inf .. +2dB, 2 MIDI
ports. it's going to be great, and it's hard to see why anyone would want to
use anything else for serious audio,
so, the matrix mixer on the hammerfall-dsp comes with 1456 independent
controls. each one represents the volume for routing an input (h/w
input or playback stream) to an output. there are 28 h/w inputs, 28
playback streams and 28 outputs.
it seems unwise to simply map this straight to the control
>I agree, my "flawless" statement was certainly overrated. Yet, the
>question remains: what am I to do as a musician needing to utilize my
>portable laptop while the apps/software I currently use get ported to
>the JACK architecture (if they get ported at all)?
1a) Get Abramo to post (or point t
this:
http://www.linuxpower.org/display.php?id=216
is an excellent article on "mistakes" that SGI made developing their
video APIs. much of what it says seems deeply relevant to audio as
well. from my perspective, it pretty much confirms the JACK approach,
but even if you don't see it that way
Hi,
before we begin, I wrote the Hammerfall driver, just so you know :)
>we're trying to get an rme9652 soundcard to work using the ALSA oss emulation
>layer and we're stuck at a strange place. Any help is appreciated. In
>particular: Can anyone confirm that the oss emulation works with the r
>Another currently available option is to use the pcm_shm plugin which
>outputs audio to a separate aserver process (see alsa-lib/aserver). This
>plugin does work, but like pcm_share, hasn't been widely used yet. But
>it's the only working ALSA alternative for concurrent pcm access at the
>moment.
>I have a program that reads from a raw midi device. MIDI uses some
>simple compression: assuming the data flow is correct, data should be
>interpreted as parameters to the previous command if it is not a new
>command. But when I open a raw midi stream
>I can get in to the
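That behaviour is MIDI "running status". For anyone unfamiliar with it, here is a
minimal, hypothetical sketch (plain C, not tied to ALSA) of how a reader usually
handles it: a status byte (>= 0x80) starts a new message, and subsequent data
bytes are treated as parameters of the last channel status byte seen.

#include <stdio.h>
#include <stddef.h>

static unsigned char running_status = 0;

void handle_midi_byte(unsigned char byte)
{
    if (byte >= 0xF8) {
        /* realtime (clock, active sensing, ...): may appear anywhere,
         * does not affect running status */
        return;
    }
    if (byte >= 0xF0) {
        running_status = 0;                    /* system common cancels it */
        printf("system common 0x%02X\n", byte);
    } else if (byte & 0x80) {
        running_status = byte;                 /* new channel status */
        printf("status 0x%02X\n", byte);
    } else if (running_status) {
        /* data byte: a parameter of the previous channel command */
        printf("data 0x%02X (status 0x%02X)\n", byte, running_status);
    }
}

int main(void)
{
    /* 0x90 0x3C 0x40 0x3E 0x40: note-on C4, then note-on D4 via running status */
    unsigned char demo[] = { 0x90, 0x3C, 0x40, 0x3E, 0x40 };
    for (size_t i = 0; i < sizeof demo; i++)
        handle_midi_byte(demo[i]);
    return 0;
}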
>mixing is available). Your best bet would be to go with something like
>ESD (for Gnome environments) or arts (for KDE environments).
GNOME has adopted artsd as well. For better or worse :)
--p
>Anybody please correct or clarify anything I said. I've only begun
>playing with ALSA, so there may be other (better) options I'm not aware of.
better options were described in the mail flurry of 2 days ago, on
using "share" and "shm" devices.
--p
>This is the configuration:
>
>Roland MCR-8->midi-device->alsa-seq->user_code->alsa-seq->raw_midi
this is a crazy, weird setup! but i'll try to just let that be. i
suspect you don't mean "raw MIDI" the way it's meant in ALSA.
>So how far back should I need to reset? The communication roland and
>
>Yes! And the device that is using running status is the alsa rawmidi device.
What makes you think that? AFAIK, the raw MIDI device code does no
parsing of MIDI data at all ...
>It's not the roland device since I
>get them correct to the user_code. And how do I force the rawmidi device
>to stop sen
>get them correct to the user_code. And how do I force the rawmidi device
>to stop sending running status,
BTW, are you talking about running status, or active sensing?
--p
>I don't know how to be more specific. I have a program that listens to
>a raw midi stream generated by alsa. But I'll try.
You have a program that uses the sequencer to read MIDI data. That's
totally different from a program that uses the raw MIDI interface to
read MIDI data. This is very, very impo
>http://www.boosthardware.com/LAU/alsa/index5-pd.html
there's a bug, because all the main text is invisible in netscape 4.
i've tried to fix it, but i can't deal with automatically generated
HTML. the indentation is unreadable - nested tables where inner tables
are indented less than outer table
>It is the api for midi programming. I don't have to think about
>active sensing, streaming or anything on the low level. And I get
>tools for routing the data as I like. I thought that you should know
>the concept. Almost everything is in realtime. But some mappings are
>done in the time domain as
>How do you solve the problem with sharing hardware then?
I don't. I intend to wait for (and contribute to, if I can) what I
consider the correct solution:
a) sequencer genuinely split into:
1) a router/multiplexer
2) a scheduler
b) sequencer moves into user space
As
>How can we get the same performance in userspace? For me it is the
>processor/OS scheduler that sets the limit there, and in the kernel we get
>the hardware as the limit.
there are two things done by the sequencer:
a) routing/multiplexing
this is mostly a matter of code des
Frank -
thanks for writing. I don't want to suggest for one moment that there
is any "blame" to be attached to the current sequencer design. None of
us knew what we know now, and as you point out the hardware state of
affairs has changed considerably.
>Pentium 60 MHz (though faster iron was ava
>I also can't see the tabled text in netscape 4.77, mozilla renders it fine.
i used mozilla to take a look again. i had originally suggested making
the font of the main "text" table larger. i still think it is a good idea
- this text is just as critical as everything else on the page, and at
its cur
>ok, don't forget to put CAPs on the 2.5 wishlist :-)
they are already implemented and maintained. it's just that (almost)
nobody turns them on.
>> OTOH, JACK faces this too, and we "get around it" by providing
>> adequate performance (more than adequate in some cases) without
>> SCHED_FIFO, and
>Hello
>Is there any test program which I can use to see if the alsa drivers support
>mmap(memory map) audio out.
>Someone is reporting problems with quake3 and alsa.
quake3 had an alsa output module written for precisely this
reason. AFAIK, there is no reason to use OSS.
BTW, because of quake's
>I'm eying the maudio USB Audio Duo. It seems an ideal soundcard for my
>needs, is locally available, and relatively inexpensive, and if I ever
>get a laptop, it'll work with that too. I understand usb is not
>supported by alsa yet. Is this still true? Are there plans for the
>future, or is th
>>Here's the latest.
>>
>>http://www.boosthardware.com/LAU/alsa/index5-pd.html
>
>I liked the version which had the explanation of the
>project better. What were its problems? I.e., why was it taken off?
i didn't "take it off". i tried to rewrite it so that it didn't sound
as if we were just star
I thought this was an interesting message, of some relevance
for the discussions we've had over ALSA and JACK etc. I would also
recommend
http://www.linuxpower.org/display.php?id=216
which is a really good and insightful article on pitfalls with the
development of SGI's dmSDK API (mostly vid
>Just a note to say I have successfully integrated
>MesaGL support into my realtime FX looping software
>(aka 'Techno Primitives'). A project page is coming
>soon. Currently supported is:
>
>-Full duplex low-latency I/O using JACK
Awesome! The rest of the list of features is just as fantastic. I
1) just a quick note to point out that whether you know it or not, the
email program you are using is sending out copies of your mail in both
plain text and HTML formats. increasingly on the net, there are
filters being put in place that silently dump HTML-formatted
email. some mailing lists will
>Hi, I'm developing an 8 channel recorder based on alsa0.9.0beta10 and a
>Terratec EWS88mt sound card.
>I'm now using a 2.4.18 kernel with the Andrew Morton patches applied. I
>really want to be able to record/playback at the full 96k 24bits on all
>8 channels simultaneously, but I need absolutely
>Is there any documentation on the timer api that goes beyond what's on the
>alsa-project pages? We're trying to sync Video and Audio, and all we need is
>a way to query the current time relative to some arbitrary start point.
i don't believe that the timer API has much to do with this. its mor
>You're right. But it would be really nice to have a continuous timer
>source in some resolution (microseconds?) available for all platforms
>to satisfy synchronization requirements.
it's often called "UST" (Unadjusted System Time). It's part of the POSIX
CLOCK_MONOTONIC specification. this is su
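For what it's worth, a minimal sketch of reading such a clock through the POSIX
interface (assuming clock_gettime() and CLOCK_MONOTONIC are available on the
target platform; older glibc needs -lrt):

#include <stdio.h>
#include <time.h>

/* return a monotonically increasing timestamp in microseconds,
 * unaffected by NTP / wall-clock adjustments */
static long long monotonic_usecs(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

int main(void)
{
    long long start = monotonic_usecs();
    /* ... do some work ... */
    printf("elapsed: %lld us\n", monotonic_usecs() - start);
    return 0;
}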
>The delay value is calculated from getting the amount of samples still in
>the sound buffer.
>See the "ao_alsa_delay" function is
>"xine-lib/src/audio_out/audio_alsa_out.c"
>
>So if the delay is 100 ms.
>You have to place samples in the audio buffer 100ms before you wish to hear
>them.
Not quite
>Our app does not use poll or anything like it. It does not get woken up by
>the audio device.
actually it does. but not woken up from poll, just from snd_pcm_write.
>Look at our source code.
i did. i downloaded xine a few weeks ago.
xine won't work the way a user expects if anyone tried to ge
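For readers following this thread, here is a rough sketch (hypothetical code, not
xine's) of the delay-based timing being discussed: snd_pcm_delay() reports how
many frames are still queued ahead of the next write, so the first newly written
sample becomes audible roughly delay/rate seconds later, and a blocking
snd_pcm_writei() is where the application actually sleeps until the device has
room.

#include <stdio.h>
#include <alsa/asoundlib.h>

/* write one block and report when its first sample will be audible;
 * 'pcm' is an open, prepared playback handle in blocking mode */
static void write_block(snd_pcm_t *pcm, const short *buf,
                        snd_pcm_uframes_t frames, unsigned int rate)
{
    snd_pcm_sframes_t delay;

    if (snd_pcm_delay(pcm, &delay) == 0)
        printf("first new sample audible in ~%.1f ms\n",
               1000.0 * (double)delay / (double)rate);

    /* the blocking write is where the application sleeps until the
     * driver has room - i.e. it *is* woken up by the audio device */
    if (snd_pcm_writei(pcm, buf, frames) < 0)
        snd_pcm_prepare(pcm);      /* crude xrun recovery */
}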
well, since nobody responded to a 100% open question on how to support
the 1500 or so mixer controls that exist on the H-DSP, i'll offer up
an explicit solution and see if anyone can comment on that.
since no generic ALSA program is going to be able to usefully present
all those mixer controls, a
>(gdb) bt
>#0 0x4015e8d7 in snd_pcm_route_convert1_one () from /usr/lib/libasound.so.2
>#1 0x40cae5dc in ?? ()
>#2 0x012de908 in ?? ()
>Error accessing memory address 0xe8c1018b: No such process.
>(gdb)
>
>
>When reading plughw after requesting 24-bit format. I expect the output
>should be 32-b
so, you all know jack, i'm sure.
when jack is used with a "plug" PCM device, we end up with duplicated
channel data even on a 2 channel device. using mmap access, everything
we write to the channel area for channel 0 shows up on both channel 0
and channel 1. i don't understand how this could happ
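For context, the mmap access pattern in question looks roughly like this (a
sketch, not the actual jack backend). The per-channel addresses come from the
addr/first/step fields of each snd_pcm_channel_area_t, so if two channel areas
end up describing the same storage you get exactly the duplication described
above.

#include <stdint.h>
#include <alsa/asoundlib.h>

/* write one chunk of a 2-channel S32 stream via mmap access;
 * 'pcm' is an open, prepared handle using mmap access */
static void mmap_write_chunk(snd_pcm_t *pcm, snd_pcm_uframes_t want)
{
    const snd_pcm_channel_area_t *areas;
    snd_pcm_uframes_t offset, frames = want;

    snd_pcm_avail_update(pcm);                      /* refresh hw pointer */
    snd_pcm_mmap_begin(pcm, &areas, &offset, &frames);

    for (unsigned int ch = 0; ch < 2; ch++) {
        /* addr/first/step (in bits) describe where channel 'ch' lives */
        char *base = (char *)areas[ch].addr + areas[ch].first / 8
                     + offset * (areas[ch].step / 8);
        for (snd_pcm_uframes_t f = 0; f < frames; f++) {
            int32_t *s = (int32_t *)(base + f * (areas[ch].step / 8));
            *s = 0;                 /* per-channel data goes here */
        }
    }
    snd_pcm_mmap_commit(pcm, offset, frames);
}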
>Is ALSA architecture ready for 24bit/192khz cards?
I believe it is. I can't think of anywhere in ALSA that sample rate
plays a major role in limiting resources or capabilities.
OTOH, I don't know if the kernel can handle this with the smallest
periods. 64 frames/interrupt translates to an inte
>Recently on this very list, Paul Barton-Davis proposed that one could
^
Paul Davis (op.net/~pbd/name.html)
>build a driver for a USB audio device entirely in user space. I think
>this is an interesting i
>I am getting the same with stock 2.4.18, DELL Inspiron laptop (Maestro
>3i), my test is:
>
>copy a cd image file from one partition to another, while trying to play
>an mp3 using mpg123.
>
>The mp3 skips all over the place.
since you almost certainly have IDE drivers, have you configured them
co
>etc. I imagine I would have to have a kernel module which would
>implement filtering of certain events and passing them to some place,
>like a pipe that my control program (perhaps perl) would read and
>perform corresponding actions that I would program it to do.
not to appear too snobbish, but
>Hi,
> I'm trying to compile a program that uses alsa-lib 0.9, and since just
>including alsa/asoundlib.h doesn't give the compiler a valid definition of
>snd_pcm_hw_params_t, I'm wondering what else I have to include for my
>program to compile.
it's an opaque type. you can only de
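Since the type is opaque, you only ever work with pointers to it, and alsa-lib
provides the allocation helpers. A minimal sketch (assuming the 0.9 API and the
"default" device):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;            /* opaque: only ever a pointer */

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);      /* stack allocation, no free needed */
    snd_pcm_hw_params_any(pcm, hw);     /* start from the full config space */
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params(pcm, hw);         /* install the chosen configuration */

    snd_pcm_close(pcm);
    return 0;
}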
>Just for your info, the Windows sound drivers for the Dell are just as bad.
>I propose that it is a hardware problem, and not an alsa problem.
if you read to the bottom, you'll notice that the problem is solved,
and it (was) a Linux kernel problem :)
--p
>i have done a strace run of rtcw and found some interesting things:
>
>somewhere rtcw opens the dsp device:
>open("/dev/dsp", O_RDWR)= 16
>but i can't find the corresponding
>close(16)
>
>these are the last lines of output (the process locks up here):
>
>write(2, "- CL_Shutdown -\n"
> a sequencer client ? as a userspace program ? How
>could it then get the (filtered) data back for the
>MIDI app into /dev/midi ? I didn't think there was a
>way for a userspace program to feed "incoming" data
>into /dev/midi, and therefore I thought that it would
>have to be a kernel module, not
>CORRECTION:
>Just retested with full hdparm (added -u setting) settings on
>2.4.19-pre3-ac+preempt, and no skipping with above activities. This
>seems pretty good.
>
>I could generate skips when switching from X to text console. Could not
>generate skip when cycling virtual desktops within X (at
prompted by phil kerr this morning, i started writing a tutorial on
using the ALSA Audio API (the term "PCM" is OK once you get into this
stuff, but ...). The following document represents its current state. I
would like to ask for feedback even though it is very incomplete. I
have not tested any o
>> (the term "PCM" is OK once you get into this
>> stuff, but ...).
>
> How about adding PCM to your Terminology section then?
Done.
>> The following document represents its current state. I
>> would like to ask for feedback even though it is very incomplete.
>
> What is the license
>1) The device "pcm.hw:0" seems to work fine, but when I use
>"pcm.plughw:0", I get "Broken pipe" from snd_pcm_writei. This happens
>even with stop_threshold set to UINT_MAX (i.e., this shouldn't
>be caused by underruns).
no, setting the stop threshold to this value just prevents the driver
from
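For reference, a sketch of the software-params calls involved (assuming an
already-configured playback handle): setting the stop threshold to the boundary
value keeps the stream running through underruns instead of stopping it with
-EPIPE, which is a different thing from preventing the underruns themselves.

#include <alsa/asoundlib.h>

/* disable the automatic stop on underrun for 'pcm' (hw params already set) */
static int disable_auto_stop(snd_pcm_t *pcm)
{
    snd_pcm_sw_params_t *sw;
    snd_pcm_uframes_t boundary;

    snd_pcm_sw_params_alloca(&sw);
    snd_pcm_sw_params_current(pcm, sw);
    snd_pcm_sw_params_get_boundary(sw, &boundary);
    snd_pcm_sw_params_set_stop_threshold(pcm, sw, boundary);
    return snd_pcm_sw_params(pcm, sw);
}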
>
>Oh -- thanks a lot for the quick answer! But then I guess I'll
>rephrase the question: Why would I get xruns with "plughw", but not
>with "hw"?
2 possibilities readily spring to mind:
1) the plughw device is causing more code to be run, possibly
causing timing issues
2) a bug in the
>
>Actually, I think I found the problem. I was using the maximum number
>of playback and captures channels reported by the device. For
>"plughw", this number is apparently 1, rather than the actual
>number the hardware supports! I guess what I have to do is open the
>"hw" device first to get
>Well, if I am running capture and playback channels in sync, I certainly
>don't want to restart the capture channel (and destroy my recording) in the
>case when the playback buffer under-runs. Keep in mind I have a very short
>buffer for playback (to keep latency down) but a long buffer for captu
>I'm trying to compile a program written in C++ using ALSA API functions
>related to the plugin interface (v.05.11). But the linker does not find
>these functions (such as snd_pcm_plugin_params). I don't have any
>problems if I use C instead of C++, but this time I have to use C++.
>Doe
>I want to be able to detect when sample rate changes during runtime.
>
>An example scenario (by coincidence very similar to my application): a
>multi-channel convolution engine runs on an RME9652 which operates at
>44.1 kHz as clock slave. However, in runtime, the user switches cables
>to the
> I would like to use perl, I guess, to record what an
>external device plays, and then be able to play it
>back.
>
>http://www.perl.com/CPAN-local/authors/id/S/SE/SETHJ/MIDI-Music-0.01.tar.gz
>
> needs SNDCTL_SEQ_RT_ENABLE so it can record MIDI
>events and calculates the delta times itself, I gue
>I am running a Hammerfall 9652 in ALSA 0.9beta11 (under Mandrake 8.1) and I am
>looking for information about how to route to the various output channels from
>the command line (say, with ecasound or aplay). Right now, sound goes to EVERY
>channel (8 ADAT channels, AND the coaxial/SPDIF stereo).
>> Hi!
>>
>> I tried to write some programs for sound processing recently. Most of it
>> went very well, but some questions about the aplay code (version 0.5.10),
please do not use ALSA 0.5.X anymore. it is old and no longer under
development. it has many design issues that are fixed in ALSA 0.
>Thanks to Maarten I've got involved in implementing ALSA support for
>Rosegarden 4. I've been browsing the ALSA API and patching together
>a driver layer to give us a general framework we can share between
>ALSA and aRts. The ALSA Instrument API looks promising but so far I
>can't find much o
>I am working on a problem where I want to extract the left and right
>channels from a sound file.
sox can do this from the command line.
>I am running into a problem where I can not
>tell if the samples are from the left only, or the right only channels.
>I have a
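If you do want to do it in code rather than with sox, the layout is simple:
interleaved stereo data is stored frame by frame as [left, right], so splitting
the channels is a stride-2 copy. A minimal sketch assuming 16-bit interleaved
samples:

#include <stddef.h>

/* split 'frames' frames of interleaved 16-bit stereo into two
 * separate channel buffers: in = L0 R0 L1 R1 ... */
void deinterleave_stereo(const short *in, short *left, short *right,
                         size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        left[i]  = in[2 * i];        /* even samples: left channel  */
        right[i] = in[2 * i + 1];    /* odd samples:  right channel */
    }
}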
>Documentation!
Documentation!
--p
>> How painful would it be to add an API call to set the async notification
>> signal to be something other than SIGIO?
[ explanation ]
and just remember not to actually do anything complicated from your
signal handler. ioctl(2), which is used for most ALSA user/kernel
space communication, is no
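A sketch of the usual pattern (hypothetical names; refill_and_write() is a
placeholder for your own refill routine): the async callback, which runs in
signal context, does nothing but set a flag, and every real call into alsa-lib,
since most of them end up in ioctl(), stays in the main loop.

#include <signal.h>
#include <alsa/asoundlib.h>

static volatile sig_atomic_t pcm_ready = 0;

/* runs in signal context: set a flag and nothing else */
static void async_cb(snd_async_handler_t *h)
{
    (void)h;
    pcm_ready = 1;
}

/* setup, with 'pcm' an open playback handle */
static int install_async(snd_pcm_t *pcm, snd_async_handler_t **ah)
{
    return snd_async_add_pcm_handler(ah, pcm, async_cb, NULL);
}

/* main loop sketch: all real work (snd_pcm_writei, recovery, ...)
 * happens here, outside signal context:
 *
 *     while (running) {
 *         if (pcm_ready) { pcm_ready = 0; refill_and_write(pcm); }
 *         ...
 *     }
 */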
>Hi, I'm looking to help the ALSA project. I have six years' experience
>developing audio device drivers for DOS, OS/2, and Windows, so I want to
>create a new audio driver for ALSA. Is there any particular piece of
>hardware that needs to be supported that no one is working on?
USB.
Search
>I just changed the type of some mixer controls in UDA1341 driver to
>enumerate. With "amixer" it works fine, showing correct names for all
>states. But with "alsamixer" enumerated controls are not shown anymore.
>
>I find this is because alsamixer uses "simple controls". Is it possible
>to setu
>Are there any test applications which test the pause/resume functionality.
>
>I have an app which is calling pause, then resume, but after resume, the
>sound is very choppy.
there are quite a lot of audio interfaces that don't support
pause/resume. it's of questionable worth to use it in an app
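A sketch of how an application can check for pause support at runtime (assuming
the 0.9 hw-params API) before deciding between snd_pcm_pause() and a
drop/prepare fallback:

#include <alsa/asoundlib.h>

/* return non-zero if the configured device can pause/resume in hardware;
 * 'pcm' must already have hw params installed */
static int can_pause(snd_pcm_t *pcm)
{
    snd_pcm_hw_params_t *hw;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_current(pcm, hw);
    return snd_pcm_hw_params_can_pause(hw);
}

/* usage: if (can_pause(pcm)) snd_pcm_pause(pcm, 1); else snd_pcm_drop(pcm); */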
>This is a multi-part message in MIME format...
AKA: This is a message in which the content is about 2% of the message
body :)
>I'm trying to get some specs from midiman for the usb quattro. They have
>asked this question. Can anyone fill me in so I can relay the info?
>Is there "class driver"
>Just a simple question: in the rawmidi functions, I have to know the
>card and the device number... OK! But where could I find this number?
>I'm using the alsa 0.5 series, and so I have no asound.conf (or equivalent).
that's your first problem. 0.5.X is not unsuppo
>>framework for ALSA worked out, which is even more of a problem. Its
>>still unclear at this point whether the ALSA framework will be:
>>
>> application<->alsa-lib<->alsa-driver<->low-level-ALSA-USB-class-driver
>>
>>OR
>>
>> application<->alsa-lib<->low-level-non-ALSA-USB-class-driver
>>I think a more exact description of the gphoto approach is:
>>application<->gphoto-lib<->usb-lib<->low-level-generic-USB driver.
>>
>>Only the low-level kernel USB drivers and the usbdevfs filesystem are in
>>kernel space, beyond that the hardware driving is done via the user-space
>>libusb. I
I meant to add:
>Also, what is the current status of gstreamer in this department and
gstreamer has nothing to do with this. gstreamer is an internal
architecture/framework for a single application and has nothing to do
with sharing audio interfaces or audio data with other processes.
--p
>you (or anyone else regarding asoundrc file), as well as additional info
>regarding polling abilities with the soundcards that have lousy drivers
>due to lack of documentation and/or questionable hardware and are
>therefore unable to provide me with the hardware mixing.
the term "polling" in not
>Also, what is the current status of gstreamer in this department and
>jack? (I tried artsd and that one simply doesn't cut it)
jack allows any number of applications to use a single audio
interface, up to a run-time configurable limit imposed by the total
maximum number of "ports" in the JACK sy
>> and testing a few changes), so it'll be clear that the API is finished and
>> code is working.
>
>It is? Last time I tried using Rawmidi on an emu10k1 chip I got no sound. I
>had a soundfont loaded, and tried with my own code and the rawmidi demo
>source. Midi would play with the pmidi progra
>I'm working on a library for accessing MIDI hardware, which uses plugins
>to communicate with the hardware. Now I'm not sure how to compile
>these shared libraries. Should I use -Bsymbolic? This makes the linker
>give a warning when the library is not linked against all the shared
>libraries
alt
>how about to add informational pages for each driver, and make links
>from soundcard matrix? then a user can reach to the detailed info by
>two clicks from the top page.
>
>a driver page will contain:
[ ... ]
>the pages should be controlled by cvs, so that both developers and web
>admins c
>Could anyone point to me a reference for the following (or explain to me
>directly what is the purpose of each of those functions) in the PCM module:
>
> prepare:snd__capture_prepare,
> trigger:snd__trigger,
> pointer:snd__pointer,
> cop
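For anyone wondering, those are entries in the per-stream callback table a
driver hands to snd_pcm_set_ops(). A rough, hypothetical sketch of such a table
("mychip" is a placeholder, and the field names follow the in-kernel ALSA API):
prepare readies the hardware to start, trigger starts and stops the stream,
pointer reports the current hardware position in the ring buffer, and copy
moves data for hardware whose buffer can't be mmap'd.

/* hypothetical kernel-side sketch; "mychip" is a placeholder */
#include <sound/core.h>
#include <sound/pcm.h>

static int mychip_open(struct snd_pcm_substream *ss)             { return 0; }
static int mychip_close(struct snd_pcm_substream *ss)            { return 0; }
static int mychip_hw_params(struct snd_pcm_substream *ss,
                            struct snd_pcm_hw_params *p)         { return 0; }
static int mychip_hw_free(struct snd_pcm_substream *ss)          { return 0; }
/* put the hardware into a known state, ready to start */
static int mychip_prepare(struct snd_pcm_substream *ss)          { return 0; }
/* start/stop the stream (SNDRV_PCM_TRIGGER_START/STOP, ...) */
static int mychip_trigger(struct snd_pcm_substream *ss, int cmd) { return 0; }
/* report the current hardware position, in frames */
static snd_pcm_uframes_t mychip_pointer(struct snd_pcm_substream *ss) { return 0; }

static struct snd_pcm_ops mychip_capture_ops = {
	.open      = mychip_open,
	.close     = mychip_close,
	.ioctl     = snd_pcm_lib_ioctl,   /* generic helper */
	.hw_params = mychip_hw_params,
	.hw_free   = mychip_hw_free,
	.prepare   = mychip_prepare,
	.trigger   = mychip_trigger,
	.pointer   = mychip_pointer,
};

/* registered from the probe code with something like:
 *   snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &mychip_capture_ops);
 */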
[ those of you on jack-dev will have seen this coming ]
ALSA doesn't seem to provide a way for a driver to say "i provide
samples in the native format of the processor". The specific case in
point that I'm noticing is the Hammerfall, where we currently say that
it supports S32_LE. This is not s
>I am just working on (finishing) a modular realtime effect processor. It
>is my thesis project. I would like to ask three questions:
>
>1.
>How can I calculate the latency in ms with OSS driver?
this is alsa-devel. there is nobody here who is interested in
supporting this archaic and limiting API.
>The documentation you wrote - the mini full-duplex example - is incomplete
>and has a lot of errors (I sent some corrections for the mini capture and
>playback examples before).
yes, i need to catch up on that stuff. oh for spare time. did i
mention i just bought a house and need to redecorate it and move? :)
>alsa-lib is
>Can someone point me to an example alsa app doing mmap ?
>
>I want to convert my application from plain pcm_write to mmap mode.
>I would also like to understand how you detect if a sound card can do
>mmap or not.
>
>I help develop a multimedia application, and 6 audio channels (5.1
>sound) st
>Hi,
>
>I am writing a driver for a new card, and wonder how the ALSA server will
>recognize my module as one of its own. Besides, is there anything I should
>know regarding debugging ALSA drivers?
there is no "ALSA server", at least not in the context you're
discussing.
you should download A
>I'm trying to get access to my soundcard in duplex mode. I simply try to
>open a pcm device twice, for capture and playback. But once I have opened and
>initialized the first device, I get an error message for the second,
>that the sample format is not available. Is that because my soundcard
>doe
silly me.
a quick reading of the kernel source reveals all. don't bother to
answer, and apologies for the silly question.
--p
what's the preferred macro/method for discovering whether we are on
a big- or little-endian system from within driver code?
there are some details of the Hammerfall-DSP that need to be done
differently depending on which system we're on, and using a single
architecture type (__powerpc__ etc.) isn't righ
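One answer (a sketch of the usual kernel idioms, not necessarily what the driver
ended up using): test the __BIG_ENDIAN / __LITTLE_ENDIAN macros from
<asm/byteorder.h>, or better, avoid the #ifdef entirely with the
cpu_to_le32()/le32_to_cpu() conversion helpers so the same code is correct on
either kind of host.

/* hypothetical kernel-side sketch of the usual endian idioms */
#include <linux/types.h>
#include <asm/byteorder.h>

/* option 1: an explicit compile-time test (exactly one of these
 * macros is defined in kernel code) */
#ifdef __BIG_ENDIAN
#define HOST_BIG_ENDIAN 1
#else
#define HOST_BIG_ENDIAN 0
#endif

/* option 2 (usually nicer): no #ifdef at all.  Declare in-memory
 * hardware values with an explicit endianness and convert with the
 * helpers; the swap is compiled in only on mismatched hosts. */
struct mychip_desc {
	__le32 addr;    /* the device expects little-endian */
	__le32 count;
};

static void mychip_fill_desc(struct mychip_desc *d, u32 addr, u32 count)
{
	d->addr  = cpu_to_le32(addr);
	d->count = cpu_to_le32(count);
}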
>A solution could be alter libasound to support non-kernel drivers. Or use
>a loopback device in the kernel.
libasound *does* support non-kernel drivers. when you access
snd_{pcm,rawmidi}_foo (handle, ...)
what you're actually doing is:
1) check the device type of handle
2
>This looks nice. Do rawmidi ports created this way appear as sequencer
>clients as well?
it depends. there is a module that looks at all current rawmidi ports
and makes them available as sequencer clients. just creating a rawmidi
device doesn't do this by itself.
i wanted to add, for clarificat
for those who keep track of such things, this evening i managed to get
the driver for the Hammerfall DSP to compile. testing will begin
tomorrow. it's very similar to the Hammerfall driver, but different
enough that for now at least, it will always be a distinct module.
i am still waiting for info
>Hi,
>
>Now, I have this new driver which manages to open/close a pcm channel whenever
>one attempts to play a file with "play test.au", but complains (Sound protocol
>is not compatible) with "aplay test.au". Any idea what's wrong?
>Besides, I have no mixer defined (I would just like to allow open/clos
>I'm a user of the 24 I/O-channel interface motu2408 for Mac and PC.
>I am trying to get started with sound programs on Linux. The biggest
>difficulty is that there is no support for the mentioned audio interface
>under Linux.
>Mark of the Unicorn answered a really friendly mail of mine the
>How does this work then? Can a file be opened from kernel space? Or
sys_{open,read,write,ioctl,close} can be called from kernel space.
--p
>I am starting to write a sound driver for a card with no previous support and
>would like to know if DMA is necessary as per the ALSA framework itself. My
>card doesn't support DMA, yet. So if your answer is no, I would need to
>fall back on OSS for the time being.
it's not. ALSA doesn't require