Re: [LAD] Does noiseshaping affect quantisation noise?

2010-06-13 Thread Jussi Laako
On 06/08/2010 12:10 AM, f...@kokkinizita.net wrote:
 quantisation noise, making the latter irrelevant. If your A/D
 converter is 24 bit, then analog noise will always dominate,
 so again quantisation noise is irrelevant.

IMO, the important point is that practically none of the modern
converters, especially 24-bit (or higher) ones, do the conversion in the
representation they present to the outside world. That is, the actual
converter is neither PCM nor 24-bit in resolution.

A quite typical cheap converter is a 2-bit 128fs SDM using pretty hefty
noise shaping...

Using PCM for digital signal representation is mostly just a convenience
that causes unnecessary extra trouble elsewhere.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Does noiseshaping affect quantisation noise?

2010-06-13 Thread Jussi Laako
On 06/07/2010 11:41 PM, Philipp wrote:
 My guess is that quantisation noise is only something present between
 the input signal and its digital representation, and hence no change of
 the digital representations can do anything about it.

It also applies whenever a change in digital representation cannot
preserve the same resolution. For example, when you convert from float
to short you can play with the noise quite freely. You cannot remove it,
but you can move it around to reduce its masking effect on the signal
(local SNR).

Someone might have fancier wording for it.
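
A minimal sketch of the idea, assuming a simple first-order
error-feedback quantiser from float to 16-bit (real implementations add
TPDF dither and higher-order shaping filters; the function name is
illustrative, not from the original post):

#include <math.h>
#include <stdint.h>
#include <stddef.h>

/* Quantise float samples (roughly -1..1) to int16 while feeding the
 * quantisation error back into the next sample, so the error spectrum
 * is pushed towards high frequencies instead of being removed. */
void quantize_shaped(const float *in, int16_t *out, size_t n)
{
    float err = 0.0f;                      /* previous sample's error */
    for (size_t i = 0; i < n; i++) {
        float v = in[i] * 32767.0f - err;  /* error feedback */
        float q = rintf(v);
        if (q > 32767.0f)  q = 32767.0f;
        if (q < -32768.0f) q = -32768.0f;
        err = q - v;                       /* error moved to next sample */
        out[i] = (int16_t)q;
    }
}

The error is redistributed in frequency rather than removed.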
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Science (OT)

2010-01-07 Thread Jussi Laako
What is typically forgotten or somehow ignored in public discussion:

1) The impact of methane from rice/livestock etc. production; methane is
a much stronger greenhouse gas than carbon dioxide.
2) The impact of overpopulation; just take a look at the world's
population growth trend over the past couple of hundred years.
3) The impact of the sun's activity cycles...

Limiting reproduction to a maximum of one child per couple would
naturally fix most of the problems in the long term.

Of course the current trend is just to collect more taxes and money for
the sake of the climate, but this doesn't have anything to do with the
real problem and doesn't fix it.

Collecting money for a good purpose is always fashionable.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Anyone have experience with OSS (3 or 4)?

2009-12-17 Thread Jussi Laako
On 12/17/2009 04:58 PM, Victor Lazzarini wrote:
 Through which API? I not sure I understand audio IO on Vista, with the  
 concurrent existence of several APIs. Never mind, perhaps this is a  
 bit OT in a Linux Audio list...

There's WASAPI and the rest is practically emulated on top of it.

The good side is that there's practically one API where the programmer
can choose whether to do casual audio with mixing and samplerate
conversion (always floats) or to talk directly to the hardware in the
native sample format if needed. There is also the possibility to
pipeline and connect different audio elements (like effects etc.).

So it's kind of like ALSA, jack and pulseaudio combined behind one API...
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Anyone have experience with OSS (3 or 4)?

2009-12-17 Thread Jussi Laako
On 12/17/2009 10:20 PM, Paul Davis wrote:
 yeah, they actually took a lot of good lessons from CoreAudio, which
 has been doing all that for years.

I have failed to find a way to get direct hardware access in the native
sample format (mixing and samplerate conversion disabled) in CoreAudio,
but maybe I just haven't looked closely enough or something...
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Anyone have experience with OSS (3 or 4)?

2009-12-16 Thread Jussi Laako
On 12/16/2009 02:07 PM, Lennart Poettering wrote:
 Also, note that the big distributors have folks working on ALSA. OTOH
 nobody who has any stakes in Linux supports OSS anymore or even has
 people working on it.

Well, that depends on the definition. http://www.4front-tech.com/ seems
to be pretty active still, and AFAIK it is the only way to have a
cross-platform interface to audio drivers...

IMO, ALSA has failed a bit in that it is Linux-only.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Anyone have experience with OSS (3 or 4)?

2009-12-16 Thread Jussi Laako
On 12/16/2009 08:23 PM, Paul Davis wrote:
 Seems to be pretty active still, and AFAIK the only way to have cross
 platform interface to audio drivers...
 
 cross-platform here meaning across a few selected Unix-like systems, right?

Well, I would say most IEEE 1003.1 standards-compliant systems.
Unfortunately that does not include OS X... :(
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Anyone have experience with OSS (3 or 4)?

2009-12-13 Thread Jussi Laako
On 12/13/2009 02:31 PM, Kjetil S. Matheussen wrote:
 programs which runs on the JVM. But flash would be nice
 to have working too. Any experience with flash and java?

I think flash was the one I had trouble with.

 I actually only ment if it was easy to set the default
 soundcard?

Yes, the configuration utility allows reordering the cards, and the
first one is the default. Another way is to just make /dev/dsp a symlink
and change it to point to a different device.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Anyone have experience with OSS (3 or 4)?

2009-12-11 Thread Jussi Laako
On 12/11/2009 03:24 PM, Kjetil S. Matheussen wrote:
 Does anyone know if OSS supports proper software mixing?

Yes it does, and sample rate conversion too. Works pretty OK.

 Is the alsa emulation working somewhat okay?

I have to say no, it's not. I've got the best results with OSS -> jack
-> alsa-jack-plugin.

 Are there any problems configuring the machine to use more than one card?

I haven't encountered any. I usually have three cards and that has been
working fine. Syncing multiple cards is always a bit complex with ALSA
or OSS and I haven't paid too much attention to it. I would assume that
using one card with a good number of channels is a better solution.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Anyone have experience with OSS (3 or 4)?

2009-12-11 Thread Jussi Laako
On 12/11/2009 04:43 PM, Kjetil S. Matheussen wrote:
 Something that works, i.e. no crashes, no distortion, no glitches,
 no frustration. I'm not talking about hyper-low-latency, just
 having to avoid stopping and starting programs that require
 sound, ever.

When I have to advise a non-computer-oriented person on getting some
pro/semi-pro card working with my software, I've usually recommended
OSS, since the setup is usually pretty straightforward. Supporting ALSA
remotely over email for people who only know how to click a mouse is
usually a big pain...

On most recent Linux distros, sound tends to work pretty OK with
PulseAudio without too much hassle.

 I really don't care that much about how it's being implemented.
 Sound has worked excellently in windows for many years,
 no reason why it shouldn't work in linux either.

And this is why I'm still also developing Windows software for the
WASAPI and ASIO2 APIs...
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Noob Q: Pthreads the way to go?

2009-11-13 Thread Jussi Laako
Harry Van Haaren wrote:
 That's the background to this question: Is mutex locking the only way to
 ensure that variables are only
 being accessed by one thread at a time? Is it the right way to go
 about it?

No, there are also semaphores and read-write locks. RW-locks are
especially useful in cases where there is a single writer but multiple
readers (typical for any kind of data distribution).
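
A minimal sketch of that single-writer/multiple-reader case with a
pthreads read-write lock (the shared struct and names are made up for
illustration; in a real-time audio callback you would normally avoid
blocking locks altogether):

#include <pthread.h>

typedef struct {
    pthread_rwlock_t lock;
    float gain;                 /* shared parameter, written by one thread */
} shared_state_t;

void state_init(shared_state_t *s)
{
    pthread_rwlock_init(&s->lock, NULL);
    s->gain = 1.0f;
}

/* many reader threads */
float state_read_gain(shared_state_t *s)
{
    pthread_rwlock_rdlock(&s->lock);
    float g = s->gain;
    pthread_rwlock_unlock(&s->lock);
    return g;
}

/* single writer thread */
void state_write_gain(shared_state_t *s, float g)
{
    pthread_rwlock_wrlock(&s->lock);
    s->gain = g;
    pthread_rwlock_unlock(&s->lock);
}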

In general, threading is the wrong way to go if you find yourself
sharing and locking a lot of data between the threads. Threads should be
able to execute fairly independently to properly utilize resources.
Naturally there are typically scatter/gather points, which are supposed
to be short compared to the overall thread execution time.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Timers, hpet, hrtimer

2009-11-10 Thread Jussi Laako
 Since hrtimer uses hpet if available and apps use hrtimer, why would I
 need to allow some audio group to access hpet directly?

Because it offers memory-mappable access to the actual timer hardware,
significantly lowering overhead and latency for reading the value.

 This system has no accessible HPET device (Device or resource busy)
 
 My interpretation of this is that my motherboard is simply too old and
 simply doesn't have a hpet timer.

This typically means the device is busy - already fully reserved. The
reason is typically that the device driver in the kernel is broken. And
still not fixed?

The HPET has one global time counter, which is the one JACK is
interested in, and in addition three programmable sub-timers. The
problem is that mapping the HPET device also reserves one of these
sub-timers per mapping. Usually the kernel is using one and two are left
free, which usually allows the JACK daemon and one client to be started
before running out of timers.

The brokenness is that JACK is not interested in those sub-timers; the
driver should allocate them only via the corresponding ioctl() and give
unlimited read access to the global counter value without failing
early...
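
A rough sketch of what that memory-mapped access looks like, assuming
the standard HPET register layout (capabilities register at offset 0x00
with the tick period in femtoseconds in its upper 32 bits, main counter
at offset 0xF0) and a kernel that permits mmap() of /dev/hpet:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("/dev/hpet", O_RDONLY);
    if (fd < 0) { perror("open /dev/hpet"); return 1; }

    /* Note: this open()+mmap() is also what reserves one sub-timer. */
    volatile uint8_t *regs = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    uint64_t caps   = *(volatile uint64_t *)(regs + 0x00);
    uint32_t period = (uint32_t)(caps >> 32);      /* femtoseconds per tick */
    uint64_t count  = *(volatile uint64_t *)(regs + 0xF0); /* main counter */

    printf("period %u fs, counter %llu\n", period, (unsigned long long)count);
    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}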

Someone promised to fix the driver, but I haven't verified recently if
it has been fixed. If not, maybe I have to add that to my TODO-list...


BR,

- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [ANN] QjackCtl 0.3.5 slipped away!

2009-10-02 Thread Jussi Laako
Arnold Krille wrote:
 The magic it provides:
 De-coupling functionality from the gui. And allowing several guis to 
 represent 
 the same at the same time. And IPC in general.

That's nothing new and has been done way before dbus existed. And there
are more efficient ways to handle it. Even CORBA is faster in most
cases, because many implementations know how to use shared memory as a
message transport instead of sockets.

In general, dbus is a very slow way to do IPC and makes unnecessary
context switches to a bottleneck called dbus-daemon. But then again,
"buy a faster CPU" seems to be the standard solution to any performance
problem these days.

 I am asking this because I am experiencing that the 'Kit'-family is
 doing nameserver lookups before allowing to open a window with my
 current soundcard mixer levels.
 
 Unless I missed a dbus-kit, one has nothing to do with another.

At least on opensuse all the *Kits seem to depend on libdbus...

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] ALSA latency and MMAP

2009-09-23 Thread Jussi Laako
Clemens Ladisch wrote:
 To decrease that output buffering latency, make sure that there is less
 valid data in the buffer when you are writing.  This is initially
 determined by the amount of data you put into the buffer before starting
 streaming, and can be later adjusted with snd_pcm_forward/rewind().

How it should work on ALSA as well as OSS for low latency:

1) Set period size to something suitable
2) Set number of periods to 2
3) Write two periods of silence to output
3.5) Optionally trigger-start both input and output
4) Block on input until period available
5) Process the input
6) Write period to the output
By this time the first of the originally written periods has been played
and the second is playing, so writing now fills the second half of the
double buffer. The input buffer never holds more than one period.
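
A minimal sketch of that loop with the plain ALSA read/write API
(hw-params setup, device opening and error handling are omitted; the
period size and sample format are illustrative):

#include <string.h>
#include <alsa/asoundlib.h>

#define PERIOD   64               /* frames per period (step 1) */
#define CHANNELS 2

/* capture and playback are assumed to be opened and configured with
 * 2 periods of PERIOD frames each (steps 1-2); S16 interleaved here. */
void run_duplex(snd_pcm_t *capture, snd_pcm_t *playback)
{
    short buf[PERIOD * CHANNELS];
    short silence[PERIOD * CHANNELS];
    memset(silence, 0, sizeof silence);

    /* step 3: prefill the playback double buffer with silence */
    snd_pcm_writei(playback, silence, PERIOD);
    snd_pcm_writei(playback, silence, PERIOD);

    /* step 3.5 (optional): snd_pcm_link(capture, playback) beforehand
     * makes this single start trigger both directions together */
    snd_pcm_start(capture);

    for (;;) {
        /* step 4: blocks until one period of input is available */
        if (snd_pcm_readi(capture, buf, PERIOD) != PERIOD)
            break;
        /* step 5: process buf in place here */
        /* step 6: write the processed period to the output */
        if (snd_pcm_writei(playback, buf, PERIOD) != PERIOD)
            break;
    }
}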

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] ALSA latency and MMAP

2009-09-08 Thread Jussi Laako
Practically, for mmap'able devices in addition to waiting, _readi()
seems to be doing this inside a loop:
snd_pcm_mmap_begin(pcm, &pcm_areas, &pcm_offset, &frames);
snd_pcm_areas_copy(areas, offset,
                   pcm_areas, pcm_offset,
                   pcm->channels,
                   frames, pcm->format);
result = snd_pcm_mmap_commit(pcm, pcm_offset, frames);

(follow pcm_mmap.c:snd_pcm_mmap_readi(), called through the vtable)
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] ALSA latency and MMAP

2009-09-07 Thread Jussi Laako
Paul Davis wrote:
 something must have changed. back in the day, you could not possibly
 use the mmap API to deliver < 1 period at a time. has that changed?

Reading
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m___direct.html
at least I understand that readi/writei/readn/writen are just helper
functions around mmap_begin/mmap_commit.

Essentially it should be possible to do read/modify/write as well as
plain memcpy actions on the mmap buffer, because the former can be
implemented in user space using the latter...


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Connecting to Ekiga from the rest of the World

2009-07-24 Thread Jussi Laako
Jens M Andreasen wrote:
 Question: Any suggestions for an alternative SIP service?

Not a SIP client, but how about Skype? Works on Windows/Linux/Mac...


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [ANNOUNCE] Safe real-time on the desktop by default; Desktop/audio RT developers, read this!

2009-06-24 Thread Jussi Laako
Lennart Poettering wrote:
 Finally, I believe your insisting on POSIX is a lost cause anyway,
 because it is a fictitious OS interface. It's a good guideline, but

First of all, POSIX is also an IEEE and ISO/IEC standard for an
operating system (including command line utilities), so it carries some
weight. GNU/Linux also seems to follow it very closely. At least in my
experience it gives fairly good portability to Solaris and the BSDs, and
also to a large extent to OS X. In addition, various embedded RTOSes
support it.

 feature set anymore, but to one that does only exist as an idea, as
 the least common denominator of a few OSes that are already quite aged
 these days. Also, at an ironic side note, your own

I would like to point out that there's a new IEEE Std 1003.1-2008 which
includes all kinds of new things, some of which seem to originate from
Linux and some of which need(ed) some support to be implemented on Linux
(mainly the aio_*() stuff, it seems). This standard is pretty
comprehensive.


BR,

- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] jack2's dbus name

2009-06-18 Thread Jussi Laako
Lennart Poettering wrote:
 PA does not use fixed block sizes. We always use the largest chunk
 sizes the applications pass to us and break them up into smaller
 pieces only when really necessary. We really try our best not having
 to touch/convert/split/copy user supplied PCM data if we don't have
 to.

This is quite different from the goals of JACK, where the purpose is to
minimize input-to-output latency and thus use blocks as small as possible.

Typically there's:

[input] -- [processing] -- [output]
[generation] --^

Here the input is typically live sound or MIDI, the processing can be
effects processing or similar (guitar distortion and such), while the
generator can be a softsynth, beatbox or something similar. Both
naturally have to run precisely in sync. Minimizing the latency from
input to output is crucial for playing, and it has to stay in sync with
all the rest of the softsynths. Playing a piano with a couple of notes
of lag before you hear the sound is annoying...


- Jussi

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] jack2's dbus name

2009-06-18 Thread Jussi Laako
Lennart Poettering wrote:
 If an application can send PA data in larger blocks then we are happy
 about it and take it. Of course, if the application needs low
 latencies then it shouldn't pass huge blocks to us, but instead many

No, generally data needs to be fed _to_ the application immediately when
it becomes available after A/D conversion (PCI DMA completion
interrupt). The application(s) process the data and it ought to go to
D/A conversion on the next hardware interrupt (PCI DMA reprogram
interrupt), along with time-synchronous data from the other
applications. This gives a total latency of input hw + blocksize +
output hw. Generally the input and output latencies should be around a
few tens of samples (due to delta-sigma converter resampling filters
etc.), and the blocksize is also generally kept around 64 or so.
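(As a rough worked example: at 48 kHz a 64-sample block is about 1.3 ms,
so with a few tens of samples of converter delay on each side the whole
input-to-output path stays in the low milliseconds.)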

 The big difference between JACK and PA here is that in JACK the
 transfer of data is mostly done synchronously while in PA we do that
 asynchronously. Which is the case because we need to make sure that no

Yes, and this is because all applications ought to work in harmony as a
whole, just like a symphony orchestra following the pace set by the
conductor.


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] jack2's dbus name

2009-06-16 Thread Jussi Laako
alex stone wrote:
 And i can see the use case for, as an example, a headless rig running
 a big sampler, i.e., a raw warrior box.

At least I am running headless audio server setups where jackd is
started by a startup script along with the required server process.
Otherwise the system is pretty minimal, with just the kernel and libc
(and possibly the alsa library, depending on the OS).

In these systems, dbus is a completely useless extra dependency.


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] supporting a dedicated DSP chip with ALSA

2009-05-30 Thread Jussi Laako
Ziv Haziz wrote:
 i'm seeking an advise on the preferred way to support a dedicated DSP
 chip under ALSA.

Unfortunately ALSA is pretty bad at abstracting anything other than PCM
streaming and MIDI.

For ages I have been concerned about how to support hardware accelerated
(DSP) 3D audio. OpenSL ES seems to be the only option at the moment.

 The DSP core is capable of both playing both encoded streams (mp3, wma,
 various voice coders) and of course PCM streams.
 the codec (a2d, d2a) are  connected to DSP.

I think you might have better luck using OpenMAX [1]; it has been
designed for this kind of use.


- Jussi


[1] http://www.khronos.org/openmax/
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] supporting a dedicated DSP chip with ALSA

2009-05-30 Thread Jussi Laako
Ziv Haziz wrote:
 The problem with OpenMAX is that most of the common players (at least
 the ones I looked into mplayer and mpg123) doesn't support them
 natively.

I don't see much point in that since, AFAIK, those players don't support
offloading decoding to a DSP anyway. If that support is added, then it
should be fairly trivial to use some interface other than ALSA for the
purpose. For good video synchronization there should be timing feedback
from the decoder output (the D/A feed stage).

However, these are fairly simple cases compared to environmental 3D
audio... (environment parameters, passive scene objects, active scene
objects with streams, etc.) Some of these could be implemented through
the ALSA control interface and virtual channels, but there are still
various interaction issues left, and the passive objects are a bit
problematic to represent this way.

 If I will use sample format special will the alsa know not to try and
 do manipulation over it? (resampling, volume, etc...)

One of the common cases would probably be feeding AC-3 or DTS audio to
S/PDIF. Do these work?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] jackd/jackdbus : D-Bus or not D-Bus...

2009-05-20 Thread Jussi Laako
alex stone wrote:
 Perhaps someone who knows could explain briefly how reliable the dbus
 daemon is in terms of frequency of calls made in and out, and the
 timing involved.

It has quite high latency, mostly due to the single-process userspace
switch point. It performs reasonably well as long as the overall message
frequency is rather low, and it can also push quite large messages
through efficiently. But it is poor at handling a high frequency of
messages.

Each message pass involves extra context switches to and from the
daemon, and also copies of the message.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] jackd/jackdbus : D-Bus or not D-Bus...

2009-05-19 Thread Jussi Laako
Stéphane Letz wrote:
 It seems you really don't want to see that using this jack_client_open
 does a fork+exec call to launch jackd with the ./jackdrc file has been
 completely *hard coded* in libjack from day one!  And is a really strong
 constraint for any possible new way of controlling the server.
 
 The discussion is now: do we keep this hard coded thing in libjack or
 do we try to relax it a bit ?

I would vote for using some kind of environment variable control for the
auto-start behavior...
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] jackd buffersize

2009-05-11 Thread Jussi Laako
Jens M Andreasen wrote:
 What is the rationale for jackd requiring buffers to have number of
 frames set to a power of 2? Could this be relaxed to perhaps a multiple
 of 16, 32 or somesuch?

SSE optimizations require, for performance reasons, that buffers are
128-bit (16-byte) aligned. SSE is really slow on unaligned access, and
with rather small buffers it cannot make up the lost performance after
reaching an aligned address. The size itself doesn't have to be anything
specific, but anything that is not a multiple of 4 floats would also
cause a slowdown. Having some pad bytes (if possible) thus makes it
possible to work around this limitation.

By disabling the SIMD optimizations this restriction goes away, with
some performance penalty.
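
A minimal sketch of allocating such a buffer with posix_memalign() (the
helper name is illustrative):

#include <stdlib.h>

float *alloc_audio_buffer(size_t nframes)
{
    void *p = NULL;
    /* 16-byte alignment satisfies 128-bit SSE; 64 would also match a
       typical cache line size */
    if (posix_memalign(&p, 16, nframes * sizeof(float)) != 0)
        return NULL;
    return (float *)p;
}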


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] jackd buffersize

2009-05-11 Thread Jussi Laako
Jens M Andreasen wrote:
 like vector elements in warps. Intel talks about fibers when they
 are describing their Larrabee - which might be a better terminology, at
 least less confusing.

In the Windows API, meanwhile, a fiber is a 'thread' or scheduling
entity whose execution is controlled not by the kernel but by the
application.

http://msdn.microsoft.com/en-us/library/ms682661(VS.85).aspx

I would just speak about hardware threads or execution pipelines.


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] jackd buffersize

2009-05-11 Thread Jussi Laako
Paul Davis wrote:
 However, notice that far more important from a performance perspective
 is that power-of-2 buffer sizes permit buffers to be cache line
 aligned, which as far as we know (its never been carefully measured)
 greatly outweighs the kinds of concerns you are mentioning.

I did some testing on this in the past when developing a bunch of SSE
routines. The performance difference was around 2x.


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] jackd buffersize

2009-05-11 Thread Jussi Laako
Jens M Andreasen wrote:
 I did some testing on this in past when developing bunch on SSE
 routines. The performance difference was around 2x.
 
 What was 2x and compared to what? Unaligned SSE or exact cacheline
 match?

Sorry for not being specific, in the test it was SSE unaligned vs
256-bit aligned. :)

For 96-float buffers it thus shouldn't be a problem. Something like 97
would be...


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] How is the TSC calibration accuracy on dual core 2 computers? (And what about HPET?)

2009-04-30 Thread Jussi Laako
Lee Revell wrote:
 If there were no need to support old kernels, I think JACK's hardware
 timer support could be removed entirely in favor of the POSIX clock
 api.

The only reason for those is the lower cost of reading the time versus
going through a syscall...


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] How is the TSC calibration accuracy on dual core 2 computers? (And what about HPET?)

2009-04-28 Thread Jussi Laako
Kjetil S. Matheussen wrote:
 
 I've looked at the HPET code in jack, but am unsure how accurate it is,
 and whether there are any overhead using it?

The resolution on my machine is 1 / 14.318180 MHz. The frequency can
typically vary from 12 to 16 MHz. Access is usually rather fast through
MMIO. Accuracy can vary depending on hardware and environment; the same
applies even more to the TSC.

 that's the accuracy of usleep(). So it looks promising, but
 I need at least 0.1ms accuracy...

In my opinion, clock_gettime() with CLOCK_MONOTONIC is pretty good if
the kernel is using a reasonably good clock source. You can check the
current one with cat
/sys/devices/system/clocksource/clocksource0/current_clocksource. Over
rather long periods, CLOCK_REALTIME with good NTP sync would probably be
a good choice.
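
A minimal sketch of timing an interval that way (older glibc may need
linking with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... work to be timed ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) * 1e-6;
    printf("elapsed %.3f ms\n", ms);
    return 0;
}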

Over what time span would you need that 0.1 ms accuracy?


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] [Jack-Devel] How is the TSC calibration accuracy on dual core 2 computers? (And what about HPET?)

2009-04-28 Thread Jussi Laako
Kjetil S. Matheussen wrote:
 Paul Davis wrote:
 the cycle counter on intel systems is (was?) guaranteed to run exactly
 in sync. AMD had a problem a few generations back where they neglected
 to provide this feature and it caused havoc for several different
 categories of users. they corrected their error very quickly and i
 believe that all their chipsets will now also always have precisely
 A synced cycle counter.
 
 Thanks.

I would just warn that, AFAIK, on some CPUs frequency scaling affects
the TSC while on others it doesn't... And the CPU clock frequency is not
necessarily very precise and can vary depending on
hardware/temperature/etc.

Some hardware may employ spread spectrum on the clock - i.e.
intentionally jittering the clock to lower EMI.

The best option for timing is usually to use hardware intended for
timing. The kernel can use the HPET as its clock source with the
clocksource=hpet parameter.


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] JACK for openSUSE 11.0 x86_64

2008-12-16 Thread Jussi Laako
oc2...@arcor.de wrote:
 STOP SPLITTING JACK UP.
 This fetaure was not packman's idea ... 
 
 see openSuSE-shared-library policy:
 http://en.opensuse.org/Shared_Library_Packaging_Policy

This is one of the bright ideas adopted from Debian-based systems, but
I'm not at all convinced that it's a good idea, especially in jack's case.

Normally the reason for this is that older applications which depend on
older shared libraries can co-exist and work with newer applications
depending on newer shared libraries. However, for jack this creates a
conflict situation which might be hard for the end user to solve.

The reason is that the jack server and the shared library for clients
are tied to each other by the specific version/layout of the shared
memory block used to communicate. Even if the dynamic linking dependency
of older applications doesn't break and the application continues to
load, it will stop actually functioning! For this particular reason
there's a specific way to also handle shared library versions. This is
not done for binaries, however!

Now this bright idea of "let's not break old binaries, let's just
install a bunch of different versions of the same library" is doomed for
jack (and for many other non-self-contained apps too). It doesn't take
into account the dependency between the server version and a certain
client library version. Even more confusing for the user would be having
two different server versions with applications built against the two
different library versions; the user would start wondering why he cannot
route audio between different applications, etc...

 This shared library policy needs a lot of extra-work but it allows also to 
 update library packages without breaking existing packages and or 
 mass-rebuilds if a so-name of a library is changed (ffmpeg-libs, x264 are 
 well known candidates for changing often API).

That is precisely what it doesn't achieve with jack. It specifically
breaks things, badly...


BR,

- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] jack and c++

2008-11-10 Thread Jussi Laako
Malte Steiner wrote:
 For a new audio application I need to code a JACK client with C++. So
 far I did it only with C and have a problem with giving the pointer to
 the callback process function, which is a method now. So what is the

This is what I use and one way (of many) to do it:
http://libjackmm.cvs.sourceforge.net/viewvc/libjackmm/libJackMM/

The release tarball is outdated (I should remember to release a new
one), but the repository version should be OK.
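
For reference, the plain-C mechanism such wrappers build on is JACK's
opaque user-data pointer: register a free function as the callback, pass
your object through the void* argument, and forward to the method from
there. A rough sketch (the struct and names are illustrative):

#include <jack/jack.h>

typedef struct {
    jack_port_t *out_port;
    /* ... engine state ... */
} engine_t;

static int process_cb(jack_nframes_t nframes, void *arg)
{
    engine_t *e = (engine_t *)arg;  /* in C++: static_cast<MyClass *>(arg),
                                       then call the member function */
    /* ... fill e->out_port's buffer for nframes frames ... */
    (void)e;
    return 0;
}

int engine_start(engine_t *e, jack_client_t *client)
{
    /* register the trampoline, passing the object as user data */
    jack_set_process_callback(client, process_cb, e);
    return jack_activate(client);
}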


BR,

- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


[LAD] [ANN] libshmsg-0.0.1

2008-10-24 Thread Jussi Laako
Hi,

This might be of interest to multimedia developers wanting to transfer
audio and/or video data between processes...

libshmsg implements (optionally) zero-copy message passing on top of
libsharedmem. This is the very first release, so it is still lacking
some functionality and features.

Related tarballs:
http://sourceforge.net/project/platformdownload.php?group_id=171566

Project page:
http://sourceforge.net/projects/libsharedmem


Best regards,

- Jussi Laako
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Getting the most out of onboard HDA

2008-09-12 Thread Jussi Laako
Jens M Andreasen wrote:
 With a 24bit signal of +/- 15 integer values, we would be drowned in
 noise. But since this is foremost a theoretical discussion, let us stick
 to those numbers.

What might be more useful would be to use two DACs in parallel followed
by high-quality mixing electronics, the way some high-end audio hardware
does. Then the non-error signal gets amplified and the error signals
might even get attenuated (given that the error signal is not the same
for both channels).


BR,

- Jussi

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Patching linux 2.6.26.3

2008-08-24 Thread Jussi Laako
Rui Nuno Capela wrote:
 just tested 2.6.23.3-rt3 here with NO_HZ not set in my (old) pentium4 
 desktop.
 
 it just confirmed that NO_HZ is not the culprit here. midi events are 
 still being delivered *completely* out of time and the funny thing is it 
 just gets somewhat better whenever you hit the pc-keyboard keys. 
 however, it all gets back to badness once you stop pressing any key (eg. 
 shift-key)
 
 another funny thing goes that on a core2 duo T7200 laptop (x86_64) the 
 same kernel config it runs all fine (NO_HZ=y)

I'm kind of wondering what these have in
/sys/devices/system/clocksource/clocksource0/current_clocksource


BR,

- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] vectorization

2008-05-07 Thread Jussi Laako
Jens M Andreasen wrote:
 PS: Your fastest calculation is when the data floods the cache:
 N=(1024*1024), n=1000, gcc, clock: 8410 ms (_Complex). Is that a typo?

Nope, that's the actual result; I just verified the settings, recompiled
and re-ran, and it's still:
  clock: 8390 ms (_Complex)
  clock: 9310 ms (cvec_t)
  clock: 8480 ms (original float array[N][2])
  clock: 10550 ms (asm on float array)

Fast memory bus + prefetch is a really good thing...

I also have a vectorized float array copy and it's significantly faster
than memcpy(). While memcpy() stays under 1 GB/s, the vectorized version
can reach around 90% of the theoretical memory bandwidth for large copies.


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] vectorization

2008-05-05 Thread Jussi Laako
Jussi Laako wrote:
 I would propose something like -march=prescott -O3 -ftree-vectorize or 
 -O3 -sse3 -ftree-vectorize.

Sorry, typo, -O3 -msse3 -ftree-vectorize of course...


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] vectorization

2008-04-23 Thread Jussi Laako
Fons Adriaensen wrote:
 I tried out vectorizing the complex multipl-and-accumulate loop in
 zita-convolver. For long convolutions and certainly if you have 
 
 The results are very marginal, about 5% relative speed increase
 even in cases where the MAC operations largely outnumber any

For me, the complex MAC operation written for SSE3 practically doubled
the speed for double precision and more than doubled it for single
precision, compared to the -march=i686 -O3 -ffast-math case (the code
has to run on practically all x86 platforms).

Prior to SSE3, there was no nice way to do complex multiplication on 
SSE. Now it can be done in three instructions for two single precision 
complex numbers.

Still, one of the most elegant is E3DNow on AMD; it can do a
single-precision complex multiply in four instructions.

These instruction counts are for the calculation itself; in addition it
of course needs the load and store operations, where SSE3 requires a few
extra instructions compared to E3DNow.
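
A rough sketch of that SSE3 complex multiply with intrinsics rather than
assembly; the calculation core is the two mulps plus the addsubps, with
the dup/shuffle steps counting as operand loading. It computes two
single-precision complex products at once (compile with -msse3):

#include <pmmintrin.h>   /* SSE3 intrinsics */

/* a and b each hold two complex numbers as [re0, im0, re1, im1] */
static inline __m128 cmul_sse3(__m128 a, __m128 b)
{
    __m128 re = _mm_moveldup_ps(b);                 /* [br0 br0 br1 br1] */
    __m128 im = _mm_movehdup_ps(b);                 /* [bi0 bi0 bi1 bi1] */
    __m128 t1 = _mm_mul_ps(a, re);                  /* ar*br, ai*br, ... */
    __m128 sw = _mm_shuffle_ps(a, a,
                               _MM_SHUFFLE(2, 3, 0, 1)); /* [ai0 ar0 ai1 ar1] */
    __m128 t2 = _mm_mul_ps(sw, im);                 /* ai*bi, ar*bi, ... */
    return _mm_addsub_ps(t1, t2);  /* [ar*br - ai*bi, ai*br + ar*bi, ...] */
}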


BR,

- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] vectorization

2008-04-18 Thread Jussi Laako
Christian Schoenebeck wrote:
 I compared a pure C++ implementation vs. the hand crafted SSE assembly code 
 (by Sampo Savolainen, Ardour) and of course an implementation utilizing GCC's 
 vector extensions. On my very weak, but environment friendly ;-) VIA box the 

For simple operations, compilers are rather good at vectorization.
Though I don't know if there's any support for multi-arch targets in
gcc, so that an SSE2/SSE3-optimized binary would still run on hardware
without SSE (dynamic code selection)? I haven't had time to follow the
latest gcc developments.

For more complex operations like FIR, IIR, normalized cross-correlation
or complex multiply-accumulate, I haven't seen any compiler able to
match hand-crafted assembly code.


BR,

- Jussi Laako

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] vectorization

2008-04-18 Thread Jussi Laako
Jens M Andreasen wrote:
 engine.o.586  # plain C, runs everywhere but probably pretty terrible
 engine.o.sse  # vectorized but has some kludges 
 engine.o.sse2 # vectorized and no kludges, works for AMD, recomended!
 
 The pre-install script then looks in /proc/cpuinfo and decides which
 engine to rename to engine.o, links the objects in a jiffy, strips the
 binary and continues installation.

I believe the approach jack has - dynamically selecting suitable
versions of some functions at runtime - is nicer... ;)


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] alsa and OSS (again?)

2008-01-19 Thread Jussi Laako
Dave Phillips wrote:
 As a user, it seems to me that ALSA has itself been minimized as a 
 directly audio supported system, that JACK is the preferred audio 
 control system now. Fine by me, so if OSS delivers low-latency and 
 flawless performance as a JACK back-end, that's great. If not, I use 
 another backend, right ? JACK rules. :)

I believe that applications shouldn't use ALSA or OSS directly, but
should instead use either the JACK or PulseAudio interface, depending on
the application's goals and target user group.

IMO, there's no superior audio/driver API. All current systems seem to
lack support for things that are supported in a competing system -
namely a 3D audio API and support for advanced DSP or hardware
acceleration functionality. If we assume that the average user owns
something like an SB Audigy/X-Fi, then the hardware is not very
extensively utilized in current Linux systems (?).

One of the good sides of OSS has been the ease of setting it up. If the
user (possibly a client) doesn't have extensive knowledge about Linux or
computers, setting up something like asound.rc might not be very
straightforward, and instructing such users over a phone or email can be
challenging too...


BR,

- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Embedded linux board

2007-08-19 Thread Jussi Laako
Florian Schmidt wrote:
 Thought about that, but no, too much latency [although i guess one could try 
 hacking the USB stuff ;)].. I was rather thinking about attaching a codec to 
 the SPI bus.. But then i'd have to write a driver for it, too.. :)

Just by quick googling I found, for example, the following:
http://blackfin.uclinux.org
http://docs.blackfin.uclinux.org/doku.php?id=ad1836a

So there might be plenty of hardware, and even supporting software,
available that could be suitable for this kind of use...


- Jussi
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo.cgi/linux-audio-dev