Jonathan,
The problem with that line of thought is that not everyone runs LMMS on today's
hardware. LMMS runs fairly well on my 1998 machine.  I hope trade-offs between
precision and performance can be made optional, as has been done so far in many ways.
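
One way such a trade-off could stay optional is a build-time switch on the sample
type. The sketch below is hypothetical (the names `sample_t`, `sampleFrame`, and
the `LMMS_DOUBLE_PRECISION` macro are illustrative, not necessarily what LMMS
actually uses); it just shows how a single typedef would let packagers pick
precision per build:

```cpp
#include <cstddef>

// Hypothetical build-time precision switch; real LMMS names may differ.
#ifdef LMMS_DOUBLE_PRECISION
typedef double sample_t;   // higher precision, twice the memory traffic
#else
typedef float sample_t;    // default: half the cache footprint per frame
#endif

// A stereo frame: its size in memory doubles with double precision.
struct sampleFrame { sample_t left, right; };

constexpr std::size_t bytesPerFrame = sizeof(sampleFrame);
```

With the default build this keeps every stereo frame at 8 bytes; a
double-precision build would make it 16.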

LMMS does so much in real-time that it is very sensitive to performance.  Even
with a new system, I can see myself taking it to the limit on how many tracks
and effects I can pile onto it.

Allowing a CPU/cache hit for greater precision would not be such an issue if
LMMS were less sensitive to real-time performance.  For example, imagine a
feature to render selected tracks to a new audio track (muting the originals);
it would offer a workflow to get around hitting the CPU wall.  Tweaking gets a
bit complicated, as one would delete the rendered track, unmute the source
tracks, tweak, and re-render to a new audio track.  But that would let users do
huge projects.  I remember using this feature often when I used to write with
Mackie Tracktion.
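
That freeze/bounce workflow could be sketched roughly as below. The interfaces
are invented for illustration (LMMS has no `renderToAudioTrack` API that I know
of); the point is that the heavy DSP runs once, offline, and playback of the
frozen result is just a memory read:

```cpp
#include <functional>
#include <vector>

// Model a track as a function that synthesizes one sample per frame.
using Track = std::function<float(std::size_t frame)>;

// "Freeze": render an expensive track offline into a plain audio buffer.
std::vector<float> renderToAudioTrack(const Track& track, std::size_t frames)
{
    std::vector<float> buffer(frames);
    for (std::size_t i = 0; i < frames; ++i)
        buffer[i] = track(i);   // all the heavy DSP happens here, once
    return buffer;
}

// At play time the frozen track costs almost no CPU: it is a buffer lookup.
float playFrozen(const std::vector<float>& buffer, std::size_t frame)
{
    return frame < buffer.size() ? buffer[frame] : 0.0f;
}
```

Tweaking then means discarding the buffer, unmuting the sources, editing, and
calling the render step again, exactly the round trip described above.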
--Tommy

From: Jonathan Aquilina <[email protected]>
>toby with the processing power we have nowadays it shouldn't hit
>performance badly, especially since cache sizes and speeds are getting
>quicker

On Thu, Apr 1, 2010 at 11:40 PM, Tobias Doerffel
<[email protected]>wrote:
>> ...Changing the internal processing sample format to
>> double definitely would be nice if it does not introduce performance
>> regressions (which I fear due to the doubled data rate and thus less CPU
>> cache efficiency)...
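
Tobias's cache concern can be made concrete: for the same number of frames, a
double-precision buffer occupies twice as many bytes, so it spans twice as many
cache lines and roughly half as many frames fit in L1/L2. A minimal sketch
(assuming 64-byte cache lines, which is typical but not universal):

```cpp
#include <cstddef>

// Working-set size in bytes for a stereo buffer of `frames` frames.
template <typename Sample>
constexpr std::size_t workingSetBytes(std::size_t frames)
{
    return frames * 2 * sizeof(Sample);   // 2 channels per frame
}

// Number of 64-byte cache lines that working set touches.
constexpr std::size_t cacheLines(std::size_t bytes)
{
    return (bytes + 63) / 64;
}
```

For a 256-frame stereo buffer, float touches 32 cache lines and double touches
64, which is why the switch can hurt even when raw FLOP throughput is ample.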

_______________________________________________
LMMS-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/lmms-devel
