Mathieu HENRI wrote:
Dr. Markus Walther wrote:
I have noted an asymmetry between <canvas> and <audio>:

<canvas> supports loading of ready-made images _and_ pixel manipulation (get/putImageData).

<audio> supports loading of ready-made audio but _not_ sample manipulation.

With browser JavaScript getting faster all the time (Squirrelfish...), audio manipulation in the browser is within reach, if supported by rich enough built-in objects.

Minimally, sample-accurate methods would be needed to
- get/set a sample value v at sample point t on channel c from audio
- play a region from sample point t1 to sample point t2

(Currently, everything is specified using absolute time, so rounding errors might prevent sample-accurate work).
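For uncompressed audio such accessors are trivial once the decoded samples are exposed. A minimal sketch in plain JavaScript, treating the audio as one Float32Array per channel (getSample/setSample are illustrative names, not a proposed API):

```javascript
// Hypothetical sample-accurate accessors over decoded PCM data.
// `channels` is an Array of Float32Array, one per channel,
// with sample values in [-1, 1].
function getSample(channels, c, t) {
  return channels[c][t];
}

function setSample(channels, c, t, v) {
  channels[c][t] = v;
}
```

Because both take an integer sample index t rather than an absolute time, no rounding is involved.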

More powerful methods might cut/add silence/amplify/fade portions of audio in a sample-accurate way.
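On a raw PCM buffer each of those operations is only a few lines. A sketch, again assuming a Float32Array of samples in [-1, 1] and integer sample indices (all names illustrative):

```javascript
// Silence samples in [t1, t2).
function silence(buf, t1, t2) {
  buf.fill(0, t1, t2);
}

// Multiply samples in [t1, t2) by a constant gain.
function amplify(buf, t1, t2, gain) {
  for (let t = t1; t < t2; t++) buf[t] *= gain;
}

// Linear fade from gain g1 to gain g2 over [t1, t2).
function fade(buf, t1, t2, g1, g2) {
  const n = t2 - t1;
  for (let i = 0; i < n; i++) {
    buf[t1 + i] *= g1 + (g2 - g1) * (i / n);
  }
}
```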

It would be OK if this support were somewhat restricted, e.g. only for certain uncompressed audio formats such as PCM WAVE.

Question: What do people think about making <audio> more like <canvas> as sketched above?

My understanding of HTMLMediaElement is that the currentTime, volume and playbackRate properties can be modified live.

So in a way Audio is already like Canvas: the developer modifies things on the fly. There are no automated animations/transitions like in SVG, for instance.

Doing a cross-fade in Audio works exactly the same way as in Canvas.
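For instance, a linear cross-fade is just two complementary gain ramps driven from a timer and applied through the live volume property. A sketch (the gain math is the runnable part; the element wiring in the comment is assumed):

```javascript
// Gains for a linear cross-fade at fraction x in [0, 1]:
// track A fades out while track B fades in.
function crossfadeGains(x) {
  return { a: 1 - x, b: x };
}

// Applied to two media elements from a timer, this would look like:
//   const g = crossfadeGains(elapsed / duration);
//   elemA.volume = g.a;
//   elemB.volume = g.b;
```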

And if you're thinking of special effects (e.g. delay, chorus, flanger, band-pass, ...), remember that with Canvas, advanced effects require trickery and compositing multiple Canvas elements.
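To illustrate, even a basic delay effect just mixes each sample with an earlier one, which is natural on a sample buffer but has no counterpart in the current <audio> API. A sketch on a plain Float32Array (names and parameters illustrative):

```javascript
// Simple delay: mix each sample with a copy from `delaySamples`
// earlier, scaled by `feedback`. Returns a new buffer.
function delay(buf, delaySamples, feedback) {
  const out = Float32Array.from(buf);
  for (let t = delaySamples; t < buf.length; t++) {
    out[t] += feedback * buf[t - delaySamples];
  }
  return out;
}
```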





--
Mathieu 'p01' HENRI
JavaScript developer, Opera Software ASA
