Thanks for the response. If you don't believe this is a good use, I won't bother further. GStreamer, then.

Regards,

-brian

On 9/1/2019 11:58 PM, Arun Raghavan wrote:
Hey Brian,

On Mon, 2 Sep 2019, at 11:45 AM, Brian Bulkowski wrote:
Hey Arun,

Thanks for responding.

I usually point folks to the sync-playback.c test that we have as a starting 
point, but this is certainly less than ideal.
"less than ideal" is quite the politeness. Let's see....

1) it doesn't cover files, nor does it cover files of different types 
(mp3 vs. wav)

Files are very good because they always work locally, which is critical
in art installations where you are being paid to deliver an experience,
regardless of whether there is connectivity. Unlike streams. Streams are
useless.
Not that there are good ways to discover this, but IMO PulseAudio is not the 
best fit for either of these cases. PulseAudio makes sense when you have PCM 
data that you want to get out of some output, and you want fairly low-level 
access to the audio API.

IMO it is better to use a higher-level API such as GStreamer, where you can easily say "play 
this file/URI/..." and have things "just work". You have a volume API there, as well as a 
way to drop down to a lower level and select devices, etc.

That said, you'd still likely want it to go through PulseAudio, and there'd 
still be the same hoops to jump through to get the seamless headless setup.

(As an aside, the stream abstraction is what it is because that is how the 
audio hardware works -- you open it, get it started, and provide a stream of 
audio, then stop it and close it when you're done. The low-level APIs do not 
have a notion of playing a file, just a continuous stream of audio.)
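To make the "continuous stream" model above concrete, here's a minimal sketch that generates raw PCM samples the way a low-level audio API would consume them. The audio backend itself is left out; the final comment shows how the resulting file could be handed to PulseAudio's `pacat` tool. The tone frequency and file name are illustrative choices, not anything from the thread.

```python
import math
import struct

# Generate two seconds of 16-bit mono PCM at 44.1 kHz -- the kind of
# continuous sample stream the low-level APIs expect. A 440 Hz tone at
# ~30% amplitude is used purely as placeholder content.
RATE = 44100
SECONDS = 2
FREQ = 440.0

def pcm_stream(rate=RATE, seconds=SECONDS, freq=FREQ):
    for n in range(rate * seconds):
        sample = int(32767 * 0.3 * math.sin(2 * math.pi * freq * n / rate))
        yield struct.pack("<h", sample)  # little-endian signed 16-bit

with open("tone.raw", "wb") as f:
    for frame in pcm_stream():
        f.write(frame)

# Playback is then literally "open, feed samples, close", e.g.:
#   pacat --format=s16le --rate=44100 --channels=1 tone.raw
```
There is no "play this file" call at this level -- decoding an mp3 or wav into such a stream is exactly the part a higher-level framework handles for you.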

2) it doesn't cover changing sounds in response to an "interrupt", that
is, pressing a button on an art installation.
That's not really in an audio API's purview.

3) it doesn't cover multiple "layers", that is, different sound
experiences being added or subtracted by buttons.
Same as above, I don't think the audio API is the right place to document how 
to do this.

I don't have a good answer for what the right place to document this is, but it's 
basically "how to write an event-driven application that plays sound". The 
first part (the event loop) can be done in a number of ways (roll your own, GLib, Qt), 
and the second (playing sound) also has a number of choices, as I described above.
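The shape of such an event-driven application can be sketched in a few lines. This is only the event-to-state logic: the playback hooks (`start_layer`/`stop_layer`) are hypothetical stubs you would back with GStreamer, the PulseAudio simple API, or similar, and the button/layer names are invented for illustration.

```python
from queue import Queue

class SoundBoard:
    """Button presses toggle named sound 'layers' on and off."""

    def __init__(self):
        self.active = set()      # names of currently playing layers
        self.events = Queue()    # button presses arrive here

    def start_layer(self, name):
        pass  # stub: begin looping this layer via your audio backend

    def stop_layer(self, name):
        pass  # stub: stop that layer's stream

    def press(self, button):
        self.events.put(button)  # e.g. called from a GPIO interrupt handler

    def run_once(self):
        # Toggle the layer named by the next queued button press.
        button = self.events.get()
        if button in self.active:
            self.active.remove(button)
            self.stop_layer(button)
        else:
            self.active.add(button)
            self.start_layer(button)

board = SoundBoard()
for b in ("rain", "thunder", "rain"):  # three simulated presses
    board.press(b)
    board.run_once()
# "rain" was toggled on then off again; only "thunder" remains active.
```
In a real installation the loop would block on the queue (or a GLib/Qt main loop) rather than iterate over a fixed list, but the add/subtract-layers logic is the same.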

PulseAudio is great for all of this, just that the example code is, to
put it in your polite fashion, less than ideal, or where I come from, an
embarrassment.
Sure. The project is driven more or less entirely by volunteer bandwidth, and 
we each pick the points of focus that best match our interests and the time we 
have after dealing with personal and professional commitments, etc.

Documentation unfortunately loses priority in this, as the to-do list is (or at 
least feels) more or less infinite. Your criticism is valid, and welcome. 
Patches would also be most welcome (but not providing them does not invalidate 
your criticism).

The code I've written allows external REST requests, and allows the
player to make REST requests to find current state (REST being the
current lingua franca of the internet). It also allows both volume changes
and source file changes. Hopefully it will be committed tomorrow; I'm putting
it through some extra tests. I will post it with a permissive license.
Cheers,
Arun
_______________________________________________
pulseaudio-discuss mailing list
pulseaudio-discuss@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss
