Re: [Musicpd-dev-team] MPD visualization stuff
On 2009/01/15 12:05, Vasily Stepanov vasily.stepa...@gmail.com wrote:
> I've studied some audio streaming problems and existent solutions, and I
> think that now you are talking about RTCP over RTP :) Why not use this
> protocol? It works over UDP and is designed for delivering real-time
> audio (and video) over the Internet.

I have no idea why you were trying to invent a new protocol ;-)

Of course, using an existing and standardized protocol is the best
solution I could imagine.

Max

___
Musicpd-dev-team mailing list
Musicpd-dev-team@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/musicpd-dev-team
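For reference, the fixed header RTP puts in front of every packet (RFC 3550) is only 12 bytes, and it already nails down the byte order the thread keeps worrying about. Below is a minimal sketch, not MPD or ccRTP/oRTP code, of writing and parsing that header with explicit network byte order; the struct field set is reduced to the parts relevant here (no CSRC list, marker bit, or extensions).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Reduced view of the 12-byte fixed RTP header (RFC 3550),
 * serialized by hand so the wire byte order is explicit
 * regardless of host endianness. */
struct rtp_header {
    uint8_t version;      /* always 2 */
    uint8_t payload_type; /* 7 bits */
    uint16_t sequence;    /* increments once per packet */
    uint32_t timestamp;   /* sampling instant of first octet */
    uint32_t ssrc;        /* synchronization source id */
};

static void rtp_header_write(uint8_t out[12], const struct rtp_header *h)
{
    uint16_t seq = htons(h->sequence);
    uint32_t ts = htonl(h->timestamp), ssrc = htonl(h->ssrc);

    out[0] = (uint8_t)(h->version << 6); /* V=2, P=0, X=0, CC=0 */
    out[1] = h->payload_type & 0x7f;     /* M=0 */
    memcpy(out + 2, &seq, 2);
    memcpy(out + 4, &ts, 4);
    memcpy(out + 8, &ssrc, 4);
}

static void rtp_header_read(struct rtp_header *h, const uint8_t in[12])
{
    uint16_t seq;
    uint32_t ts, ssrc;

    h->version = in[0] >> 6;
    h->payload_type = in[1] & 0x7f;
    memcpy(&seq, in + 2, 2);
    h->sequence = ntohs(seq);
    memcpy(&ts, in + 4, 4);
    h->timestamp = ntohl(ts);
    memcpy(&ssrc, in + 8, 4);
    h->ssrc = ntohl(ssrc);
}
```

The sequence number is what lets a visualization client detect lost or duplicated UDP packets, which is exactly the loss problem discussed later in this thread.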
Re: [Musicpd-dev-team] MPD visualization stuff
On Thu, Jan 15, 2009 at 2:22 PM, Max Kellermann m...@duempel.org wrote:
> On 2009/01/15 12:05, Vasily Stepanov vasily.stepa...@gmail.com wrote:
>> I've studied some audio streaming problems and existent solutions, and
>> I think that now you are talking about RTCP over RTP :) Why not use
>> this protocol? It works over UDP and is designed for delivering
>> real-time audio (and video) over the Internet.
>
> I have no idea why you were trying to invent a new protocol ;-)

My justification: I was trying to follow the KISS principle. I still
believe that we don't need to push the whole audio stream just for
visualization tasks. But for broader use in the future, it would be
better to implement something like RTP.

> Of course, using an existing and standardized protocol is the best
> solution I could imagine.

Well, I found two decent implementations of RTP:

- ccRTP (http://www.gnu.org/software/ccrtp/) (GPL-2)
- oRTP (http://www.linphone.org/index.php/eng/code_review/ortp) (LGPL)

ccRTP has C++ headers, but mpd and ncmpc don't use C++, right? oRTP has C
headers, but it carries some extended features for telephony stuff.

I think that despite ccRTP being the GNU library, we should try oRTP
first. What do you think?
Re: [Musicpd-dev-team] MPD visualization stuff
On 2009/01/10 23:57, Vasily Stepanov vasily.stepa...@gmail.com wrote:
> Currently I'm working on MPD visualization stuff.

Hi Vasily,

here's a rough review of the udpfeed plugin (commit f0fd78a911d6).

First: I like your coding style; you're using inline functions a lot, and
you're writing a lot of source code comments. However, don't use
CamelCase for new code, we're trying to get rid of it (probably you have
copy'n'pasted the names from other output plugins).

I'd like to see better patch descriptions. "udpfeed_plugin.c added" just
doesn't seem enough - what kind of plugin is it? What does it do?

I can't find documentation about the protocol you were implementing.
Looks like you designed a brand new protocol for MPD.

As long as your patches are not submitted, you may fold corrections into
the first patch; e.g. the GLib conversion patch may just be folded into
the first patch.

Some comments about implementation details:

  bool udpserver_kill;

That should be volatile, otherwise gcc may play optimization tricks with
you.

  size_t udpfeed_client_size = sizeof(udpfeed_client);

This variable is initialized only once, but used several times. This
breaks if an IPv4 client is used first, and then another client using
IPv6.

  g_mutex_lock(ufd->playChunkMutex);
  [...]
  if (sendto(ufd->sock, ufd->buf.data, ufd->buf.offset, MSG_DONTWAIT,
             udpfeed_client, udpfeed_client_size) == -1) {
  [...]
  g_mutex_unlock(ufd->playChunkMutex);

Don't call blocking functions while holding a mutex!

  * I will wait for incoming connection just one second
  [...]
  * And if there is no incoming data I will check kill state

I don't like fixed sleep times. poll() for an event with unlimited
timeout, and let the kernel wake you up.

  new_addrinfo(const char *service, const struct addrinfo *hints,
               struct addrinfo **res)
  free_addrinfo(struct addrinfo **res)
  new_socket(int domain, int type, int protocol, int *sock)
  free_socket(int *sock)

Why so many wrapper functions? Why are you using getaddrinfo(), anyway?
Looks like a complicated way to obtain INADDR_ANY.

I also don't like the protocol you designed. It allows only one single
client, and requires the client to request each chunk. If a client
doesn't send the PCM request quickly enough, chunks are lost. Also, your
protocol isn't byte order safe.

My idea: introduce a new MPD command "udpfeed". A MPD client would send:

  udpfeed 6543

MPD would then send all PCM chunks to this client on port 6543 for, say,
60 seconds, after which the client has to refresh the subscription. Any
number of clients can be subscribed at a time. The same code could be
used to send PCM data to a multicast destination.

Another huge advantage: we could do that without threading and without
polling and without buffering! Of course, we'd have to make sure that we
get the byte order thing right. We need to define (and document) a proper
packet header with a well-defined byte order.

Your code looks very interesting altogether! Doesn't solve any of my own
problems, but it's a nice hack ;)

Max
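The subscription scheme Max proposes ("udpfeed 6543", refreshed every 60 seconds, any number of clients) can be sketched as simple bookkeeping. Everything below is illustrative: the function names, the fixed-size table, and the port-only client key are assumptions for the sketch, not actual MPD code.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <time.h>

#define UDPFEED_MAX_CLIENTS 16
#define UDPFEED_TTL 60 /* seconds until the client must refresh */

/* One subscribed client; a real implementation would store the
 * full sockaddr, not just a port. */
struct subscriber {
    bool used;
    int port;
    time_t expires;
};

static struct subscriber subscribers[UDPFEED_MAX_CLIENTS];

/* Handle "udpfeed <port>": refresh an existing entry or claim a
 * free slot.  Returns false only if the table is full. */
static bool udpfeed_subscribe(int port, time_t now)
{
    struct subscriber *free_slot = NULL;

    for (int i = 0; i < UDPFEED_MAX_CLIENTS; ++i) {
        if (subscribers[i].used && subscribers[i].port == port) {
            subscribers[i].expires = now + UDPFEED_TTL;
            return true;
        }
        if (!subscribers[i].used && free_slot == NULL)
            free_slot = &subscribers[i];
    }

    if (free_slot == NULL)
        return false;
    free_slot->used = true;
    free_slot->port = port;
    free_slot->expires = now + UDPFEED_TTL;
    return true;
}

/* Called once per PCM chunk: count live send targets, dropping
 * entries whose 60-second subscription has lapsed. */
static int udpfeed_live_clients(time_t now)
{
    int n = 0;
    for (int i = 0; i < UDPFEED_MAX_CLIENTS; ++i) {
        if (!subscribers[i].used)
            continue;
        if (subscribers[i].expires <= now)
            subscribers[i].used = false; /* subscription lapsed */
        else
            ++n;
    }
    return n;
}
```

The point of the TTL is that MPD never has to detect dead clients: a client that stops refreshing simply ages out, so no per-chunk request (and hence no request/response race that loses chunks) is needed.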
Re: [Musicpd-dev-team] MPD visualization stuff
On Tue, Jan 13, 2009 at 10:24 PM, Max Kellermann m...@duempel.org wrote:
> On 2009/01/10 23:57, Vasily Stepanov vasily.stepa...@gmail.com wrote:
>> Currently I'm working on MPD visualization stuff.
>
> Hi Vasily, here's a rough review of the udpfeed plugin (commit
> f0fd78a911d6).
>
> First: I like your coding style; you're using inline functions a lot,
> and you're writing a lot of source code comments. However, don't use
> CamelCase for new code, we're trying to get rid of it (probably you
> have copy'n'pasted the names from other output plugins).

Great :) About CamelCase... do you have any code formatting standards?
Maybe in your wiki... I didn't find them, so of course I was trying to
copy'n'paste existing function names and variables.

> I'd like to see better patch descriptions. "udpfeed_plugin.c added"
> just doesn't seem enough - what kind of plugin is it? What does it do?

Ok, I'll try to write more detailed descriptions.

> I can't find documentation about the protocol you were implementing.
> Looks like you designed a brand new protocol for MPD.

First of all I'd like to implement a stable protocol, and only then will
I write up how it works.

> As long as your patches are not submitted, you may fold corrections
> into the first patch; e.g. the GLib conversion patch may just be folded
> into the first patch.

Yes, if possible, I'd like to use the git.musicpd.org repository to
develop my plugin, so there will be some more pushes before the final
submission.

> Some comments about implementation details:
>
>   bool udpserver_kill;
>
> That should be volatile, otherwise gcc may play optimization tricks
> with you.

I've heard about that and of course forgot it :)

>   size_t udpfeed_client_size = sizeof(udpfeed_client);
>
> This variable is initialized only once, but used several times. This
> breaks if an IPv4 client is used first, and then another client using
> IPv6.

My mistake!

>   g_mutex_lock(ufd->playChunkMutex);
>   [...]
>   if (sendto(ufd->sock, ufd->buf.data, ufd->buf.offset, MSG_DONTWAIT,
>              udpfeed_client, udpfeed_client_size) == -1) {
>   [...]
>   g_mutex_unlock(ufd->playChunkMutex);
>
> Don't call blocking functions while holding a mutex!

I use MSG_DONTWAIT - what kind of blocking are you talking about? Maybe
a very slow sendto()? So here I would have to use some local buf.data and
hold the mutex only while memcpy()ing from ufd->buf.data to the local
buf.data... is that good at all? :) I've never seen this before.

> * I will wait for incoming connection just one second
> [...]
> * And if there is no incoming data I will check kill state
>
> I don't like fixed sleep times. poll() for an event with unlimited
> timeout, and let the kernel wake you up.

Sorry, but how should I cancel the UDP server thread in this case? For
example when the main program terminates... In the pthreads API there was
a cancel function, but in glibc I didn't find it.

>   new_addrinfo(const char *service, const struct addrinfo *hints,
>                struct addrinfo **res)
>   free_addrinfo(struct addrinfo **res)
>   new_socket(int domain, int type, int protocol, int *sock)
>   free_socket(int *sock)
>
> Why so many wrapper functions?

I think that error log messages should be somewhere far away from the
code flow. I just forgot to make my functions inline.

> Why are you using getaddrinfo(), anyway? Looks like a complicated way
> to obtain INADDR_ANY.

I thought that getaddrinfo() is the new-way-main-stream-cool-function and
everybody should use it instead of anything else. Is this not true?

> I also don't like the protocol you designed. It allows only one single
> client, and requires the client to request each chunk. If a client
> doesn't send the PCM request quickly enough, chunks are lost.

Why only one client? It works with more than one client :) (I'm not
talking about millions, but more than one.)

Yes, we can lose some chunks, and if a client is impatient, it will
receive the same chunks more than once. So what? This is not an
audio-streaming server; if you want an audio stream you can use the shout
plugin. For visualization tasks I don't worry about missed chunks - I
will lose some, but who will see it? And don't forget about the
shortcomings of UDP in general.

> Also your protocol isn't byte order safe.

Yes, that's true, I will fix it.

> My idea: introduce a new MPD command "udpfeed". A MPD client would
> send:
>
>   udpfeed 6543
>
> MPD would then send all PCM chunks to this client on port 6543 for,
> say, 60 seconds, after which the client has to refresh the
> subscription. Any number of clients can be subscribed at a time. The
> same code could be used to send PCM data to a multicast destination.
>
> Another huge advantage: we could do that without threading and without
> polling and without buffering! Of course, we'd have to make sure that
> we get the byte order thing right. We need to define (and document) a
> proper packet header with a well-defined byte order.

Looks like you want to create and
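The copy-then-send pattern Vasily asks about (hold the mutex only while memcpy()ing into a local buffer, then do the send with no lock held) is a standard way to keep syscalls out of critical sections. A minimal sketch follows; the names are illustrative, POSIX mutexes stand in for the plugin's GMutex, and a plain write() on an arbitrary descriptor stands in for sendto().

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

/* Shared chunk buffer, filled by the audio thread under the mutex. */
struct chunk {
    pthread_mutex_t mutex;
    unsigned char data[4096];
    size_t len;
};

static ssize_t chunk_send(struct chunk *c, int fd)
{
    unsigned char copy[4096];
    size_t len;

    /* Critical section: just a memcpy, never a syscall. */
    pthread_mutex_lock(&c->mutex);
    len = c->len;
    memcpy(copy, c->data, len);
    pthread_mutex_unlock(&c->mutex);

    /* The potentially slow part runs with no lock held; the audio
     * thread is free to overwrite c->data with the next chunk. */
    return write(fd, copy, len);
}
```

The cost is one extra 4 KiB copy per chunk; the gain is that a slow or blocked send can never stall the thread that produces audio data, which is what Max's "don't call blocking functions while holding a mutex" rule is about.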
Re: [Musicpd-dev-team] MPD visualization stuff
On 2009/01/14 03:37, Vasily Stepanov vasily.stepa...@gmail.com wrote:
> Great :) About CamelCase... do you have any code formatting standards?
> Maybe in your wiki... I didn't find them, so of course I was trying to
> copy'n'paste existing function names and variables.

No, we don't have it documented. Try not to copy'n'paste from old code
;-) Examples for new code: notify.c, pcm_buffer.h, icy_metadata.c.

>> As long as your patches are not submitted, you may fold corrections
>> into the first patch; e.g. the GLib conversion patch may just be
>> folded into the first patch.
>
> Yes, if possible, I'd like to use the git.musicpd.org repository to
> develop my plugin, so there will be some more pushes before the final
> submission.

Suggestion: have you tried stgit yet?

>> Some comments about implementation details:
>>
>>   bool udpserver_kill;
>>
>> That should be volatile, otherwise gcc may play optimization tricks
>> with you.
>
> I've heard about that and of course forgot it :)

Avuton reminded me of this Linus Torvalds rant:
http://lkml.org/lkml/2006/7/6/159

I'm not suggesting volatile because it's correct here - it is not! In
your case, recvfrom() establishes a memory barrier, and volatile would be
superfluous. Developers who are not aware of the uncertainties of the C
memory model are better off using volatile (until they learn it
properly). So my volatile suggestion was wrong; you don't need to add it,
I forgot about the recvfrom() memory barrier.

>> Don't call blocking functions while holding a mutex!
>
> I use MSG_DONTWAIT - what kind of blocking are you talking about? Maybe
> a very slow sendto()?

You're right, I didn't notice the MSG_DONTWAIT. I'm not 100% sure that
really makes the code un-blockable, but I think it's good enough in any
case. If we really switch to non-threaded code, all of the locking code
will go away anyway.

>> * I will wait for incoming connection just one second
>> [...]
>> * And if there is no incoming data I will check kill state
>>
>> I don't like fixed sleep times. poll() for an event with unlimited
>> timeout, and let the kernel wake you up.
>
> Sorry, but how should I cancel the UDP server thread in this case? For
> example when the main program terminates... In the pthreads API there
> was a cancel function, but in glibc I didn't find it.

Side note: glibc is a whole different project; you mean GLib:

  http://www.gnu.org/software/libc/
  http://library.gnome.org/devel/glib/

GLib has no thread cancellation because it is non-portable. If you want
to send a pollable signal to a thread, create an anonymous pipe, and
close it when the thread should exit.

Thinking about it again, you could as well install a polling event in the
GLib main loop, which is then run by the main thread. See:
http://library.gnome.org/devel/glib/2.18/glib-IO-Channels.html#g-io-add-watch

This way, we don't need yet another thread, and thus we don't have to
care about killing it.

> I think that error log messages should be somewhere far away from the
> code flow. I just forgot to make my functions inline.

The inline keyword is merely a suggestion, and gcc's default decisions
are quite good, even without inline. If you forget it, don't worry, gcc
will take care of that.

> I thought that getaddrinfo() is the new-way-main-stream-cool-function
> and everybody should use it instead of anything else. Is this not true?

It's the new-way-main-stream-cool-function, yes. "Instead of anything
else" - that depends. However, it's not portable. :-( It's ok to use it
here, I was just wondering what led you to that decision. If your code
should run on Windows one day, you have to implement some fallback. Or we
may copy libmpdclient's portable resolver library as well.

>> I also don't like the protocol you designed. It allows only one single
>> client, and requires the client to request each chunk. If a client
>> doesn't send the PCM request quickly enough, chunks are lost.
>
> Why only one client? It works with more than one client :) (I'm not
> talking about millions, but more than one.)

Oh, right! The thread does not delete the buffer when it has been sent.
However, that's even worse...

> Yes, we can lose some chunks, and if a client is impatient, it will
> receive the same chunks more than once. So what? This is not an
> audio-streaming server; if you want an audio stream you can use the
> shout plugin. For visualization tasks I don't worry about missed chunks
> - I will lose some, but who will see it? And don't forget about the
> shortcomings of UDP in general.

Lost chunks are not a big deal, even for audio streaming. It's better to
lose a chunk than to do all the retries implied by the TCP protocol,
which makes TCP a rather bad protocol for real-time streaming. I never
understood why the Shoutcast guys chose TCP (and even HTTP!) instead of
UDP; it's a lousy choice.

Problem with your protocol: (a) the protocol is designed to lose chunks
(UDP is already a lossy protocol, and you add more loss to it!) (b) the
protocol is designed to
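The "anonymous pipe instead of thread cancellation" idea Max describes is the classic self-pipe trick: the server thread poll()s both the UDP socket and the read end of a pipe with an unlimited timeout, and the main thread wakes it by closing (or writing to) the pipe's write end. A minimal sketch, with illustrative names rather than actual MPD code:

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

enum wake_reason { WAKE_SOCKET, WAKE_QUIT };

/* Block until either the socket has data or the quit pipe fires.
 * No fixed sleep, no kill-flag polling: the kernel wakes us. */
static enum wake_reason wait_for_work(int sock_fd, int quit_fd)
{
    struct pollfd pfd[2] = {
        { .fd = sock_fd, .events = POLLIN },
        { .fd = quit_fd, .events = POLLIN },
    };

    /* -1 timeout: sleep until something actually happens */
    while (poll(pfd, 2, -1) < 0)
        ; /* interrupted by a signal (EINTR): just retry */

    /* Readable, or the write end was closed: time to shut down. */
    if (pfd[1].revents & (POLLIN | POLLHUP))
        return WAKE_QUIT;
    return WAKE_SOCKET;
}
```

Compared to the one-second recvfrom() timeout, shutdown latency drops from "up to a second" to "immediate", and the thread burns no CPU while idle. As Max notes, a g_io_add_watch() on the main loop makes even this unnecessary by removing the extra thread entirely.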