On 2009/01/14 03:37, Vasily Stepanov <vasily.stepa...@gmail.com> wrote:
> Great :)
> About CamelCase... do you have any code formatting standards?
> Maybe in your wiki... I didn't find them, so of course
> I tried to copy-paste existing function names and variables.

No, we don't have them documented.  Try not to copy'n'paste from
"old" code ;-)

Examples for "new" code: notify.c, pcm_buffer.h, icy_metadata.c.

> > As long as your patches are not submitted, you may fold corrections
> > into the first patch.  e.g. the GLib conversion patch may just be
> > folded into the first patch.
> 
> Yes, if it's possible, I'd like to use the git.musicpd.org repository
> to develop my plugin, so there would be some more pushes before
> the final submission.

Suggestion: have you tried "stgit" yet?

> > Some comments about implementation details:
> >
> > >          bool udpserver_kill;
> >
> > That should be volatile, otherwise gcc may play optimization tricks
> > with you.
> 
> I've heard about that and of course forgot it :)

Avuton reminded me of this Linus Torvalds rant:

 http://lkml.org/lkml/2006/7/6/159

I'm not suggesting volatile because it's correct here - it is not!  In
your case, recvfrom() establishes a memory barrier, and "volatile"
would be superfluous.  Developers who are not aware of the
uncertainties of the C memory model are better off using volatile
(until they learn it properly).  So my "volatile" suggestion was
wrong: you don't need to add it; I forgot about the recvfrom() memory
barrier.
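
To illustrate (an untested sketch; only the udpserver_kill name is
from your patch, the loop shape is my invention):

#include <stdbool.h>
#include <sys/types.h>
#include <sys/socket.h>

bool udpserver_kill;  /* global; set by another thread to stop us */

static void
udp_server_loop(int fd)
{
	char buffer[4096];

	while (!udpserver_kill) {
		/* recvfrom() is an opaque external call: since
		   udpserver_kill has external linkage, gcc must
		   assume the call may modify it, and re-reads the
		   flag from memory on every iteration - the same
		   practical effect "volatile" would have had here */
		ssize_t nbytes = recvfrom(fd, buffer, sizeof(buffer),
					  0, NULL, NULL);
		if (nbytes < 0)
			break;

		/* ... handle the datagram ... */
	}
}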

> > Don't call blocking functions while holding a mutex!
> 
> I use MSG_DONTWAIT; what kind of blocking are you talking about?
> Maybe the very slow sendto()?

You're right, I didn't notice the MSG_DONTWAIT.  I'm not 100% sure
that really makes the code non-blocking, but I think it's good enough
in any case.  If we really switch to non-threaded code, all of the
locking code will go away anyway.
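
For reference, the pattern I assume you are using (a sketch; the
function name is made up):

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

static ssize_t
udp_try_recv(int fd, void *buffer, size_t length)
{
	/* with MSG_DONTWAIT, recvfrom() never sleeps: if no datagram
	   is queued, it fails immediately with EAGAIN/EWOULDBLOCK,
	   so a mutex held around this call is released again quickly */
	ssize_t nbytes = recvfrom(fd, buffer, length,
				  MSG_DONTWAIT, NULL, NULL);
	if (nbytes < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
		return 0;  /* nothing to read right now */
	return nbytes;
}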

> > > * I will wait for incoming connection just one second
> > [...]
> > > * And if there is no incoming data I will check kill state
> >
> > I don't like fixed sleep times.  poll() for an event with unlimited
> > timeout, and let the kernel wake you up.
> 
> Sorry, but how should I cancel the UDP server thread in this case?
> For example, when the main program terminates...
> In the pthreads API there was a cancel function,
> but in glibc I didn't find it.

Side note: glibc is a whole different project; you mean GLib:

 http://www.gnu.org/software/libc/
 http://library.gnome.org/devel/glib/

GLib has no thread cancellation because it is non-portable.  If you
want to send a pollable signal to a thread, create an anonymous pipe,
and close it when the thread should exit.
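
Sketch of that pipe trick (untested; the main thread keeps the write
end of the pipe and simply close()s it to stop the server thread):

#include <poll.h>

static void
udp_server_wait(int udp_fd, int quit_fd)
{
	/* quit_fd is the read end of a pipe(); when the write end is
	   closed, the read end reports end-of-file, which poll()
	   delivers as a readable event - no timeout needed */
	struct pollfd fds[2] = {
		{ .fd = udp_fd, .events = POLLIN },
		{ .fd = quit_fd, .events = POLLIN },
	};

	for (;;) {
		if (poll(fds, 2, -1) < 0)
			return;  /* real code would check errno here */

		if (fds[1].revents != 0)
			return;  /* pipe was closed: exit the thread */

		if (fds[0].revents & POLLIN) {
			/* ... recvfrom() and handle the datagram ... */
		}
	}
}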

Thinking about it again, you could as well install a polling event in
the GLib main loop, which is then run by the main thread.  See:

 http://library.gnome.org/devel/glib/2.18/glib-IO-Channels.html#g-io-add-watch

This way, we don't need yet another thread, and thus we don't have to
care about killing it.
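
Roughly like this (untested sketch, but g_io_channel_unix_new() and
g_io_add_watch() are the real GLib calls):

#include <glib.h>

static gboolean
udp_server_event(GIOChannel *source, GIOCondition condition,
		 gpointer user_data)
{
	/* called by the GLib main loop whenever a datagram arrives;
	   read it with recvfrom() on g_io_channel_unix_get_fd(source) */

	return TRUE;  /* keep the watch installed */
}

static void
udp_server_register(int udp_fd)
{
	GIOChannel *channel = g_io_channel_unix_new(udp_fd);
	g_io_add_watch(channel, G_IO_IN, udp_server_event, NULL);
	g_io_channel_unref(channel);  /* the watch holds a reference */
}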

> I think that error log messages should be somewhere far away from the
> code flow.  I just forgot to make my functions inline.

The "inline" keyword is merely a "suggestion", and gcc's default
decisions are quite good, even without "inline".  If you forget it,
don't worry, gcc will take care of that.

> I thought that getaddrinfo() is a new-way-main-stream-cool-function
> and everybody should use it instead of anything else. This is not
> true?

It's the new-way-main-stream-cool-function, yes.  Whether to use it
instead of anything else - that depends.  However, it's not portable.
:-(

It's ok to use it here; I was just wondering what led you to that
decision.  If your code should run on Windows one day, you will have
to implement some fallback.  Or we may just copy libmpdclient's
portable resolver library.
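
For completeness, the getaddrinfo() way of binding a UDP socket looks
roughly like this (untested sketch; only the first result is tried,
and this is the part a Windows fallback would have to replace):

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int
udp_listen(const char *service)
{
	struct addrinfo hints, *ai;
	int fd;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6 */
	hints.ai_socktype = SOCK_DGRAM;  /* UDP */
	hints.ai_flags = AI_PASSIVE;     /* wildcard address for bind() */

	if (getaddrinfo(NULL, service, &hints, &ai) != 0)
		return -1;

	fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
	if (fd >= 0 && bind(fd, ai->ai_addr, ai->ai_addrlen) < 0) {
		close(fd);
		fd = -1;
	}

	freeaddrinfo(ai);
	return fd;
}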

> > I also don't like the protocol you designed.  It allows only one
> > single client, and requires the client to request each chunk.  If a
> > client doesn't send the "PCM" request quickly enough, chunks are lost.
> 
> Why only one client? It works with more than one client :)
> (I'm not talking about millions, but more than one)

Oh, right!  The thread does not delete the buffer after it has been
sent.  However, that's even worse...

> Yes, we can lose some chunks, and if a client is impatient, it will
> receive the same chunks more than once.  So what?  This is not an
> audio-streaming server; if you want an audio stream, you can use the
> shout plugin.  In visualization tasks, I don't worry about missed
> chunks: I will lose some, but who will notice?  And don't forget
> that UDP is lossy anyway.

Lost chunks are not a big deal, even for audio streaming.  It's better
to lose a chunk than to do all the retries implied by the TCP
protocol, which makes TCP a rather bad protocol for real-time
streaming.  I never understood why the Shoutcast guys chose TCP (and
even HTTP!) instead of UDP; it's a lousy choice.

The problems with your protocol: (a) it is designed to lose chunks
(UDP is already a lossy protocol, and you add more loss on top of
it!); (b) it is designed to send duplicate chunks.

I say: why not use our new protocol for real audio streaming?

Now let's talk about your definition of "impatient".  What do you mean
by that?  What would a client have to do to never receive duplicate
chunks (and to never lose chunks)?

I have an exact idea of where this discussion is going.  I could
explain it to you now, but for the moment I prefer that you attempt
to answer that question yourself, and in doing so learn what you
forgot to design into the protocol ;-)

> Looks like you want to create an audio-udp-stream integral output!
> :)

Yes!  It's easier to implement, more robust, doesn't send duplicate
chunks, and doesn't lose chunks by design.  It also saves bandwidth,
because only half the number of packets is transferred.

Its only problem is rate limiting.
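
The sending side could be as small as this (sketch; client
registration and the rate limiter are the parts left out):

#include <stddef.h>
#include <sys/socket.h>

struct udp_client {
	struct sockaddr_storage address;
	socklen_t address_length;
};

/* push model: each decoded chunk is sent exactly once to every
   registered client - no per-chunk requests, no duplicates, and no
   loss added on top of what UDP itself already loses */
static void
udp_broadcast_chunk(int fd, const struct udp_client *clients,
		    unsigned num_clients,
		    const void *chunk, size_t length)
{
	for (unsigned i = 0; i < num_clients; ++i)
		sendto(fd, chunk, length, MSG_DONTWAIT,
		       (const struct sockaddr *)&clients[i].address,
		       clients[i].address_length);
}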

> Anyway, it is an interesting idea to use a new command, but first
> I'd like to finish what I've started... and then look around :)

Sure.  I just want to give you some food for thought, so you know
early enough when you're wasting time on ideas that are too
complicated.

Max
