Hi, thanks for your answer. Our technician says:

He put the script with clock.assign_new and buffer in place, but there
were still second-long dropouts on the stream.

Do you have an idea where these second-long dropouts come from?

Greetings,
Daniel



> Romain Beauxis <romain.beau...@gmail.com> wrote on December 2, 2018 at 22:18:
> 
> 
> Hi all,
> 
> On Sat, Dec 1, 2018 at 08:31, Daniel Kielczewski <i...@slugstyle.com> wrote:
> >
> > Hello, we use Liquidsoap for our radio station (sthoerfunk.de).
> >
> > We have a little problem:
> >
> > We have a total of 4 output streams. The streams run on separate
> > servers. If one server crashes and its stream fails, the other 3
> > streams fail too. Please let us know how we need to configure
> > Liquidsoap to keep the other streams running.
> 
> I believe that you are experiencing a typical case where the network
> lag from that one source causes a lag in the whole streaming thread,
> leading to a disconnect of the other sources.
> 
> The solution in this case is to assign a new clock to each output,
> making them independent from each other. Here's how you can do it:
> 
> def assign_new_clock(s) =
>   # Buffer the source, then detach it onto its own clock.
>   clock.assign_new(id=source.id(s), [buffer(s)])
> end
> 
> # Now the outputs:
> output.icecast(<params 1>, fallible=true, assign_new_clock(s))
> output.icecast(<params 2>, fallible=true, assign_new_clock(s))
> output.icecast(<params 3>, fallible=true, assign_new_clock(s))
> output.icecast(<params 4>, fallible=true, assign_new_clock(s))
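> 
> One note on buffer defaults (an untested suggestion): in Liquidsoap
> 1.3 the buffer operator keeps about 1 second of audio by default
> (10 seconds maximum), so network lag longer than that can still cause
> audible gaps. You can pass explicit values, for example:
> 
> def assign_new_clock(s) =
>   # Pre-buffer 5 seconds (capped at 15) before detaching the source
>   # onto its own clock; these durations are illustrative guesses,
>   # tune them for your setup.
>   clock.assign_new(id=source.id(s), [buffer(buffer=5., max=15., s)])
> end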
> 
> I haven't tested this code so let us know if that works for you.
> 
> The full documentation about clocks is here:
> https://www.liquidsoap.info/doc-1.3.4/clocks.html
> 
> Romain
> 
> 
> _______________________________________________
> Savonet-users mailing list
> Savonet-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/savonet-users

