Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
Dave Taht writes:

>> So: 1. We really should rethink how timing-sensitive algorithms are expressed, and it isn't gonna be good to base them on semaphores and threads that run at random rates. That means a very different OS conceptual framework. Can this share with, say, the Linux we know and love - yes, the hardware can be shared. One should be able to dedicate virtual processors that are not running Linux processes, but instead another computational model (dataflow?).
>
> Linux switched to an EDF model for networking in 5.0

Not entirely. There's EDT scheduling, and the TCP stack is mostly switched over, I think. But as always, Linux evolves piecemeal :)

>> 2. EBPF is interesting, because it is more secure, and is again focused on running code at kernel level, event-driven. I think it would be a seriously difficult lift to get it to the point where one could program the networked media processing in BPF.
>
> But there is huge demand for it, so people are writing way more in it than I ever thought possible... or desirable.

Tell me about it. We have seen a bit of interest in combining eBPF with realtime, though. With the upstreaming of the realtime code, support has landed for running eBPF even on realtime kernels. And we're starting to see some interest in looking specifically at latency bounds for network processing (for TSN), including XDP. Nothing concrete yet, though.

-Toke
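For context on the EDT ("earliest departure time") model mentioned above: rather than queueing packets and deciding later, the stack stamps each packet with the time it should leave, and pacing falls out of honoring those stamps. Since Linux 4.19 userspace can participate via the SO_TXTIME socket option. A minimal sketch in C, assuming a connected UDP socket and a qdisc (sch_etf or sch_fq) configured to honor transmit times; the helper names enable_txtime and send_at are illustrative, not an existing API, and error handling is omitted:

    #include <linux/net_tstamp.h>   /* struct sock_txtime (Linux >= 4.19) */
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>

    /* One-time setup: tell the kernel this socket carries per-packet
     * departure times, expressed on CLOCK_TAI. */
    static int enable_txtime(int fd)
    {
        struct sock_txtime cfg = { .clockid = CLOCK_TAI, .flags = 0 };
        return setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg));
    }

    /* Send one datagram, asking the qdisc to release it no earlier than
     * txtime_ns (absolute nanoseconds on CLOCK_TAI). */
    static ssize_t send_at(int fd, const void *buf, size_t len,
                           uint64_t txtime_ns)
    {
        struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
        char control[CMSG_SPACE(sizeof(uint64_t))] = { 0 };
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = control, .msg_controllen = sizeof(control),
        };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type  = SCM_TXTIME;
        cm->cmsg_len   = CMSG_LEN(sizeof(uint64_t));
        memcpy(CMSG_DATA(cm), &txtime_ns, sizeof(txtime_ns));
        return sendmsg(fd, &msg, 0);
    }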
Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
Regarding EDF.

I've been pushing folks to move latency-sensitive computing in ALL OS's to a version of EDF since about 1976. This was when I was in grad school working on distributed computing on LANs. In fact, it is where I got the idea for my Ph.D. thesis (completed in 1978), which pointed out a bigger idea - that getting ACID consistency [ACID hadn't been invented then as a term; we called it atomic actions] on data in a distributed system being processed by concurrent distributed transactions can be done by using timestamps that behave like the "deadlines" in EDF. In fact, the scheduling of code in my thesis was a generalized version of EDF, approximated because of the impossibility of perfect synchronization.

The Croquet system, which was a real-time, edge-based decentralized system with no central server, that we demonstrated with a Second Life-style virtual world that worked entirely on a set of laptops that could be across the country from each other, was based on an OS implemented in a variant of the Squeak programming language, where the scheduling and object model was not process based, but message based, with replicated computation synchronized via a shared "timestamp" that was used for execution scheduling (essentially distributed EDF). The latency requirements for this distributed virtual world were on the order of 100 msec simultaneity for mouse clicks affecting all participating nodes across the country in a virtual 3D world, with sound, etc.

Croquet was built in 2 years by 3 people (starting from scratch). And scheduling was never a problem, nor was variable network delay (our protocol was based on UDP frames synchronized by the same timestamps used to synchronize each object method execution).

The operating system model is one I created within that modified Squeak environment as part of its base "interpreter", which wasn't a loop, but a scheduler using EDF.

To make this work properly, the programming model has to be unified around this kind of scheduling.

And here's why I am mentioning this. To put EDF *only* into the networking stack, but leave the userspace application living with the stupid Linux timesharing scheduler, optimized for people typing commands on terminals every few seconds and running batch compilation, is the *worst of all possible ways to use EDF*.

Because it creates a huge mess bridging those two ideas.

Croquet is a much more complicated thing than a teleconferencing system, because it actually lets end users write simple programs that control the user interactive experience, 30 frames per second across the entire US, replicated on each computer, in the Squeak variant of Smalltalk. And we did it with 3 coders in a couple of years. (Yes, they are skilled people - me, David A. Smith, and the late Andreas Raab, who died way too young.)

In contrast, trying to bridge between EDF and regular Linux processes running under the ordinary scheduler, even with "nice" and all kinds of hacks, just to do a video conferencing system with fixed, non-programmable behavior, would take far more design, far more lines of code, etc.

So this is why I think timesharing OS's are really obsolescent for modern distributed interactive systems. Yeah, "rsync" and "git" are nice for batch replication of files. And yeah, EDF can help make them perform faster in their file transferring.
But to make an immersive, real-time experience (which is what computing today is all about, on all time scales, even in the servers other than HPC) it is ALL wrong, and incrementally patching little pieces of Linux ain't gonna get there. Windows or BSD (macOS) ain't gonna do it either.

I'm old. Why is Linux living in the idea space of operating systems that preceded networking, distributed computing, media sharing?

My opinion, and it is only an opinion based on experience, is that it really is time for networking to stop focusing on file transfers, and OS's to stop focusing on timesharing behavior. The world is "live" and time-based. It may not be hard-real-time. But latency is what matters.

Since networking will remain separate from OS's, the interface concepts in both really need to be matched to get to that future.

It's why I pushed so hard for UDP, not reliable in-order streams alone. And in my view, though no one ever implemented it, those UDP packets will be carrying times, essential for synchronization of coordinated operations at all the endpoints of the computation.

I'd love to see that happen before this old guy dies. I think it will make it a whole lot easier to make networked programs work.

Decentralization isn't "blockchain". My thesis, in 1978, talked about one way to decentralize computation, not just data structures. And timing is critical.

Sorry for the rant. I'm tired of waiting for "backwards compatibility" with Unix version 1 to allow us to go forward. To me, Linux is a great version of a subset of the operating systems I
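To make the EDF idea above concrete for readers who haven't met it: under EDF every runnable task carries an absolute deadline, and the scheduler always dispatches the ready task whose deadline is earliest. A minimal sketch in C - the names are illustrative, not from Croquet or any existing kernel:

    #include <stddef.h>
    #include <stdint.h>

    /* A unit of latency-sensitive work, tagged with an absolute deadline. */
    struct task {
        uint64_t deadline_ns;     /* absolute deadline, in nanoseconds */
        void (*run)(void *arg);   /* the work itself */
        void *arg;
    };

    /* EDF dispatch: of all ready tasks, pick the one whose deadline is
     * nearest.  A real scheduler would keep a priority queue (Linux's
     * SCHED_DEADLINE uses a red-black tree); a linear scan keeps the
     * sketch short. */
    static struct task *edf_pick(struct task *ready, size_t n)
    {
        struct task *best = NULL;
        for (size_t i = 0; i < n; i++)
            if (best == NULL || ready[i].deadline_ns < best->deadline_ns)
                best = &ready[i];
        return best;
    }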
Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
I don't know to what extent the freeswitch guys would be interested in this thread. I'd like to find a good list or forum to talk about the state of the art in videoconferencing - the ietf rmcat and webrtc lists are mostly dead. hangouts, jitsi, zoom, etc, seem to be pretty good products nowadays (at least in my fq_codel'd environment), but solid info on how to make them better in the home and for online tele-learning is hard to find.

On Fri, Mar 27, 2020 at 12:00 PM David P. Reed wrote:
>
> Congestion control for real-time video is quite different than for streaming. Streaming really is dealt with by big enough (multi-second) buffering, and can in principle work great over TCP (if debloated).

Your encoder still has to adjust to the available bandwidth. The facebook streaming application did this beautifully through my very limited, highly shared 5mbit uplink - adjusting quickly to a parallel rrul test, in particular by skipping some frames, then lowering the frame rate and quality - but an early attempt of mine to merely reflect rtmp streams did not, nor did an attempt with "obs studio".

there was about 30 sec of delay in the facebook test - I figure some of this is tuned to visible uplink buffer sizes (still seconds over cell), but also to give the riaa a shot at censoring the audio. (a commercial song crept in - over a mic! - which was detected as infringing on one attempt, which automatically muted the audio and keyed a nastygram from fb)

I'm going to poke into obs studio's underlying code (rtsp anyone?) at some point, and really - udp with a head-dropping aqm is the best thing for transporting video, IMHO.

> UDP congestion control MUST be end-to-end and done in the application layer, which is usually outside the OS kernel. This makes it tricky, because you end up with latency variation due to the OS's process scheduler that is on the order of magnitude of the real-time requirements for air-to-air or light-to-light response (meaning the physical transition from sound or picture to and from the transducer).

We are so far from that point! encoder latencies today are in the 100+ms range. I always liked the opus codec because it can get down to 2.7ms encoding latencies, and a doubled frame rate camera 8ms, but on video encoding rates I'm out of date. (?)

One long-deferred piece of webrtc/rmcat research I always meant to do was audio and video on separate ports in the stream, using that 2.7ms opus clock and depending on fq at the bottleneck to provide better congestion control information by treating the smaller audio packets as a clock signal. Due to lack of port space and a widespread perception that fq isn't out there, most videoconferencing streams multiplex everything over the same port. With ipv6 in place, well, port space is no longer a problem.

> This creates a godawful mess when trying to do an app. Whether in WebRTC (peer to peer UDP) or in a Linux userspace app, the scheduler has huge variance in delay.

I figure the bounding scheduler latency is still well manageable, below a single 60fps frame.

> Now getting rid of bloat currently requires TCP to respond to congestion signalling. UDP in the kernel doesn't do that, and it doesn't tell userspace much either (you can try to detect packet drops in userspace, but coding that up is quite hard because the schedulers get in the way of measurement, and forget about ECN being seen in userspace)

ECN in userspace is easy on udp, except that most api's tend to abstract everything into a file-handle-style abstraction and a single return of data, not control information, and the api for getting tos options is ugly. APIs that can return data and info

(data, packetheader) = getudp_someway()

probably exist for more modern languages like go, but rarely c or c++. Totally out of date on this - last I looked at the google congestion control code base it was in mozilla... 8 years ago!

As for doing udp semi-efficiently in batches... sendmmsg, recvmmsg is a rather underused kernel api. And ugly as sin. With some major limitations.

> This is OS architecture messiness, not a layer 2 or 3 issue.

To me the nightmare starts with most cpu context switch latencies being 1000s of clocks nowadays.

> I've thought about this a lot. Here's my thoughts:
>
> I hate putting things in the kernel! It's insecure. But what this says is that for very historical and stupid reasons (related to the ideas of early timesharing systems like Unix and Multics) folks try to make real-time algorithms look like ordinary "processes" whose notion of controlling temporal behavior is abstracted away.

On the whole, with the rise of quic - in particular quic, as multiple userspace libs have been emerging - we've got good bases to move forward with more stuff in userspace.

> So:
> 1. We really should rethink how timing-sensitive algorithms are expressed, and it isn't gonna be good to base them on semaphores and threads that run at random rates. That means a
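For the record, the C incantation for seeing ECN on UDP does exist, it is just clumsy: enable IP_RECVTOS, then fish the TOS byte out of the ancillary data returned by recvmsg(). A minimal Linux/IPv4 sketch (for IPv6 the analogues are IPV6_RECVTCLASS/IPV6_TCLASS; the function names here are mine):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Ask the kernel to deliver the received TOS byte as ancillary data. */
    static int enable_recvtos(int fd)
    {
        int on = 1;
        return setsockopt(fd, IPPROTO_IP, IP_RECVTOS, &on, sizeof(on));
    }

    /* Receive one datagram and report its ECN codepoint: the low two TOS
     * bits (0 = Not-ECT, 1 or 2 = ECT, 3 = CE, congestion experienced). */
    static ssize_t recv_with_ecn(int fd, void *buf, size_t len, int *ecn)
    {
        char cbuf[CMSG_SPACE(sizeof(int))];
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
        };
        ssize_t n = recvmsg(fd, &msg, 0);

        *ecn = -1;
        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL;
             c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_TOS)
                *ecn = *(unsigned char *)CMSG_DATA(c) & 0x03;
        }
        return n;
    }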
Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
Of interest given some of what you say below, there is a huge discussion on netdev about how to best implement hardware offloads for network slicing:

https://www.spinics.net/lists/netdev/msg638836.html

Me, I always rolled my eyes at all the network virtualization stuff and ran from the room, screaming, given how much I care about low latency. The udp vs tcp offload split has been nightmare enough.

That said, to this day I lack a clear idea how any multi-tenant dc operation really works. I've generally assumed it was policers, and have deployed sqm (now cake) instead on everything in the cloud that seemed to need it.

On Fri, Mar 27, 2020 at 12:00 PM David P. Reed wrote:
>
> Congestion control for real-time video is quite different than for streaming. Streaming really is dealt with by big enough (multi-second) buffering, and can in principle work great over TCP (if debloated).
>
> UDP congestion control MUST be end-to-end and done in the application layer, which is usually outside the OS kernel. This makes it tricky, because you end up with latency variation due to the OS's process scheduler that is on the order of magnitude of the real-time requirements for air-to-air or light-to-light response (meaning the physical transition from sound or picture to and from the transducer).
>
> This creates a godawful mess when trying to do an app. Whether in WebRTC (peer to peer UDP) or in a Linux userspace app, the scheduler has huge variance in delay.
>
> Now getting rid of bloat currently requires TCP to respond to congestion signalling. UDP in the kernel doesn't do that, and it doesn't tell userspace much either (you can try to detect packet drops in userspace, but coding that up is quite hard because the schedulers get in the way of measurement, and forget about ECN being seen in userspace)
>
> This is OS architecture messiness, not a layer 2 or 3 issue.
>
> I've thought about this a lot. Here's my thoughts:
>
> I hate putting things in the kernel! It's insecure. But what this says is that for very historical and stupid reasons (related to the ideas of early timesharing systems like Unix and Multics) folks try to make real-time algorithms look like ordinary "processes" whose notion of controlling temporal behavior is abstracted away.
>
> So:
> 1. We really should rethink how timing-sensitive algorithms are expressed, and it isn't gonna be good to base them on semaphores and threads that run at random rates. That means a very different OS conceptual framework. Can this share with, say, the Linux we know and love - yes, the hardware can be shared. One should be able to dedicate virtual processors that are not running Linux processes, but instead another computational model (dataflow?). An example of this (though clunky and unsupported by good tools) is in FreeBSD; it's called *netgraph*. It's a structured way to write reactive algorithms that are demand or arrival driven. It also has some security issues, and since it is heavily based on passing mbufs around it's really quirky. But I have found it useful for the kind of things that need to get done in teleconferencing voice and video.
>
> 2. EBPF is interesting, because it is more secure, and is again focused on running code at kernel level, event-driven. I think it would be a seriously difficult lift to get it to the point where one could program the networked media processing in BPF.
>
> 3. One of the nice things about KVM (hardware virtualization) is that potentially it lets different low level machine models share a common machine. It occurs to me that using VIRTIO network devices and some kind of VIRTIO media processing devices, a KVM virtual machine could be hooked up to the packet-level networking drivers in the end device, isolating the teleconferencing from the rest of the endpoint OS, and creating the right kind of near-bare-metal environment for managing the timing of network packets and the paths to the screen and audio that would be simple and clean and tightly scheduled. KVM could "own" one or more of the physical cores during the teleconference.
>
> You can see, though, that this isn't just a "network protocol design" problem. This is only partly a network protocol issue, but one that is coupled with the architecture of the end systems.
>
> I reminisce a little bit thinking back to the 1970's and 80's when TCP/IP and UDP/IP were being designed. Sadly, it was one of the big problems of communicating between the OS community and the protocol community that the OS community couldn't think outside the "timesharing" system box, and the protocol community thought of networking like phone calls (sessions). This is where the need for control of timing and buffering got lost. The timesharing folks largely thought of networks as for reliable, timeless, sequential "streams" of data that had no particular
Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
On Fri, 27 Mar 2020, David P. Reed wrote:

> Congestion control for real-time video is quite different than for streaming. Streaming really is dealt with by big enough (multi-second) buffering, and can in principle work great over TCP (if debloated).
>
> UDP congestion control MUST be end-to-end and done in the application layer, which is usually outside the OS kernel. This makes it tricky, because you end up with latency variation due to the OS's process scheduler that is on the order of magnitude of the real-time requirements for air-to-air or light-to-light response (meaning the physical transition from sound or picture to and from the transducer).

at some level this is correct, but if the link is clogged with TCP packets, it doesn't matter what your UDP application attempts to do, so installing cake to keep individual links from being too congested will give your UDP application a chance to operate.

David Lang
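For anyone who wants to try what David suggests, the usual one-liner is of the following form; eth0 and the 5mbit figure are placeholders for your own interface and uplink, and the bandwidth should be set slightly below the true uplink rate so the queue forms where cake can manage it:

    tc qdisc replace dev eth0 root cake bandwidth 5mbit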
Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferencing in an age of bloated uplinks?
Congestion control for real-time video is quite different than for streaming. Streaming really is dealt with by big enough (multi-second) buffering, and can in principle work great over TCP (if debloated).

UDP congestion control MUST be end-to-end and done in the application layer, which is usually outside the OS kernel. This makes it tricky, because you end up with latency variation due to the OS's process scheduler that is on the order of magnitude of the real-time requirements for air-to-air or light-to-light response (meaning the physical transition from sound or picture to and from the transducer).

This creates a godawful mess when trying to do an app. Whether in WebRTC (peer to peer UDP) or in a Linux userspace app, the scheduler has huge variance in delay.

Now getting rid of bloat currently requires TCP to respond to congestion signalling. UDP in the kernel doesn't do that, and it doesn't tell userspace much either (you can try to detect packet drops in userspace, but coding that up is quite hard because the schedulers get in the way of measurement, and forget about ECN being seen in userspace).

This is OS architecture messiness, not a layer 2 or 3 issue.

I've thought about this a lot. Here's my thoughts:

I hate putting things in the kernel! It's insecure. But what this says is that for very historical and stupid reasons (related to the ideas of early timesharing systems like Unix and Multics) folks try to make real-time algorithms look like ordinary "processes" whose notion of controlling temporal behavior is abstracted away.

So:

1. We really should rethink how timing-sensitive algorithms are expressed, and it isn't gonna be good to base them on semaphores and threads that run at random rates. That means a very different OS conceptual framework. Can this share with, say, the Linux we know and love - yes, the hardware can be shared. One should be able to dedicate virtual processors that are not running Linux processes, but instead another computational model (dataflow?). An example of this (though clunky and unsupported by good tools) is in FreeBSD; it's called *netgraph*. It's a structured way to write reactive algorithms that are demand or arrival driven. It also has some security issues, and since it is heavily based on passing mbufs around it's really quirky. But I have found it useful for the kind of things that need to get done in teleconferencing voice and video.

2. EBPF is interesting, because it is more secure, and is again focused on running code at kernel level, event-driven (see the sketch below for the flavor of it). I think it would be a seriously difficult lift to get it to the point where one could program the networked media processing in BPF.

3. One of the nice things about KVM (hardware virtualization) is that potentially it lets different low level machine models share a common machine. It occurs to me that using VIRTIO network devices and some kind of VIRTIO media processing devices, a KVM virtual machine could be hooked up to the packet-level networking drivers in the end device, isolating the teleconferencing from the rest of the endpoint OS, and creating the right kind of near-bare-metal environment for managing the timing of network packets and the paths to the screen and audio that would be simple and clean and tightly scheduled. KVM could "own" one or more of the physical cores during the teleconference.

You can see, though, that this isn't just a "network protocol design" problem. This is only partly a network protocol issue, but one that is coupled with the architecture of the end systems.
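To give a taste of point 2 - event-driven code running at kernel level - here is a minimal XDP program in C that counts packets and passes them on unmodified. It is a generic illustration of the programming model, not media processing:

    /* xdp_count.c: compile with clang -O2 -target bpf -c xdp_count.c,
     * then attach with e.g. "ip link set dev eth0 xdpgeneric obj xdp_count.o". */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
    } pkt_count SEC(".maps");

    SEC("xdp")
    int count_pkts(struct xdp_md *ctx)
    {
        __u32 key = 0;
        __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);

        if (val)
            (*val)++;        /* runs for every frame, at driver level */
        return XDP_PASS;     /* hand the frame on to the normal stack */
    }

    char _license[] SEC("license") = "GPL";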
I reminisce a little bit thinking back to the 1970's and 80's when TCP/IP and UDP/IP were being designed. Sadly, it was one of the big problems of communicating between the OS community and the protocol community that the OS community couldn't think outside the "timesharing" system box, and the protocol community thought of networking like phone calls (sessions). This is where the need for control of timing and buffering got lost. The timesharing folks largely thought of networks as for reliable, timeless, sequential "streams" of data that had no particular urgency. The network protocol folks were focused on ARQ. Only a few of us cared about end-to-end latency bounds (where ends meant keyboard click or audio sample to screen display change or speaker motion). The packet speech guys did, but most networking guys wanted to toss them under the bus as annoying. And those of us doing distributed multinode algorithms did, but the remote login and FTP guys were skeptical that would ever matter.

It's the latency, stupid. Not the reliability, nor the consistency, nor throughput. Unless both the OS and the path are focused on minimizing latency, a vast set of applications will suck. Unfortunately, both the OS and network communities are *stuck* in a world where latency is uncontrollable, and there are no tools for getting it better.

On Friday, March 27, 2020