On Wed, Mar 17, 2010 at 6:55 PM, Sad Clouds
<cryintotheblue...@googlemail.com> wrote:
> On Wed, 17 Mar 2010 18:28:21 +0100
> Vlad Galu <d...@dudu.ro> wrote:
>
>> On Wed, Mar 17, 2010 at 5:22 PM, Sad Clouds
>> <cryintotheblue...@googlemail.com> wrote:
>> > On Wed, 17 Mar 2010 16:01:28 +0000
>> > Quentin Garnier <c...@cubidou.net> wrote:
>> >
>> >> Do you have a real world use for that? For instance, I wouldn't
>> >> call a web server that sends the same data to all its clients *at
>> >> the same time* realistic.
>> >>
>> > Why? Because it never happens? I think it happens quite often.
>> > Another example is a server that is sending live data, e.g. audio
>> > playback, video streaming, etc. If you can't use multicasting over a
>> > WAN, then you have a situation where you are streaming the same
>> > data to a large number of clients.
>>
>> That's almost never practical, since you never fall into the ideal
>> case when all clients consume the data at the exact same rate.
>>
> That's true, but it could still improve overall performance. You need
> to keep track of how much data is still outstanding for which sockets.
> As long as you have that data cached in memory (which is true for small
> files), you update the data offsets, and the next time kevent() returns,
> you write data to the sockets from those offsets.
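Spelled out, the bookkeeping described there is just the cached payload plus a
per-socket offset. A minimal sketch of that idea (hypothetical struct and
names, only EAGAIN handled):

/*
 * Hypothetical sketch of the per-socket bookkeeping described above.
 * The payload sits in one cached buffer; each client keeps its own
 * offset into it, and we write whatever the socket will accept right now.
 */
#include <sys/types.h>
#include <errno.h>
#include <unistd.h>

struct client {
    int    fd;       /* nonblocking client socket */
    size_t offset;   /* bytes of the cached buffer already written to fd */
};

/* Push more of buf[0..len) to one client.
 * Returns 1 when everything has been written, 0 if the socket is full
 * (wait for EVFILT_WRITE and retry), -1 on a real error. */
static int
push(struct client *c, const char *buf, size_t len)
{
    while (c->offset < len) {
        ssize_t n = write(c->fd, buf + c->offset, len - c->offset);
        if (n > 0)
            c->offset += (size_t)n;
        else if (n == -1 && errno == EAGAIN)
            return 0;
        else
            return -1;
    }
    return 1;
}

When push() returns 0 you would register an EVFILT_WRITE filter for that
socket with EV_SET()/kevent() and call it again once the event fires; when it
returns 1 you delete the filter and move on.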
That's easy to do with sendfile(); the VFS layer should do the caching for
you. Since you already mentioned kevent(), I assume you have nonblocking I/O
in mind, in which case your app won't be slowed down considerably by the
send()/sendfile()/whatever() calls. Yes, you'll do it over more context
switches, but otherwise what you want isn't possible unless you have a true
async I/O system. Even if the function you want existed, how could it update
the outstanding data counters without blocking?
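Roughly, assuming the FreeBSD-style sendfile(2) prototype (the prototype
differs between systems, so treat the exact call as an assumption), the
nonblocking per-client send path could look like the sketch below, retried
whenever kevent() reports the socket writable:

/*
 * Rough sketch assuming the FreeBSD-style sendfile(2) prototype:
 *   int sendfile(int fd, int s, off_t offset, size_t nbytes,
 *                struct sf_hdtr *hdtr, off_t *sbytes, int flags);
 * The kernel serves the file data from its own cache; the application
 * only tracks a per-client offset and retries on EAGAIN once kevent()
 * reports EVFILT_WRITE on the socket.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <errno.h>

/* Send more of filefd (filesize bytes long) to the nonblocking socket.
 * Returns 1 when the whole file has been sent, 0 to wait for
 * EVFILT_WRITE, -1 on a real error. */
static int
push_file(int sockfd, int filefd, off_t *offset, off_t filesize)
{
    while (*offset < filesize) {
        off_t sent = 0;
        int rv = sendfile(filefd, sockfd, *offset,
                          (size_t)(filesize - *offset), NULL, &sent, 0);
        *offset += sent;                 /* count partial progress too */
        if (rv == -1) {
            if (errno == EAGAIN)
                return 0;                /* socket full; kevent() will say when */
            return -1;
        }
    }
    return 1;
}

The extra context switches are just the kevent() wakeups per writable socket;
the file data itself never has to be copied into user space.

-- 
Good, fast & cheap. Pick any two.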