> So, you're saying that there are, today, in the real world,
> applications where you must iterate over a work queue that is:
>
> 1. Impossible to serialize in a child process.
> 2. Arbitrarily large.
> 3. Small enough to fit in memory.
> 4. Well-partitioned enough to split up into multiple passes.
> 5. Important enough to risk affecting IO performance by shoving a very
>    high priority uv listener into the queue.
> 6. Not so important to starve IO while all the passes complete.
>
> Show me. That's a pretty narrow use case. I'm skeptical, but you
> know how much I love being surprised by evidence :)

I don't know how you came up with this set of requirements. I wasn't just talking about arbitrarily large work queues. My concerns about I/O starvation are hopefully outlined a little better in my last message.
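For concreteness, here's a small sketch (mine, not from the thread) of the starvation concern: a recursive `process.nextTick` chain is drained in full before the event loop ever reaches timers or pending I/O, so even a modest queue blocks everything behind it until the last pass completes. The function name `drain` and the iteration count are illustrative.

```javascript
// Illustrative: a recursive process.nextTick chain drains completely
// before the event loop services timers or I/O.
const order = [];

// A zero-delay timer stands in for a pending I/O event.
setTimeout(() => order.push('timer'), 0);

function drain(n, done) {
  if (n === 0) return done();
  process.nextTick(() => drain(n - 1, done)); // re-queues itself
}

drain(10000, () => order.push('drained'));

// All 10000 nextTick callbacks run before the timer fires,
// so 'drained' is recorded ahead of 'timer'.
setTimeout(() => console.log(order.join(' -> ')), 20);
```

In current Node releases, replacing `process.nextTick` with `setImmediate` in the sketch lets the loop service the timer between iterations, which is essentially the "something besides nextTick" asked for below.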
> It's not "just fine". It's "just fine for small n". Everything is
> fine for small n. Node is for high traffic applications.
> High-traffic problems are our problems.

We are still talking past each other. nextTick works perfectly well for large n. It doesn't work well when it is used to defer execution outside the current stack, AND n is large, AND you still want to catch pending data events. Even though it was designed for that, it ended up not being the right solution for that. We're asking for another solution that works for deferring and catching data events for large n. But don't change nextTick to do so. I still don't think this is an unreasonable request. You can decide not to take the advice, but what you're doing is trying to re-characterize this request as something that it is not.

> There's currently no good way to assign a handler to the end of the
> current RTC.

Agreed. We should add one and call it something besides nextTick :)

:Marco

--
Marco Rogers
[email protected] | https://twitter.com/polotek

Life is ten percent what happens to you and ninety percent how you respond to it.
  - Lou Holtz
