Hi Bill, inline
----- Original Message -----
> From: "William Allen Simpson" <william.allen.simp...@gmail.com>
> To: "NFS Ganesha Developers" <nfs-ganesha-devel@lists.sourceforge.net>
> Sent: Monday, June 19, 2017 3:04:52 AM
> Subject: [Nfs-ganesha-devel] async dispatch not good
>
> As folks may have noticed, I've been re-working my old 2015 dispatch
> patches that eliminate the network input-side queues in Ganesha.
>
> Matt had wanted fully async non-blocking I-O. I've been poking at it
> for a week, and now am sure that's the wrong way to go.

I don't think so, but, more below.

> It might still be good for FSALs. Remains to be seen. DanG and
> Soumya are looking at that now.
>
> The devil in userland network I-O is system calls. Each epoll_wait
> is a system call. Each read or write is a system call. Each thread
> switch is a system call.
>
> My code in Ganesha v2.5 (NTIRPC v1.5) gets the network output down to
> one system call per request on a very hot thread. Cannot do better,
> as trying harder would just push the data into kernel buffers,
> possibly slowing our own output (for various reasons).
>
> Trying to re-work that for async non-blocking calls instead means
> many more system calls. Instead of one clean writev with the TCP
> fragment header and all ready buffers in one single call, we'd at
> minimum have a call, an epoll_wait, spawn another work thread, then
> another call and/or release the buffer, rinse and repeat.

The expensive part of this (the spawn) is necessary only because of
aspects of the old design; but, considering the effort involved, ok,
more below.

> For a long buffer chain (the times we want more performance), we'd
> have much less performance -- roughly 2 + (3 * number of buffers)
> additional system calls. For common short response chains, still
> have the extra overhead of the epoll system call, doubling calls.
>
> Also, using writev minimizes buffer copies. Eliminating data
> copying will usually give far better performance.
>
> The only thing async output is saving is waiting threads. But I've
> already got the output threads down to the minimum (per interface).
> No gain here!
>
> On the input side, the truly optimum reduction in system calls would
> be one read to get the TCP fragment header and up to 1500 bytes of
> data, followed (only when needed) by another read to get the entire
> rest of long fragments in one fell swoop.

Well, maybe -- but that doesn't consider blocking? I think we really do
want to avoid blocking in the paths that can and do block now; more
below.

> With async input I've tried level triggered, and am getting spurious
> epoll read data signals. Googling shows that's been a problem since
> at least 2014, but possible to program around.

ok

> Still, this could be better, had it not been terrible for output-side.
>
> Changing to edge triggered means that every good read would be
> followed by another read to make sure that we've gotten all the data.
> That is, common small reads turn into two (2) reads. Doubling our
> system calls in the common case is not the way to go....
>
> In conclusion, with epoll we know when input data is available, so
> input threads aren't sitting around waiting anyway, and trying to
> minimize threads results in more system calls and poorer performance.
>
> NTIRPC already defaults to 200 worker threads. If we need more, we
> should allocate more. Memory should not be an issue.

It's not about memory; having that many threads is exactly the problem
we're trying to avoid. But, referring for context to our verbal
discussion earlier today, your suggestion to hybridize the existing
output side (which depends on blocking sockets) with an async input
side using recv() seems worth at least exploring; I assume you are
proposing to use recv() with MSG_DONTWAIT? Rough sketches of both
halves of that hybrid follow below.

Matt

--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel. 734-821-5101
fax. 734-769-8938
cel. 734-216-5309
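
A minimal sketch of the non-blocking input half, assuming recv() with
MSG_DONTWAIT on a socket managed by epoll; the function name and
structure are illustrative only, not actual NTIRPC code. The point is
that a spurious readability event (the level-triggered problem noted
above) costs one extra recv() returning EAGAIN instead of a blocked
input thread:

/*
 * Hypothetical sketch (not NTIRPC code): read part of an RPC record
 * fragment using recv(MSG_DONTWAIT) so a spurious epoll readiness
 * event never blocks the input thread.  Returns bytes read, 0 for
 * "nothing there, try again later", -1 on error or EOF.
 */
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

ssize_t
read_fragment_nonblock(int fd, void *buf, size_t len)
{
    ssize_t n;

    for (;;) {
        n = recv(fd, buf, len, MSG_DONTWAIT);
        if (n > 0)
            return n;            /* got some data */
        if (n == 0)
            return -1;           /* peer closed the connection */
        if (errno == EINTR)
            continue;            /* interrupted, retry immediately */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;            /* not ready: re-arm epoll, don't block */
        return -1;               /* real error */
    }
}

The epoll handler would call this when the fd is reported readable; on
a 0 return it simply re-arms the fd (or re-registers it under
EPOLLONESHOT) and returns the thread to the pool, so the input path
never blocks.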
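
For the output half, which stays on blocking sockets, a similarly
rough sketch of the single-writev reply path Bill describes above: the
4-byte record-marking header goes in iov[0] and the already-encoded
response buffers follow, so the whole reply goes out in one system
call with no data copying. Again, the function and its signature are
illustrative, not the actual NTIRPC output code:

/*
 * Hypothetical sketch (not the actual NTIRPC output path): send one
 * RPC reply as a single blocking writev().  iov[0] carries the record
 * marker (last-fragment bit plus fragment length), the remaining
 * iovecs carry the encoded response buffers unchanged.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

ssize_t
send_reply_writev(int fd, const struct iovec *bufs, int nbufs,
                  size_t total_len)
{
    uint32_t marker = htonl(0x80000000u | (uint32_t)total_len);
    struct iovec iov[1 + 16];   /* assume a small, bounded chain */
    int i;

    if (nbufs > 16)
        return -1;              /* keep the sketch simple */

    iov[0].iov_base = &marker;
    iov[0].iov_len = sizeof(marker);
    for (i = 0; i < nbufs; i++)
        iov[1 + i] = bufs[i];

    /* One system call per reply on a blocking socket. */
    return writev(fd, iov, 1 + nbufs);
}

Real code would also have to handle a short write (e.g. after a
signal) and chains longer than the fixed iovec array; the sketch only
shows why one blocking writev per reply is the floor on system calls.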