""Magnus Hagander"" <[EMAIL PROTECTED]> wrote > > The way I read it, a delay should help. It's basically running out of > kernel buffers, and we just delay, somebody else (another process, or an > IRQ handler, or whatever) should get finished with their I/O, free up > the buffer, and let us have it. Looking around a bit I see several > references that you should retry on it, but nothing in the API docs. > I do think it's probably a good idea to do a short delay before retrying > - at least to yield the CPU for one slice. That would greatly increase > the probability of someone else finishing their I/O... >
Reading more of the second thread:

"NTBackupread and NTBackupwrite both use buffered I/O. This means that
Windows NT caches the I/O that is performed against the stream. It is
also the only API that will back up the metadata of a file. This cache
is pulled from limited resources: namely, pool and nonpaged pool.
Because of this, extremely large numbers of files or files that are very
large may cause the pool resources to run low."

Does this imply that using unbuffered I/O on Windows would eliminate the
problem? If so, simply adding FILE_FLAG_NO_BUFFERING when we open data
files would solve it (a rough sketch of such an open is below) -- but that
change would in fact be very invasive, because it would make the server's
I/O optimization strategy totally different from the one used on *nix.
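
For illustration only, this is roughly what the open would look like; the function name is made up and none of this is actual PostgreSQL code. Note that an unbuffered handle requires read/write sizes and file offsets to be multiples of the volume sector size, and the memory buffers to be sector-aligned, which is part of why the change would be so invasive.

#include <windows.h>

/*
 * Sketch only: open a data file bypassing the Windows system cache.
 * With FILE_FLAG_NO_BUFFERING the caller becomes responsible for
 * sector-aligned buffers, offsets and transfer sizes, and loses the
 * OS read-ahead/write-behind that buffered I/O provides.
 */
HANDLE
open_datafile_unbuffered(const char *path)
{
	return CreateFile(path,
					  GENERIC_READ | GENERIC_WRITE,
					  FILE_SHARE_READ | FILE_SHARE_WRITE,
					  NULL,						/* default security */
					  OPEN_EXISTING,
					  FILE_FLAG_NO_BUFFERING,	/* bypass the system cache */
					  NULL);
}

Regards,
Qingqing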