> > I have written a small perl script to check how slow fsync is for a
> > Smart Array E200i controller. Theoretically, because of the write cache,
> > fsync MUST cost nothing, but in practice it is not true.
>
> That theory is fundamentally flawed; you don't know what else is in the
> operating system write cache in front of what you're trying to fsync, and
> you also don't know exactly what's in the controller's cache when you
> start. For all you know, the controller might be filled with cached reads
> and refuse to kick all of them out. This is a complicated area where
> tests are much more useful than trying to predict the behavior.

Nobody else writes and nobody reads; the machine is dedicated to testing,
it is clean. I monitor dstat, and for 5 minutes before the test there is no
disk activity at all, so I assume the controller cache has already been
flushed before I run the test.

> You haven't mentioned any details yet about the operating system you're
> running on; Solaris? Guessing from the device name. There have been some
> comments passing by lately about the write caching behavior not being
> turned on by default in that operating system.

Linux CentOS x86_64, with a lot of memory and 8 processors. The filesystem
is ext2 (to reduce journalling side-effects). OS write caching has been
tried turned on, turned off, and set to flush once per second (all of these
cases were tested, and none of them made any difference).

The question is: MUST my test script report a near-zero fsync time if the
controller has a large built-in write cache, or not? If yes, then something
is wrong with the controller or the drivers (how do I diagnose that?). If
no, why not? There are many discussions on this mailing list about fsync
and battery-backed controllers, and people say that a controller with
built-in cache memory reduces the cost of fsync to practically zero. That
is exactly what I want to achieve.
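The perl script itself was not posted in the thread, but a minimal
fsync-timing test along these lines illustrates the idea; the file path,
block size, and iteration count below are arbitrary assumptions, not the
poster's actual values:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Handle;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Hypothetical test file; point it at a filesystem on the array under test.
    my $path = '/mnt/array/fsync_test.dat';
    open(my $fh, '>', $path) or die "open $path: $!";

    for my $i (1 .. 100) {
        print $fh 'x' x 8192;            # queue an 8 KB write
        $fh->flush;                      # push perl's buffer into the OS cache
        my $t0 = [gettimeofday];
        $fh->sync or die "fsync: $!";    # fsync(2); only this call is timed
        printf "fsync %3d: %6.3f ms\n", $i, 1000 * tv_interval($t0);
    }

    close $fh;
    unlink $path;

With a battery-backed write cache actually accepting writes, each timed
sync should typically complete well under a millisecond; times of several
milliseconds, on the order of a disk rotation, would suggest the cache is
disabled or running in write-through mode.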