On Wed, Feb 01, 2017 at 02:27:39PM +0530, shirish शिरीष wrote:
> Basically the article's statement is wrong.
> There is no such thing as an explicit itable initialization IO bandwidth
> restriction in MB/s. The itable initialization rate is controlled by
> init_itable=N; see: https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
> """
> The lazy itable init code will wait n times the
> number of milliseconds it took to zero out the
> previous block group's inode table. This
> minimizes the impact on the system performance
> while file system's inode table is being initialized.
> """
> By default init_itable=10, so it uses about 1/10th of the disk's bandwidth.
> And if we go back to the original article, this means the author used a
> generic HDD with 160MB/s sequential write performance.
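The pacing rule in the documentation quoted above can be sketched as follows. This is an illustrative model, not kernel code, and the function names are hypothetical; it just works out the arithmetic of "zero one group, then wait N times as long":

```python
# Sketch of the ext4 lazy itable init pacing rule (init_itable=N):
# after spending t ms zeroing one block group's inode table, the
# lazyinit thread waits N*t ms before starting the next group, so the
# fraction of time (and hence disk bandwidth) it consumes is 1/(N+1).

def lazyinit_duty_cycle(n: int) -> float:
    """Fraction of wall-clock time spent zeroing when init_itable=n."""
    return 1.0 / (n + 1)

def time_to_init_ms(groups: int, zero_ms_per_group: float, n: int) -> float:
    """Total milliseconds to initialize all inode tables: each group
    costs its zeroing time plus n times that as idle wait."""
    return groups * zero_ms_per_group * (n + 1)
```

Note that with the default N=10 the duty cycle is 1/11, i.e. roughly (not exactly) 10% of the disk's bandwidth.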
Oh OK, that sounds even better.

> My patch was a fix for a bug which was spotted on large disk arrays,
> 36 disks in my case. So itable initialization was active all the time
> while holding a global lock.
>
> > From this, it seems there aren't any limits except for 10% of whatever
> > the link between

Why would a large array make a difference to the algorithm if it aims
to use 1/10 of the bandwidth?

-- 
Len Sorensen
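One way to quantify why a large array changes the picture, under the assumption (not stated in the thread) that each filesystem's lazyinit thread paces itself independently: even at a ~9% duty cycle per filesystem, with 36 filesystems some initialization is almost always in progress, so a lock held during zeroing is almost always held by someone.

```python
# Illustrative sketch (assumed independence, hypothetical function name):
# probability that at least one of k independently pacing lazyinit
# threads, each with duty cycle d, is actively zeroing at a given instant.

def prob_any_active(k: int, d: float) -> float:
    """P(at least one of k threads is zeroing) = 1 - (1 - d)^k."""
    return 1.0 - (1.0 - d) ** k

# With the default init_itable=10 (duty cycle 1/11) and 36 disks:
# prob_any_active(36, 1/11) is roughly 0.97.
```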