Hi Ross,

On Aug 13, 2013, at 3:40 AM, "Burton, Ross" <ross.bur...@intel.com> wrote:
> Hi,
>
> For a while I've been wondering about the possible performance
> improvement from using a tmpfs as $WORKDIR, so each package is
> unpacked, compiled, installed, and packaged in a tmpfs and then the
> final packages moved to the deploy/sstate/sysroot in persistent
> storage. In theory, with lots of RAM and a relatively long file system
> commit duration, this should be what effectively happens, but there is
> a lot of I/O still happening during builds (and occasionally pausing
> the build whilst buffers empty here) which I was trying to mitigate,
> despite 12G of my 16G RAM being used as the page cache.
>
> Last night I finally got around to testing this. Unless you've an
> ungodly amount of RAM the use of rm_work is mandatory (a 6G tmpfs
> wasn't sufficient as a kernel build almost fills that; 8G was
> sufficient for me) so I did the HDD times with and without rm_work for
> fair comparisons. Each build was only done once, but the machine was
> otherwise idle, so error margins should be respectable. The benchmark
> was core-image-sato for atom-pc from scratch (with cached downloads).
>
> Work in HDD without rm_work: ~68 minutes
> Work in HDD with rm_work: ~71 minutes
> Work in tmpfs with rm_work: ~64 minutes

This seems reasonable, especially for folks who don't want to keep
intermediate objects around in the build tree. I also have a setup
where I build in tmpfs. I think a nice next step would be to write a
wiki document explaining the steps needed to configure a system to use
tmpfs, so anyone can try it out, with some added explanation of how
much RAM is minimally needed, and so on. We can then keep improving
that document.

> Everyone loves graphs, so here's one I knocked up: http://bit.ly/146B0Xo
>
> Conclusion: even with the overhead of rm_work there's a performance
> advantage to using a tmpfs for the workdir, but the build isn't
> massively I/O bound on commodity hardware (i7 with WD Caviar Green
> disks).
> It's definitely a quick and easy test (assuming enough RAM)
> to see how I/O bound your own builds are.
>
> Ross

_______________________________________________
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto
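For anyone who wants to try this before a wiki page exists, the setup described above could be sketched roughly as follows. This is just an illustration, not an official recipe: the mount point and paths are hypothetical, and the 8G size simply follows Ross's observation that 6G was too small once a kernel build was in flight. One common approach is to point the build's TMPDIR (under which each recipe's WORKDIR lives) at the tmpfs, rather than mounting over WORKDIR directly.

```shell
# Create and mount an 8G tmpfs for the build tree.
# (To make it persistent, an equivalent /etc/fstab line would be:
#   tmpfs  /mnt/tmpfs-build  tmpfs  size=8G,mode=1777  0  0 )
sudo mkdir -p /mnt/tmpfs-build
sudo mount -t tmpfs -o size=8G,mode=1777 tmpfs /mnt/tmpfs-build

# Then, in the build directory's conf/local.conf, redirect TMPDIR into
# the tmpfs and inherit the rm_work class so unpacked sources and build
# objects are deleted as each recipe finishes -- as noted above, this is
# effectively mandatory at these tmpfs sizes:
#
#   TMPDIR = "/mnt/tmpfs-build/tmp"
#   INHERIT += "rm_work"
#
# DL_DIR and SSTATE_DIR default to locations under the build directory,
# not under TMPDIR, so downloads and shared state stay on persistent
# storage and survive a reboot or unmount.
```

With this in place, the tmpfs contents are disposable: after the build, deploy artifacts can be copied out and the tmpfs unmounted.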