Hi Robert,

On Wednesday, 3 January 2018 6:23:57 PM NZDT Robert Yang wrote:
> On 12/27/2017 09:23 PM, Robert Yang wrote:
> > We have 124 layers, and "update.py -b <branch>" takes about 10 minutes
> > to finish the update for one branch. We need to update several branches
> > periodically, which is really a little slow, so I'd like to make it run
> > in parallel. Here are some thoughts about it:
> > 1) Make fetch run in parallel. This is easy to do and safe enough, as
> >    the patches show.
> > 2) Make recipeparse parse layers in parallel. We may need to split
> >    update_layer.py into two parts:
> >    - One which only does recipeparse; this costs a lot of time and
> >      can run in parallel.
> >    - One which writes to the database (it can't be parallel, and
> >      doesn't have to be).
>
> I found that there is an easy way to improve the performance: just using
> update_layer.py as a module rather than a program saves a lot of time.
> We can use multiprocessing.Process() to call the function so that it runs
> in a subprocess, and remove the bb modules when switching branches since
> they may not be compatible. Please see the next email for details. This
> reduces the time from 9m20s to 1m43s for 124 layers when everything is
> fully up to date.
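(For context, the subprocess-isolation idea quoted above can be sketched roughly as below. The `parse_layer` worker and the layer name are hypothetical stand-ins for update_layer.py's real entry point; the point is only that a child process keeps whatever modules it imports out of the parent's interpreter state.)

```python
import multiprocessing

def parse_layer(layer_name, result_queue):
    # Hypothetical worker: in the real tool this would import
    # update_layer as a module and call its parsing entry point.
    # Any modules loaded here (e.g. the bb modules for one branch)
    # live only in this child process's sys.modules.
    result_queue.put((layer_name, "parsed"))

def run_isolated(layer_name):
    # Run the worker in a fresh subprocess so that branch-specific,
    # possibly incompatible modules cannot leak into later runs.
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=parse_layer,
                                   args=(layer_name, queue))
    proc.start()
    result = queue.get()
    proc.join()
    return result

if __name__ == "__main__":
    print(run_isolated("meta-example"))
```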
OK, but:

1) Does this work across python 2 and 3?

2) Are you 100% sure that this clears out the memory of anything that
happens to get loaded? One of the issues we used to see with the earlier
single-program structure was "corruption" of the python variable/module
space; the current fully separate processes completely avoid that.

Cheers,
Paul

-- 
Paul Eggleton
Intel Open Source Technology Centre

-- 
_______________________________________________
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto