Hello,

I work on a project called Cobalt (cobalt.foo), an application container
designed to run web apps (namely youtube.com/tv) on embedded TV devices.  We
use SpiderMonkey as our JavaScript engine, and are currently rebasing from
SpiderMonkey 24 to SpiderMonkey 45.  On our application-specific benchmarks,
which measure input latency (effectively total JavaScript execution time), we
are seeing a performance regression of about 40% when running in interpreted
mode on a Raspberry Pi 1, our reference low-end platform.

While we plan to investigate as much as possible on our side as well, I would
like to ask: is a performance regression of this magnitude to be expected?
Our SpiderMonkey configuration, HTML application, and bindings code are all
held about as constant as they can be.  If it is expected, are there any
high-ROI configuration (or even code) changes we can make to mitigate it?
If not, which areas should we look into first?
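For context, here is roughly how we isolate interpreter-only performance when
comparing the two engine versions (a sketch using the standalone js shell;
bench.js stands in for our benchmark driver, and we are assuming the
--no-ion/--no-baseline flags behave the same way in both shells):

```shell
# Pure interpreter: disable both JIT tiers (this is the mode where we
# observe the ~40% regression between SM24-era and SM45 shells).
js --no-ion --no-baseline bench.js

# Baseline JIT only, for comparison against the interpreter numbers.
js --no-ion bench.js

# Full JIT (Ion + Baseline), the shell's default configuration.
js bench.js
```

Timing the same script across these three configurations in both shells is how
we concluded the regression is specific to interpreted mode rather than to the
JITs or to our bindings.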

Thanks,

-Nathan
_______________________________________________
dev-tech-js-engine-internals mailing list
dev-tech-js-engine-internals@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals