Hi

>> That is what the emulator is already doing. If we start emulating HW
>> down to individual CPU cycles, it'll only get slower. :(
> 
> I think this is wrong in some way.  Otherwise I wouldn't see this:
> 1) running on TBPL (AWS) the internal timings reported show the specific
>    test going from 30 seconds to 450 seconds with the patch.
> 2) on my local system, the test self-reports ~10 seconds, with or
>    without the patch.
> 
> The only way I can see that happening is if the simulator in some way
> exposes the underlying platform performance (in specific timers).

Right. What I mean is that we're currently emulating an ARM chipset
functionally, but without modeling its timing. If we start doing
cycle-accurate emulation, it won't get any faster.

>>> Another option (likely not simple) would be to find a way to "slow down
>>> time" for the emulator, such as intercepting system calls and increasing
>>> any time constants (multiplying timer values, timeout values to socket
>>> calls, etc, etc).  This may not be simple.  For devices (audio, etc),
>>> frequencies may need modifying or other adjustments made.
>>
>> If we do that, writing and debugging tests will take even longer.
> 
> It shouldn't, if the system self-adapted (per below).  That should
> give a much more predictable (and closer-to-similar to a real device)
> result.  BTW, I presume we're simulating a single-core ARM, so again not
> entirely representative anymore.

Oh, now I get the point of this idea. We could probably implement it by
scaling the emulated timer(s?) within the emulator; hw/goldfish_timer.c
might be the place. I wouldn't do this if we have other options, though,
and I don't know how it would affect frequency-dependent devices (audio,
etc.).
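
To illustrate what I mean, here's a minimal standalone sketch of the
time-scaling idea. It's only a sketch of the concept, not actual
goldfish_timer.c code: GUEST_TIME_DIVISOR, host_time_ns() and
guest_time_ns() are made-up names, and in the real emulator the scaling
would have to live in the timer device model (and probably the RTC and
alarm paths too).

/*
 * Sketch: report a scaled-down view of host time to the guest, so a
 * guest running ~N times slower than real hardware still sees timers
 * and timeouts advance at a plausible rate relative to its own speed.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define GUEST_TIME_DIVISOR 10ull  /* assume the guest runs ~10x slower */

/* Host monotonic time in nanoseconds. */
static uint64_t host_time_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* What the emulated timer register would return: host time since
 * "device reset", divided by the scaling factor. */
static uint64_t guest_time_ns(uint64_t reset_host_ns)
{
    return (host_time_ns() - reset_host_ns) / GUEST_TIME_DIVISOR;
}

int main(void)
{
    uint64_t reset = host_time_ns();

    for (int i = 0; i < 3; i++) {
        sleep(1);  /* one real second passes on the host... */
        printf("host: %llu ms  guest sees: %llu ms\n",
               (unsigned long long)((host_time_ns() - reset) / 1000000ull),
               (unsigned long long)(guest_time_ns(reset) / 1000000ull));
    }
    return 0;
}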

Best regards
Thomas

> 
>>> We could require that the emulator needs X Bogomips to run, or to run a
>>> specific test suite.
>>>
>>> We could segment out tests that require higher performance and run them
>>> on faster VMs/etc.
>>
>> Do we already know which tests are slow and why? Maybe there are ways to
>> optimize the emulator. For example, if we execute lots of driver code
>> within the guest, maybe we can move some of that into the emulator's
>> binary, where it runs on the native machine.
> 
> Dunno.  But it's REALLY slow.  Native Linux on tbpl for a specific test: 1s.
> Local emulator (fast two-year-old desktop): 10s.  tbpl before the patch:
> 30-40s; after: 350-450s, and we're lucky it finishes at all.
> 
> So compared to AWS Linux native, it's ~30-40x slower without the patch,
> 300+x slower with.  (Again, this speaks to realtime stuff leaving no CPU
> for test running on tbpl.)  Others can speak to overall speed.
> 
>>> We could turn off certain tests on tbpl and run them on separate
>>> dedicated test machines (a bit similar to PGO).  There are downsides to
>>> this of course.
>>>
>>> Lastly, we could put in a bank of HW running B2G to run the tests like
>>> the Android test boards/phones.
>>
>> There are tests that instruct the emulator to trigger certain HW events.
>> We can't run them on actual phones.
> 
> Sure.  Most don't do that, I presume (very few do).
> 
>> To me, the idea of switching to an x86-based emulator seems to be the
>> most promising solution. What would be necessary?
> 
> Dunno.
> 

_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
