jimingham wrote:

> > The user and system times should be monotonically increasing. So you could
> > stop at a breakpoint, fetch the times, then run your program through a
> > little spin loop to burn some CPU before hitting a second breakpoint. Then
> > get the times again and assert that they are > the first set. You could
> > also set a timer in the test between the first and second stop and assert
> > that the difference in system and user time is less than or equal to the
> > timer difference. A single-threaded program can only run on one core at a
> > time, so that should always be true.
>
> I added a test for user time. System time seems really likely to be flaky in
> the unit test. It'll increase with kernel time. However, if I use side-effect
> free system calls like getpid() to try and increase that counter, it seems
> like that'll move around a lot from different kernels on different machines.
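As a rough illustration of the check described above (not the actual LLDB test), the same invariants can be exercised in plain Python: sample user time, burn CPU in a spin loop, sample again, and assert both that user time increased and that the single-threaded CPU-time delta does not exceed the wall-clock delta. `resource.getrusage` stands in here for the times lldb reports at each breakpoint; it is Unix-only.

```python
# Sketch of the monotonic user-time check, assuming a Unix host.
# resource.getrusage stands in for the per-stop times lldb would report.
import resource
import time

def user_time():
    return resource.getrusage(resource.RUSAGE_SELF).ru_utime

wall_start = time.monotonic()
t1 = user_time()  # "first breakpoint"

# Spin loop to burn some CPU between the two samples.
x = 0
for i in range(2_000_000):
    x += i

t2 = user_time()  # "second breakpoint"
wall_elapsed = time.monotonic() - wall_start

# The counter must not decrease after doing real work.
assert t2 > t1, "user time should increase after the spin loop"
# A single-threaded program runs on one core at a time, so its CPU time
# cannot exceed wall-clock time (small slack for timer granularity).
assert (t2 - t1) <= wall_elapsed + 0.05
```

The same upper bound would not hold for a multithreaded inferior, which is why the argument is restricted to single-threaded programs.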
It should never decrease, however. If you were just getting garbage values, then there should be a roughly even chance the second value will be less than the first. So this still gives some confidence this isn't totally bogus...

https://github.com/llvm/llvm-project/pull/88995
_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits