Hi Henry and Michal,

On 2023/12/8 17:56, Henry Wang wrote:
Hi Michal,

On Dec 8, 2023, at 17:50, Michal Orzel <michal.or...@amd.com> wrote:
On 08/12/2023 10:21, Henry Wang wrote:


Hi Michal,

On Dec 8, 2023, at 17:11, Michal Orzel <michal.or...@amd.com> wrote:
On 08/12/2023 10:05, Henry Wang wrote:

Hi Michal,

On Dec 8, 2023, at 16:57, Michal Orzel <michal.or...@amd.com> wrote:

Hi Henry,

On 08/12/2023 06:46, Henry Wang wrote:
diff --git a/automation/scripts/expect/fvp-base-smoke-dom0-arm64.exp b/automation/scripts/expect/fvp-base-smoke-dom0-arm64.exp
new file mode 100755
index 0000000000..25d9a5f81c
--- /dev/null
+++ b/automation/scripts/expect/fvp-base-smoke-dom0-arm64.exp
@@ -0,0 +1,73 @@
+#!/usr/bin/expect
+
+set timeout 2000
Do we really need such a big timeout (~30 min)?
Looking at your test job, it took 16 mins (quite a lot, but I know FVP is slow
and send_slow slows things down further).

This is a really good question. I had the same question while working on
the negative test today. A timeout of 2000 will indeed fail the job at about
30 min, and waiting for it is not really pleasant.

But on second thought: from my observation, the overall time currently varies
between 15 and 20 min, and having a 10-minute margin is not that crazy given
that we will probably add more testing to this job in the future, and if the
GitLab Arm worker is heavily loaded, FVP will probably become even slower. And
normally we don't even trigger the timeout, as the job will usually pass. So I
decided to keep this value.

Mind sharing your thoughts on a better value for the timeout? Maybe 25 min?
From what you said that the average is 15-20 min, I think we can leave it set
to 30. But I wonder if we can do something to decrease the average time.
~20 min is a lot even for FVP :) Have you tried setting send_slow to something
lower than 100ms? That said, we don't send too many characters to FVP, so I
doubt it would play a major role in the overall time.
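For reference, lowering the pacing would just be a matter of changing the
send_slow pair in the expect script. A minimal sketch, assuming a 10 ms delay
(the spawned command and the string sent below are only placeholders, not
taken from the posted patch):

    #!/usr/bin/expect
    # Minimal sketch: pace -s sends at 1 character every 10 ms instead of
    # the 100 ms used today.  Whether FVP's UART keeps up at this rate on
    # a loaded worker would need to be measured.
    set send_slow {1 .010}

    # The spawned command and the string are placeholders.
    spawn cat
    send -s "root\r"
    expect "root"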

I agree with the send_slow part. Actually, I have the same concern. Here is
my current understanding, and I think you can definitely help with your
knowledge: if you check the full log of Dom0 booting, for example [1], you
will find that we waste a lot of time starting the OS services (modloop,
udev-settle, etc.). All of these services are retried many times but in the
end they still fail to come up, and from my understanding they don't affect
the actual test(?). If we could somehow drop these services from the rootfs,
I think we could save a lot of time.

And honestly, I noticed that qemu-alpine-arm64-gcc suffers from the same
problem and also takes around 15 min to finish. So if we manage to trim these
services from the filesystem, we can save a lot of time there as well.
That is not true. Qemu runs the tests relatively fast, within a few minutes.
The reason you see e.g. 12 min for some Qemu jobs comes from the timeout we
set in the Qemu scripts. We don't yet have a solution (we could do the same as
the Qubes script) to detect test success early and exit before the timeout.
That is why, currently, the only way for Qemu tests to finish is by reaching
the timeout.

So the problem is not with the rootfs and services (the improvement would not
be significant) but with the simulation being slow. That said, this is
something we all know, and I expect FVP to only be used in scenarios which
cannot be tested using Qemu or real HW.
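For reference, the early-exit idea mentioned above would look roughly like
the sketch below in an expect script; the spawned command and the
"TEST PASSED" / "Kernel panic" markers are placeholders, not strings the
current tests are known to print:

    #!/usr/bin/expect
    set timeout 2000

    # Placeholder launch command; the real scripts spawn QEMU/FVP with
    # their full argument lists.
    spawn bash -c "./run-test.sh"

    # Return as soon as a known marker shows up instead of always running
    # into the timeout.
    expect {
        "TEST PASSED"  { exit 0 }
        "Kernel panic" { exit 1 }
        timeout        { exit 2 }
        eof            { exit 3 }
    }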

OK, you have a point. Let me do some experiments to see if I can improve
this. Otherwise, maybe we can live with it until we have a better solution.

Kind regards,
Henry


QEMU behaves like FVP with the use_real_time flag enabled. How about enabling
the use_real_time flag in CI for most test cases, but disabling it for some
time-sensitive ones? Normally, enabling use_real_time gives a several-fold
improvement in FVP performance.
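
A rough sketch of how this could be plumbed into the spawn line is below; I
am assuming the Base model exposes the switch as bp.refcounter.use_real_time
(please double-check against the model's --list-params output), and the rest
of the existing FVP arguments would stay unchanged:

    # Sketch only: the model name and the parameter path are assumptions,
    # not taken from the CI scripts.
    spawn FVP_Base_RevC-2xAEMvA -C bp.refcounter.use_real_time=1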

Cheers,
Wei Chen


~Michal

