On 23/12/2021 17:35, John Harrison wrote:

[snip]

On the other hand, the GuC log is useful for debugging certain issues and it is included automatically in the sysfs error capture along with all the other hardware and software state that we save.

There is intended to be a publicly released tool to decode the GuC log into a human readable format. So end users will be able to see what engine scheduling decisions were taken prior to the hang, for example. Unfortunately, that is not yet ready for release for a number of reasons. I don't know whether that would be released as part of the IGT suite or in some other manner.

Okay, it would be handy if it was part of IGT, maybe even intel_error_decode being able to use it to show everything interesting to kernel developers in one go. Or at least the log parsing tool being able to consume raw error capture.
I have some wrapper scripts which can do things like: take a raw error capture, run intel_error_decode, extract the GuC log portion, convert it to the binary format the decoder tool expects, extract the GuC firmware version from the capture to give to the decoder tool, and finally run the decoder tool. The intention is that all of these helper scripts will also be part of the log decoder release.

If you want to try it all out now, see the GuC log decoder wiki page (internal developers only).

Thanks, I'll see after the holiday break where we are with a certain project in terms of whether we are still hitting hangs.

[snip]

My view is that the current message is indeed woefully uninformative. However, it is more important to report context identification than engine instances. So sure, add the engine instance description, but also add something specific to the context (ce) as well. Ideally (for me) the GuC id, and maybe something else that uniquely identifies the context in KMD land for when not using GuC?

Not sure we need to go that far at this level, but even if we do, it could be a follow up to add new data to both backends. Not sure yet whether I care enough to drive this. My patch was simply a reaction to noticing, while debugging some DG2 hangs, that zero information is currently logged.
In terms of just reporting that a reset occurred, we already have the 'GPU HANG: ecode 12:1:fbffffff, in testfw_app [8177]' message. The ecode is a somewhat bizarre value but it does act as a 'something went wrong, your system is not happy' type message. Going beyond that, I think context identification is the next most useful thing to add.

But if you aren't even getting the 'GPU HANG' message then it sounds like something is broken with what we already have. So we should fix that as a first priority. If that message isn't appearing then it means there was no error capture so adding extra info to the capture won't help!

The issue I have is that the "GPU HANG ecode" messages are always "all zeros". I thought that was because GuC error capture was not there, but maybe it's something else.
Hmm. All zeros including the platform and engine class part or just the instdone part?

Class and instdone - all we were seeing was "[drm] GPU HANG: ecode 12:0:00000000".

Even on the CI run for this patch I see in the logs:

<5>[  157.243472] i915 0000:00:02.0: [drm] rcs0 GuC engine reset
<6>[  157.254568] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000

So there seem to be circumstances in which the GPU HANG line fails to work out the engine class.
Class zero is render. So it is correct.

It's a bitmask, so that's not quite correct; something is missing:

                for (cs = gt->engine; cs; cs = cs->next) {
                        if (cs->hung) {
                                hung_classes |= BIT(cs->engine->uabi_class);
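
To make that concrete, here is a minimal standalone sketch (not the actual i915 code; BIT() and the struct are simplified stand-ins) of the masking logic above, showing why a hung rcs0, whose uabi_class is 0, should show up as 1 in the ecode class field rather than 0:

```c
#define BIT(n) (1u << (n))

/* Simplified stand-in for the engine list walked by the capture code. */
struct fake_engine {
	unsigned int uabi_class;	/* 0 == render (rcs) */
	int hung;
};

static unsigned int hung_class_mask(const struct fake_engine *engines,
				    int count)
{
	unsigned int mask = 0;
	int i;

	for (i = 0; i < count; i++)
		if (engines[i].hung)
			mask |= BIT(engines[i].uabi_class);

	return mask;
}
```

So a capture that correctly flagged rcs0 as hung would give "ecode 12:1:...", and the "12:0:00000000" above suggests no engine was marked hung at all when the message was printed.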

The instdone value is engine register state and will have been cleared before i915 can read it if the reset was handled by GuC. That comes under the heading of state we need the new error capture API for. However, the context should be correctly identified as should the platform/engine class.

Currently, the recommended workaround is to set i915.reset=1 when debugging a hang issue. That will disable GuC based resets and fall back to old school i915 based resets (full GT rather than per engine). That means i915 will be able to read the engine state prior to the reset happening and thus produce a valid error capture / GPU HANG message.
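
For reference, the parameter can be set in the usual module parameter ways; a sketch, assuming the standard mechanisms (check modinfo i915 on your kernel):

        # On the kernel command line (e.g. via the bootloader):
        i915.reset=1

        # Or via modprobe configuration, e.g. /etc/modprobe.d/i915.conf:
        options i915 reset=1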

Ah.. that's the piece of the puzzle I was missing. Perhaps it should even be the default until error capture works.
A decision was taken that per-engine resets are of real use to end users, whereas valid register state in an error capture is only of use to i915 developers. Therefore, we can take the hit of reduced debuggability. Plus, you do get a lot of that information in the GuC log (as debug prints, essentially) if you have the verbosity set suitably high. So it is not impossible to get the information out even with GuC based engine resets. But the reset=1 fallback is certainly the easiest debug option.
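
If it helps, the GuC log verbosity mentioned above is also a module parameter; a sketch, assuming a recent kernel (the exact name and value range are version dependent, so verify with modinfo i915):

        options i915 guc_log_level=3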

It's tricky: error capture is useful for developers, but precisely when debugging issues reported by end users. And DG1 is available on the shelves to buy. You say the data is available in the GuC logs, but there is no upstream tool to read it. Luckily DG1 is behind force probe, so we get away with it for now.

Regards,

Tvrtko
