Hi Mike,

On Mon, Jan 11, 2021 at 12:09:12PM +0000, Mike Leach wrote:
> Hi Leo,
> 
> I think there is an issue here in that your modification assumes that
> all cpus in the system are of the same ETM type. The original routine
> allowed for differing ETM types, thus differing cpu ETM field lengths
> between ETMv4 / ETMv3, the field size was used after the relevant
> magic number for the cpu ETM was read.
> 
> You have replaced two different sizes - with a single calculated size.

Thanks for pointing this out.

> Moving forwards we are seeing the newer FEAT_ETE protocol drivers
> appearing on the list, which will ultimately need a new metadata
> structure.
> 
> We have had discussions within ARM regarding the changing of the
> format to be more self describing - which should probably be opened
> out to the CS mailing list.

I think we have two options here.  One option is to use
__perf_cs_etmv3_magic/__perf_cs_etmv4_magic as a delimiter marking the
start of the next CPU's metadata array: when copying the metadata,
always check the next item in the buffer; if it is
__perf_cs_etmv3_magic or __perf_cs_etmv4_magic, break the loop and
start copying the metadata array for the next CPU.  The suggested
change is pasted below.

The other option is to drop patches 03,05/07 from the series and leave
the backward compatibility fix for a separate patch series based on
the self-describing format.  In particular, if you think the first
option would get in the way of enabling the self-describing format
later, I am happy to drop patches 03,05.

What do you think?

Thanks,
Leo

---8<---

diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
index a2a369e2fbb6..edaec57362f0 100644
--- a/tools/perf/util/cs-etm.c
+++ b/tools/perf/util/cs-etm.c
@@ -2558,12 +2558,19 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
                                err = -ENOMEM;
                                goto err_free_metadata;
                        }
-                       for (k = 0; k < CS_ETM_PRIV_MAX; k++)
+                       for (k = 0; k < CS_ETM_PRIV_MAX; k++) {
                                metadata[j][k] = ptr[i + k];
 
+                               if (ptr[i + k + 1] == __perf_cs_etmv3_magic ||
+                                   ptr[i + k + 1] == __perf_cs_etmv4_magic) {
+                                       k++;
+                                       break;
+                               }
+                       }
+
                        /* The traceID is our handle */
                        idx = metadata[j][CS_ETM_ETMTRACEIDR];
-                       i += CS_ETM_PRIV_MAX;
+                       i += k;
                } else if (ptr[i] == __perf_cs_etmv4_magic) {
                        metadata[j] = zalloc(sizeof(*metadata[j]) *
                                             CS_ETMV4_PRIV_MAX);
@@ -2571,12 +2578,19 @@ int cs_etm__process_auxtrace_info(union perf_event *event,
                                err = -ENOMEM;
                                goto err_free_metadata;
                        }
-                       for (k = 0; k < CS_ETMV4_PRIV_MAX; k++)
+                       for (k = 0; k < CS_ETMV4_PRIV_MAX; k++) {
                                metadata[j][k] = ptr[i + k];
 
+                               if (ptr[i + k + 1] == __perf_cs_etmv3_magic ||
+                                   ptr[i + k + 1] == __perf_cs_etmv4_magic) {
+                                       k++;
+                                       break;
+                               }
+                       }
+
                        /* The traceID is our handle */
                        idx = metadata[j][CS_ETMV4_TRCTRACEIDR];
-                       i += CS_ETMV4_PRIV_MAX;
+                       i += k;
                }
 
                /* Get an RB node for this CPU */
