Hi Peter,

On 7/16/24 11:09 AM, Peter Maydell wrote:
On Tue, 16 Jul 2024 at 14:48, Alex Bennée <alex.ben...@linaro.org> wrote:

Gustavo Romero <gustavo.rom...@linaro.org> writes:

Hi Alex,

On 7/16/24 8:42 AM, Alex Bennée wrote:
Coverity reported a memory leak (CID 1549757) in this code and its
admittedly rather clumsy handling of extending the command table.
Instead of handing over a full array of the commands, let's use the
lighter-weight GPtrArray and simply test for the presence of each
entry as we go. This avoids complications of transferring ownership of
arrays and keeps the final command entries as static entries in the
target code.
How did you run Coverity to find the leak? I'm wondering what's the
quickest way to check it next time.

Coverity is only run in the cloud on the released build. There is a
container somewhere, but I don't know how it's used.

The Coverity cloud stuff comes in two parts:
  (1) you build locally with the Coverity tools, which creates
a big opaque build-artefact
  (2) you upload that artefact to the cloud server, and the
actual analysis happens on the cloud

The container stuff and the integration with the Gitlab CI
are there for the sole purpose of automating the "local build
and upload" steps. You can't do an analysis run locally.
(Well, you probably can if you have the paid-for commercial
version of the tooling, but we haven't got any kind of
setup for doing that.)

We only do analysis runs on head-of-git because the Coverity Scan
resource limits for open source projects give us about one
complete scan a day. So this is all "after the fact" stuff.
Developers who want to look at the scan results can create
an account via https://scan.coverity.com/projects/qemu .
Triaging new Coverity reports is a bit tedious because there
are a ton of false positives...

Thanks for the explanation!


Cheers,
Gustavo
