On Tue, May 12, 2026 at 11:21:42AM -0600, Jonathan Corbet wrote:
> Willy Tarreau <[email protected]> writes:
> 
> > AI tools are increasingly used to assist in bug discovery. While these
> > tools can identify valid issues, reports that are submitted without
> > manual verification often lack context, contain speculative impact
> > assessments, or include unnecessary formatting. Such reports increase
> > triage effort, waste maintainers' time, and may be ignored.
> >
> > Reports where the reporter has verified the issue and the proposed fix
> > typically meet quality standards. This documentation outlines specific
> > requirements for length, formatting, and impact evaluation to reduce
> > the effort needed to deal with these reports.
> >
> > Cc: Greg KH <[email protected]>
> > Acked-by: Greg Kroah-Hartman <[email protected]>
> > Reviewed-by: Leon Romanovsky <[email protected]>
> > Signed-off-by: Willy Tarreau <[email protected]>
> > ---
> >  Documentation/process/security-bugs.rst | 57 +++++++++++++++++++++++++
> >  1 file changed, 57 insertions(+)
> 
> One nit:
> 
> > +  * **Impact Evaluation**: Many AI-generated reports lack an understanding of
> > +    the kernel's threat model and go to great lengths inventing theoretical
> > +    consequences.
> 
> If only we had a shiny new document describing that threat model that we
> could reference here... :)

Ah yes, a link to that would make things better, but don't we have that
elsewhere in this series?

thanks,

greg k-h
