On Tue, 2023-03-28 at 08:08 -0400, Eric Feng wrote:
> My apologies. Forgot to CC the mailing list in my previous e-mail.
> Original reply below:
> 
> _______
> 
> 
> Hi David,
> 
> Thank you for your feedback!
> 
> > 
[...snip...]

> > 
> 
> > > Error-handling checking: Various checks for common errors such as
> > > dereferencing a NULL value.
> > 
> > Yes.  This is already done by -fanalyzer, but we need some way for
> > it
> > to know the outcomes of specific functions: e.g. one common pattern
> > is
> > that API function "PyFoo_Bar" could either:
> > (a) succeed, returning a PyObject * that the caller "owns" a
> > reference
> > to, or
> > (b) fail, returning NULL, and setting an exception on the thread-
> > local
> > interpreter state object
> 
> 
> Sounds good. In other words, the infrastructure of this check is
> already there and our job is to retrofit CPython API-specific
> knowledge for this task. Please correct me if my understanding here
> is
> wrong.

For the task above, I think it's almost all there, it's "just" a case
of implementing the special-case knowledge about the CPython API,
mostly via known_function subclasses.

You might need to implement a new plugin hook for reference-count
checking: IIRC cpychecker had some kind of custom check when returning
from the top-level function in the analysis path that issued a warning
about any PyObjects that had the wrong ob_refcnt.  So you might need to
add a little infrastructure for that, but hopefully that's a relatively
simple task.

Some of the other tasks on the list might need other extensions of the
analyzer, some more involved than others; e.g.
- the format-string checking would need integrating the analyzer with
-Wformat (see the two bugzilla entries I posted; this could take a fair
bit of work, but would be of great benefit beyond just this project)
- checking of PyMethodDef tables would need an extra hook for walking
global initializers (hopefully easy),
etc.

> 
> > > Errors in exception-handling: Checks for situations in which
> > > functions
> > > returning PyObject* that is NULL are not gracefully handled.
> > 
> > Yes; detection of this would automatically happen if we implemented
> > known_function subclasses e.g. for the pattern above.
> 
> 
> Sounds good. I will merge this task with the previous one in the next
> iteration of this proposal since it will be handled as a side effect
> of implementing the previous task.

(nods)

> 
> 
> > You don't mention testing.  I'd expect part of the project to be
> > the
> > creation of a regression test suite, with each step adding test
> > coverage for the features it adds.  There are lots of test cases in
> > the
> > existing cpychecker test suite that you could reuse  - though
> > beware,
> > the test harness there is very bad - I made multiple mistakes:
> > - expecting "gold" outputs from test cases - specific stderr
> > strings,
> > which make the tests very brittle
> > - external scripts associated with .c files, to tell it how to
> > invoke
> > the compiler, which make the tests a pain to maintain and extend.
> > 
> > GCC's own test suite handles this much better using DejaGnu where:
> > - we test for specific properties of interest in the behavior of
> > each
> > test (rather than rigidly specifying everything about the behavior
> > of
> > each test)
> > - the tests are expressed as .c files with "magic" comments
> > containing
> > directives for the test harness
> > 
> > That said DejaGnu is implemented in Tcl, which is a pain to deal
> > with;
> > you could reuse DejaGnu, or maybe Python might be a better choice;
> > I'm
> > not sure.
> 
> 
> You're right, I forgot to mention that in the initial draft. Thank
> you
> for pointing that out. I agree with the bottom-up approach with
> respect to building a comprehensive regression test suite. In terms
> of
> specifically what to implement the suite in, I'll explore DejaGnu/Tcl
> in more detail before making a more informed decision.

(nods)

> 
> > It might be good to add new attributes to CPython's headers so that
> > the
> > function declarations become self-descriptive about e.g.
> > reference-
> > counting semantics (in a way readable both to humans and to static
> > analysis tools).  If so, this part of the project would involve
> > working
> > with the CPython development community, perhaps writing a PEP:
> >   https://peps.python.org/pep-0001/
> > Again, this would be an ambitious goal, probably better done after
> > there's a working prototype.
> 
> 
> That would be very exciting. However, I'm not sure if I fully
> understand what you mean. Can you clarify by giving an example of
> what
> the new attributes you had in mind might look like and how they would
> help (for example with respect to reference counting semantics)?

In cpychecker I added some custom function attributes:
  https://gcc-python-plugin.readthedocs.io/en/latest/cpychecker.html
which were:
  __attribute__((cpychecker_returns_borrowed_ref))
  __attribute__((cpychecker_steals_reference_to_arg(n)))

I imagine that the plugin could provide similar attributes, and that
CPython could be patched so that the autoconf checks detect the
attributes, and the headers have some macros that optionally use them.
For example, consider e.g. list objects, where the API declarations
here:
https://github.com/python/cpython/blob/main/Include/listobject.h

are simply:

  PyAPI_FUNC(Py_ssize_t) PyList_Size(PyObject *);
  PyAPI_FUNC(PyObject *) PyList_GetItem(PyObject *, Py_ssize_t);

etc, as documented here:
https://docs.python.org/3/c-api/list.html

and implemented in here:
https://github.com/python/cpython/blob/main/Objects/listobject.c

Given that the implementations of both PyList_Size and PyList_GetItem
begin with:

    if (!PyList_Check(op)) {

and PyList_Check is a macro that unconditionally dereferences its
pointer operand:

#define PyList_Check(op) \
    PyType_FastSubclass(Py_TYPE(op), Py_TPFLAGS_LIST_SUBCLASS)

these declarations require non-NULL args, and thus perhaps we could add
some new macros making the above decls look like:

  PyAPI_FUNC(Py_ssize_t) PyList_Size(PyObject *)
    PyAPI_NonNullArg(1);

  PyAPI_FUNC(PyObject *) PyList_GetItem(PyObject *, Py_ssize_t)
    PyAPI_NonNullArg(1)
    PyAPI_ReturnsBorrowedRef();

or somesuch, documenting the required non-nullness of the args, and
that if PyList_GetItem returns successfully, the caller is borrowing a
reference to the object.

But exactly what these macros would look like would be a decision for
the CPython community (hence do it via PEP, based on a sample
implementation).

> 
> Incidentally, I forgot to mention in my previous email, but I believe
> the 350-hour option is the one that is more appropriate for this
> project. Please let me know otherwise.

Yeah, this sounds like a big project.  Fortunately there are a lot of
possible subtasks in this one, and the project has benefits to GCC and
to CPython even if you only get a subset of the ideas done in the time
available (refcount checking being probably the highest-value subtask).

Hope this all makes sense
Dave

> 
> Best,
> Eric
> 
> On Sun, Mar 26, 2023 at 11:58 AM David Malcolm <dmalc...@redhat.com>
> wrote:
> > 
> > On Sat, 2023-03-25 at 15:38 -0400, Eric Feng via Gcc wrote:
> > > Hi GCC community,
> > > 
> > > For GSoC, I am extremely interested in working on the selected
> > > project
> > > idea with respect to extending the static analysis pass. In
> > > particular, porting gcc-python-plugin's cpychecker to a plugin
> > > for
> > > GCC
> > > -fanalyzer as described in
> > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107646.
> > 
> > Hi Eric, welcome to the GCC community.
> > 
> > I'm the author/maintainer of GCC's static analysis pass.  I'm also
> > the
> > author of gcc-python-plugin and its erstwhile "cpychecker" code, so
> > I'm
> > pleased that you're interested in the project.
> > 
> > I wrote gcc-python-plugin and cpychecker over a decade ago when I
> > was
> > focused on CPython development (before I switched to GCC
> > development),
> > but it's heavily bitrotted over the years, as I didn't have enough
> > cycles to keep it compatible with changes in both GCC and CPython
> > whilst working on GCC itself.  In particular, the cpychecker code
> > stopped working a number of GCC releases ago.  However, the
> > cpychecker
> > code inspired much of my work on GCC's static analysis pass and on
> > its
> > diagnostics subsystem, so much of it now lives on in C++ form as
> > core
> > GCC functionality.  Also, the Python community would continue to
> > find
> > static analysis of CPython extension modules useful, so it would be
> > good to have the idea live on as a GCC plugin on top of -fanalyzer.
> > 
> > >  Please find an
> > > initial draft of my proposal below and let me know if it is a
> > > reasonable starting point. Please also correct me if I am
> > > misunderstanding any particular tasks and let me know what areas
> > > I
> > > should add more information for or what else I may do in
> > > preparation.
> > 
> > Some ideas for familiarizing yourself with the problem space:
> > 
> > You should try building GCC from source, and hack in a trivial
> > warning
> > that emits "hello world, I'm compiling function 'foo'".  I wrote a
> > guide to GCC for new contributors here that should get you started:
> >   https://gcc-newbies-guide.readthedocs.io/en/latest/
> > This will help you get familiar with GCC's internals, and although
> > the
> > plan is to write a plugin, I expect that you'll run into places
> > where a
> > patch to GCC itself is more appropriate (bugs and missing
> > functionality
> > ), so having your own debug build of GCC is a good idea.
> > 
> > You should become familiar with CPython's extension and embedding
> > API.
> > See the excellent documentation here:
> >   https://docs.python.org/3/extending/extending.html
> > It's probably a good exercise to write your own trivial CPython
> > extension module.
> > 
> > You can read the old cpychecker code inside the gcc-python-plugin
> > repository, and I gave a couple of talks on it at PyCon a decade
> > ago:
> > 
> > PyCon2012: "Static analysis of Python extension modules using GCC"
> > https://pyvideo.org/pycon-us-2012/static-analysis-of-python-extension-modules-using.html
> > 
> > PyCon2013: "Death by a thousand leaks: what statically-analysing
> > 370
> > Python extensions looks like"
> > https://pyvideo.org/pycon-us-2013/death-by-a-thousand-leaks-what-statically-analys.html
> > https://www.youtube.com/watch?v=bblvGKzZfFI
> > 
> > (sorry about all the "ums" and "errs"; it's fascinating and
> > embarrassing to watch myself from 11 years ago on this, and see how
> > much I've both forgotten and learned in the meantime.  Revisiting
> > this
> > work, I'm ashamed to see that I was referring to the implementation
> > as
> > based on "abstract interpretation" (and e.g. absinterp.py), when I
> > now
> > realize it's actually based on symbolic execution, as is GCC's
> > -fanalyzer.)
> > 
> > Also, this was during the transition era between Python 2 and
> > Python 3,
> > whereas now we only have to care about Python 3.
> > 
> > There may be other caveats; I haven't fully rewatched those talks
> > yet
> > :-/
> > 
> > Various comments inline below, throughout...
> > 
> > > 
> > > _______
> > > 
> > > Describe the project and clearly define its goals:
> > > One pertinent use case of the gcc-python plugin is as a static
> > > analysis tool for CPython extension modules.
> > 
> > It might be more accurate to use the past tense when referring to
> > the
> > gcc-python plugin, alas.
> > 
> > >  The main goal is to help
> > > programmers writing extensions identify common coding errors.
> > > Broadly,
> > > the goal of this project is to port the functionalities of
> > > cpychecker
> > > to a -fanalyzer plugin.
> > 
> > (nods)
> > 
> > > 
> > > Below is a brief description of the functionalities of the static
> > > analysis tool which I will work on porting over to a -fanalyzer
> > > plugin. The structure of the objectives is taken from the
> > > gcc-python-plugin documentation:
> > > 
> > > Reference count checking: Manipulation of PyObjects is done via
> > > the
> > > CPython API and in particular with respect to the objects'
> > > reference
> > > count. When the reference count belonging to an object drops to
> > > zero,
> > > we should free all resources associated with it. This check helps
> > > ensure programmers identify problems with the reference count
> > > associated with an object. For example, memory leaks with respect
> > > to
> > > forgetting to decrement the reference count of an object
> > > (analogous
> > > to
> > > malloc() without corresponding free()) or perhaps object access
> > > after
> > > the object's reference count is zero (analogous to access after
> > > free()).
> > 
> > (nods)
> > > 
> > > Error-handling checking: Various checks for common errors such as
> > > dereferencing a NULL value.
> > 
> > Yes.  This is already done by -fanalyzer, but we need some way for
> > it
> > to know the outcomes of specific functions: e.g. one common pattern
> > is
> > that API function "PyFoo_Bar" could either:
> > (a) succeed, returning a PyObject * that the caller "owns" a
> > reference
> > to, or
> > (b) fail, returning NULL, and setting an exception on the thread-
> > local
> > interpreter state object
> > 
> > 
> > > 
> > > Errors in exception-handling: Checks for situations in which
> > > functions
> > > returning PyObject* that is NULL are not gracefully handled.
> > 
> > Yes; detection of this would automatically happen if we implemented
> > known_function subclasses e.g. for the pattern above.
> > > 
> > > Format string checking: Verify that arguments to various CPython
> > > APIs
> > > which take format strings are correct.
> > 
> > Have a look at:
> > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107017
> > ("RFE: support printf-style formatted functions in -fanalyzer")
> > and:
> > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100121
> > ("RFE: plugin support for -Wformat via __attribute__((format()))")
> > 
> > 
> > 
> > > 
> > > Associating PyTypeObject instances with compile-time-types:
> > > Verify
> > > that the run-time type of a PyTypeObject matches its
> > > corresponding
> > > compile-time type for inputs where both are provided.
> > 
> > (nods)
> > 
> > > 
> > > Verification of PyMethodDef tables: Verify that the function in
> > > PyMethodDef tables matches the calling convention of the ml_flags
> > > set.
> > 
> > (nods)
> > 
> > > 
> > > I suspect a good starting point would be existing proof-of-
> > > concept
> > > -fanalyzer plugins such as the CPython GIL example
> > > (analyzer_gil_plugin). Please let me know of any additional
> > > pointers.
> > 
> > Yes.
> > 
> > There are also two examples of "teaching" the analyzer about the
> > behavior of specific functions via subclassing known_function in:
> >   analyzer_known_fns_plugin.c
> > and:
> >  analyzer_kernel_plugin.c
> > 
> > 
> > > If there is time to spare, I think it is reasonable to extend the
> > > capabilities of the original checker as well (more details in the
> > > expected timeline below).
> > > 
> > > Provide an expected timeline:
> > > I suspect the first task to take the longest since it is
> > > relatively
> > > involved and it also includes getting the initial infrastructure
> > > of
> > > the plugin up. From the experience of the first task, I hope the
> > > rest
> > > of the tasks would be implemented faster. Additionally, I
> > > understand
> > > that the current timeline outline below is too vague. I wished to
> > > check in with the community for some feedback on whether I am in
> > > the
> > > right ballpark before committing to more details.
> > > 
> > > Week 1 - 7: Reference counting checking
> > > Week 8: Error-handling checking
> > > Week 9: Errors in exception handling
> > > Week 10: Format string checking
> > > Week 11: Verification of PyMethodDef tables
> > > Week 12: I am planning the last week to be a safety buffer in case any of
> > > the
> > > above tasks take longer than initially expected. In case
> > > everything
> > > goes smoothly and there is time to spare, I think it is
> > > reasonable to
> > > spend the time extending the capabilities of the original pass.
> > > Some
> > > ideas include extending the subset of CPython API that cpychecker
> > > currently supports, matching up similar traces to solve the issue
> > > of
> > > duplicate error reports, and/or addressing any of the other
> > > caveats
> > > currently mentioned in the cpychecker documentation. Additional
> > > ideas
> > > are welcome of course.
> > 
> > FWIW I think it's a very ambitious project, but you seem capable.
> > 
> > You don't mention testing.  I'd expect part of the project to be
> > the
> > creation of a regression test suite, with each step adding test
> > coverage for the features it adds.  There are lots of test cases in
> > the
> > existing cpychecker test suite that you could reuse  - though
> > beware,
> > the test harness there is very bad - I made multiple mistakes:
> > - expecting "gold" outputs from test cases - specific stderr
> > strings,
> > which make the tests very brittle
> > - external scripts associated with .c files, to tell it how to
> > invoke
> > the compiler, which make the tests a pain to maintain and extend.
> > 
> > GCC's own test suite handles this much better using DejaGnu where:
> > - we test for specific properties of interest in the behavior of
> > each
> > test (rather than rigidly specifying everything about the behavior
> > of
> > each test)
> > - the tests are expressed as .c files with "magic" comments
> > containing
> > directives for the test harness
> > 
> > That said DejaGnu is implemented in Tcl, which is a pain to deal
> > with;
> > you could reuse DejaGnu, or maybe Python might be a better choice;
> > I'm
> > not sure.
> > 
> > 
> > It might be good to add new attributes to CPython's headers so that
> > the
> > function declarations become self-descriptive about e.g.
> > reference-
> > counting semantics (in a way readable both to humans and to static
> > analysis tools).  If so, this part of the project would involve
> > working
> > with the CPython development community, perhaps writing a PEP:
> >   https://peps.python.org/pep-0001/
> > Again, this would be an ambitious goal, probably better done after
> > there's a working prototype.
> > 
> > 
> > > 
> > > Briefly introduce yourself and your skills and/or
> > > accomplishments:
> > > I am a current Master's in Computer Science student at Columbia
> > > University. I did my undergraduate degrees at Bates College (B.A.
> > > Math) and Columbia University (B.S. Computer Science)
> > > respectively. My
> > > interests
> > > are primarily in systems, programming languages, and compilers.
> > > 
> > > At Columbia, I work in the group led by Professor Stephen Edwards
> > > on
> > > the SSLANG programming language: a language built atop the Sparse
> > > Synchronous Model. For more formal information on the Sparse
> > > Synchronous Model, please take a look at "The Sparse Synchronous
> > > Model
> > > on Real Hardware" (2022). Please find our repo, along with my
> > > contributions, here: https://github.com/ssm-lang/sslang (my
> > > GitHub
> > > handle is @efric). My main contribution to the compiler so far
> > > involved adding a static inlining optimization pass with another
> > > SSLANG team member. Our implementation is mostly based on the
> > > work
> > > shown in "Secrets of the Glasgow Haskell Compiler Inliner"
> > > (2002),
> > > with modifications as necessary to better fit our goals. The
> > > current
> > > implementation is a work in progress and we are still working on
> > > adding (many) more features to it. My work in this project is
> > > written
> > > in Haskell.
> > > 
> > > I also conduct research in the Columbia Systems Lab.
> > > Specifically, my
> > > group and I, advised by Professor Jason Nieh, work on secure
> > > containerization with respect to untrusted software systems.
> > > Armv9-A
> > > introduced Realms, secure execution environments that are opaque
> > > to
> > > untrusted operating systems, as part of the Arm Confidential
> > > Compute
> > > Architecture (CCA). Please find more information on CCA in
> > > "Design
> > > and
> > > Verification of the Arm Confidential Compute Architecture"
> > > (2022).
> > > Introduced together was the Realm Management Monitor (RMM), an
> > > interface for hypervisors to allow secure virtualization
> > > utilizing
> > > Realms and the new hardware support. Currently, the Realm
> > > isolation
> > > boundary is at the level of entire VMs. We are working on
> > > applying
> > > Realms to secure containers. Work in this project is mostly at
> > > the
> > > kernel and firmware level and is written in C and ARM assembly.
> > > 
> > > Regarding experience with compilers beyond SSLANG, my
> > > undergraduate education included a class on compilers that
> > > involved
> > > writing passes for Clang/LLVM. More recently, I am taking a
> > > graduate-level class on Types, Languages, and Compilers where my
> > > partner and I are working on a plugin for our own small toy
> > > language
> > > compiler which would be able to perform type inference. The
> > > plugin
> > > would generate relevant constraints and solve them on behalf of
> > > the
> > > compiler. This project is still in its early stages, but the idea
> > > is
> > > to delegate type inference functionalities to a generic library
> > > given
> > > some information instead of having to write your own constraint
> > > solver.
> > 
> > It sounds like you may know more about the theory than I do!
> > 
> > > 
> > > Thank you for reviewing my proposal!
> > 
> > Thanks for sending it; hope the above is helpful (and not too
> > intimidating!)
> > 
> > Dave
> > 
> 
