Now that I've packaged the GEF gdb extension (package "gdb-gef" in Rawhide), which provides the "got-audit" command, I wanted to ask again whether there's interest in monitoring critical packages for signs of GOT tampering.

The "got-audit" gdb command examines a running process to print a list of symbols, and the file backing the area of memory where each symbol points.  For example:

accept@GLIBC_2.2.5 : /usr/lib64/libc.so.6
access@GLIBC_2.2.5 : /usr/lib64/libc.so.6
AES_encrypt@OPENSSL_3.0.0 : /usr/lib64/libcrypto.so.3
AES_set_encrypt_key@OPENSSL_3.0.0 : /usr/lib64/libcrypto.so.3
...
EVP_PKEY_set1_RSA@OPENSSL_3.0.0 : /usr/lib64/liblzma.so.5 :: ERROR EVP_PKEY_set1_RSA not exported by /usr/lib64/liblzma.so.5

In addition to printing the mapping, the command reports an error in two cases: when two loaded libraries both export the same symbol (e.g., if EVP_PKEY_set1_RSA appeared in both libcrypto.so and liblzma.so), and when a symbol has been resolved to a file that doesn't export it (e.g., if EVP_PKEY_set1_RSA points to liblzma.so, but liblzma.so doesn't export a symbol of that name).
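To make the latter check concrete, here is a rough sketch of the idea outside of gdb.  This is not the GEF implementation; the use of readelf and the example path are purely illustrative.  Given a symbol name and the file backing the address it resolved to, check whether that file's dynamic symbol table actually defines the symbol:

import subprocess

def exports_symbol(library_path, symbol):
    """Return True if library_path's dynamic symbol table defines 'symbol'."""
    out = subprocess.run(["readelf", "--dyn-syms", "--wide", library_path],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # A defined symbol has a section index other than UND in the seventh
        # column and its (possibly versioned) name in the eighth column.
        if len(fields) >= 8 and fields[6] != "UND" \
                and fields[7].split("@")[0] == symbol:
            return True
    return False

# Illustrative only: the situation from the xz-utils attack.
if not exports_symbol("/usr/lib64/liblzma.so.5", "EVP_PKEY_set1_RSA"):
    print("ERROR EVP_PKEY_set1_RSA not exported by /usr/lib64/liblzma.so.5")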

In the case of the xz-utils attack, the got-audit command would have logged an error due to the latter check, as illustrated above.

(As a topic for later: the tirpc library exports functions with the same name as functions that appear in libc, so the behavior of erroring on duplicate symbols needs to be rationalized.  Maybe an exemption for libtirpc.so?  Are there other libraries that do this?)
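If we go that route, one possibility, sketched below with placeholder entries rather than a real list, would be a small exemption list consulted by the duplicate-export check:

# Hypothetical exemption list for the duplicate-export check: (symbol,
# library basename) pairs that are known to legitimately shadow another
# library's export.  The entry here is a placeholder, not a real list.
DUPLICATE_EXPORT_EXEMPTIONS = {
    ("xdr_void", "libtirpc.so.3"),
}

def is_exempt(symbol, library_basename):
    """True if this duplicate export should not be reported as an error."""
    return (symbol, library_basename) in DUPLICATE_EXPORT_EXEMPTIONS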

I have a proof-of-concept test implemented for the openssh package, in this pull request: https://src.fedoraproject.org/rpms/openssh/pull-request/73
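The core of such a test amounts to attaching gdb to the running daemon and scanning the got-audit output.  A stripped-down sketch of that idea (not the actual test in the pull request; it assumes GEF is loaded at gdb startup, and the pgrep/sshd details are just for illustration) looks roughly like this:

import subprocess, sys

def audit_pid(pid):
    """Attach gdb to 'pid', run got-audit, and return the command output."""
    result = subprocess.run(["gdb", "-batch", "-p", str(pid),
                             "-ex", "got-audit"],
                            capture_output=True, text=True)
    return result.stdout

# Illustrative: audit the oldest sshd process on the system.
pid = int(subprocess.run(["pgrep", "-o", "sshd"], capture_output=True,
                         text=True, check=True).stdout.split()[0])
errors = [line for line in audit_pid(pid).splitlines() if "ERROR" in line]
for line in errors:
    print(line)
sys.exit(1 if errors else 0)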

If we are interested in building out protection against this class of attack, I have a number of questions about the best way to do that:

The current implementation compares the output to a stored expected result.  While I like the idea of keeping historical records, I'm not sure I can justify the developer overhead of updating the expected-result file as it evolves with updates to the program and to the libraries it uses.  It would be simpler to just watch for errors in the output.

If we only watch for errors in the output, it would be much easier to convert this into a generic test and run it against more daemons.

The proof-of-concept only audits symbols in the main binary, but each loaded shared object has its own set of dynamic symbols resolved from other libraries.  The output is quite large if we log the full set, and, more importantly, it will change much more often; but if we only look for errors, we can use "got-audit --all" to also watch symbol resolution within the shared objects.

One caveat: if we only watch for errors in the output, the test could very easily break invisibly.  If it stopped working properly and printed nothing, we might go on thinking we had a mitigation in place when the test had actually become a no-op.  A sanity check along the lines of the sketch below would guard against that.
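For example, something like this, where the minimum mapping count is an arbitrary threshold to be tuned per daemon:

def check_audit_output(output):
    """Fail on ERROR lines, but also fail if the output looks empty/broken."""
    mappings = [l for l in output.splitlines() if " : " in l]
    if len(mappings) < 10:   # arbitrary minimum; tune per daemon
        raise RuntimeError("got-audit produced no usable output; "
                           "the test itself may be broken")
    if any("ERROR" in l for l in mappings):
        raise RuntimeError("possible GOT tampering detected")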

Is this work interesting?  Should I continue working on it?