Hi Steffen,

Steffen Nurpmeso wrote:
How about malloc hookability?

https://developers.redhat.com/articles/2021/08/25/securing-malloc-glibc-why-malloc-hooks-had-go
"The key misfeature of the debugging hooks, though, was their presence as unprotected function pointers that were guaranteed to be executed at specific events. This made the hooks an easy exploit primitive in practically every program that ran on Linux distributions. A trivial search for __malloc_hook "house of" turns up a long list of exploit methods that use the hooks as either an intermediate step or the final goal for the exploit."

I prefer to use valgrind. Someday someone will discover that preloading malloc can also be exploited.

git (at least an automated conversion to Savannah / xy)?

Consider lzip as a work of literature. I just won't reveal my writing habits to the world.

I think the BSDs refrain because of the GPL; at least it is v2,
not v3, but still. zstd, by contrast, later changed to a dual
BSD-style OR GPLv2 license, saying "You may select, at your
option, one of the above-listed licenses".

There is a fully functional permissively licensed implementation of the lzip data compressor: http://www.nongnu.org/lzip/pdlzip.html

Pdlzip is free software. The (de)compression code is in the public domain. The rest is under a 2-clause BSD license.

Lzlib is free software distributed under a 2-clause BSD license.

While CRC-32 is ok, I guess people (including me) doubt its
viability for long-term archiving, especially when compared with
other algorithms.  It is not as bad as it was years ago, since
most people surely have lots of copies, and filesystems use
checksumming.  But for a standalone archive, CRC-32 fails badly;
for example smhasher says "insecure, 8590x collisions, distrib,
PerlinNoise":

The tests performed by smhasher are 100% unrelated to error detection in a decompressor context. CRC32 is probably optimal for detecting errors in lzip members. See http://www.nongnu.org/lzip/manual/lzip_manual.html#Quality-assurance

"Lzip, like gzip and bzip2, uses a CRC32 to check the integrity of the decompressed data because it provides optimal accuracy in the detection of errors up to a compressed size of about 16 GiB, a size larger than that of most files. In the case of lzip, the additional detection capability of the decompressor reduces the probability of undetected errors several million times more, resulting in a combined integrity checking optimally accurate for any member size produced by lzip."

See also http://www.nongnu.org/lzip/safety_of_the_lzip_format.html#lzma_crc
'4.1 Interaction between LZMA compression and CRC32' and '7 Conclusions':

"After 14 years of testing, the MTBF of lzip can only be estimated because not even one false negative has ever been observed. If one were to continuously decompress corrupt lzip files of about one megabyte in size (10 decompressions per second), each of them containing the kind of corruption most difficult to detect (one random bit flip), then a false negative would be expected to happen every 694 million years."

Best regards,
Antonio.

