Good morning Prayank,

I think this is still good to do, controversial or no, but then I am 
permanently under a pseudonym anyway, for what that is worth.

> Few questions for everyone reading this email:
>
> 1.What is better for Security? Trusting authors and their claims in PRs or a 
> good review process?

Review, of course.

> 2.Few people use commits from unmerged PRs in production. Is it a good 
> practice?

Not unless they carefully reviewed it and are familiar enough with the codebase 
to do so.
In practice, core maintainers of projects will **very** occasionally put 
unmerged PRs in experimental semi-production servers to get data on them, but 
they tend to be very familiar with the code, being core maintainers, and 
presumably have a better-than-average probability of catching security issues 
beforehand.

> 3.Does this exercise help us in being prepared for worst?

I personally believe it does.

Do note that in practice humans, being lazy, will come to trust long-time 
contributors, and may reduce review for them just to keep their workload down, 
so that is not tested (since you will be making throwaway accounts).
However, long-time contributors introducing security vulnerabilities tend to be 
a good bit rarer anyway (reputations are valuable), so this somewhat matches 
expected problems (i.e. newer contributors deliberately or accidentally (due to 
unfamiliarity) introducing vulnerabilities).

I think it would be valuable to lay out exactly what you intend to do, e.g.

* Generate commitments of the pseudonyms you will use.
* Insert a few random 32-byte numbers among the commitments and shuffle them.
* Post the list with the commitments + random crap here.
* Submit vulnerability-adding PRs to the target projects.
* If it gets caught during review, publicly announce here with praise that 
their project caught the PR and reveal the decommitment publicly.
* If not caught during review, privately reveal both the inserted vulnerability 
*and* the review failure via the normal private vulnerability-reporting 
channels.

The extra random numbers mixed in with the commitments create uncertainty 
about whether or not you are done, which makes the as-yet-unrevealed 
vulnerabilities harder to sniff out.
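The commit-and-decoy steps above could be sketched roughly as follows (a 
minimal illustration only; the pseudonym names, decoy count, nonce scheme, and 
choice of SHA-256 are my own assumptions, not part of the proposal):

```python
# Sketch of the commit-plus-decoy scheme: commit to each pseudonym,
# mix in random 32-byte decoys, shuffle, and publish the combined list.
import hashlib
import os
import random

def commit(pseudonym: str) -> tuple[bytes, bytes]:
    """Return (commitment, nonce), where commitment = SHA256(nonce || pseudonym)."""
    nonce = os.urandom(32)  # nonce prevents brute-forcing short pseudonyms
    digest = hashlib.sha256(nonce + pseudonym.encode()).digest()
    return digest, nonce

# Hypothetical throwaway pseudonyms; nonces are kept private until reveal.
pseudonyms = ["throwaway-a", "throwaway-b"]
records = {p: commit(p) for p in pseudonyms}

# Random 32-byte values, indistinguishable from real SHA-256 commitments.
decoys = [os.urandom(32) for _ in range(3)]

# Shuffle so observers cannot tell commitments from decoys, then post
# the hex-encoded list publicly.
published = [c for c, _ in records.values()] + decoys
random.shuffle(published)
print([x.hex() for x in published])

# Decommitment: reveal (pseudonym, nonce); anyone can recompute the hash
# and locate it in the published list. Decoys are simply never revealed.
name = "throwaway-a"
commitment, nonce = records[name]
assert hashlib.sha256(nonce + name.encode()).digest() in published
```

Since the decoys are drawn from the same 32-byte space as the hashes, the 
published list alone never tells observers how many real pseudonyms remain 
unrevealed.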

I think it is important to publicly praise review processes that catch the 
PRs, and to privately correct those that do not.
Review processes **are** code, followed by sapient brains, and this kind of 
testing is still valuable, but just as vulnerabilities in machine-readable code 
require careful, initially-private handling, vulnerabilities in review 
processes (being just another kind of code, readable by much more complicated 
machines) also require careful, initially-private handling.

Basically: treat review process failures the same as code vulnerabilities, 
pressure the maintainers to fix the review process failure, then only reveal it 
later when the maintainers have cleaned up the review process.



Regards,
ZmnSCPxj
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
