First of all, again great to see that you look so carefully at the paper. Your observation is right: if each participant has a distinct b_i as you've sketched, then the duplicate checks can be omitted. And I tend to agree that this is the more natural scheme. In fact, this is where we started: we had a security proof of this variant (though without tweaking worked out) in an earlier unpublished draft of the paper, and only afterwards did we find a scheme with a single b.
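To make the two variants concrete, here is a toy sketch in Python. This is only an illustration, not the paper's scheme: the group is a stand-in (multiplicative group mod a prime), and the names hash_non, ctx, and the exact hash inputs are my own illustrative assumptions.

```python
import hashlib

P = 2**255 - 19   # toy prime modulus; NOT a secure group choice
G = 2             # toy generator

def hash_non(*parts):
    # Illustrative stand-in for the nonce hash H_non.
    h = hashlib.sha256(b"".join(repr(p).encode() for p in parts))
    return int.from_bytes(h.digest(), "big")

def agg_single_b(pairs, ctx):
    # Single-b scheme: pairs is a list of (R1_i, R2_i) group elements.
    # The R_{2,i} values double as ephemeral participant identifiers,
    # hence the uniqueness check.
    r2s = [r2 for _, r2 in pairs]
    if len(set(r2s)) != len(r2s):
        raise ValueError("duplicate R_{2,i}: abort session")
    b = hash_non(ctx, r2s)          # one b for all participants
    R1, R2 = 1, 1
    for r1, r2 in pairs:
        R1 = R1 * r1 % P
        R2 = R2 * r2 % P
    # R = R1 + b*R2 (in additive notation): a single exponentiation.
    return R1 * pow(R2, b, P) % P

def agg_per_participant_b(pairs, ctx):
    # Variant with a distinct b_i per participant (the index i stands
    # in for (m_i, P_i) here). No uniqueness check is needed, but the
    # aggregation is now a multi-exponentiation of size n.
    R = 1
    for i, (r1, r2) in enumerate(pairs):
        b_i = hash_non(ctx, i)
        R = R * r1 % P * pow(r2, b_i, P) % P   # n exponentiations
    return R
```

The single-b version pays one exponentiation on the R2 values regardless of n; the per-b_i version pays one per participant, which is exactly the efficiency cost discussed next.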
The reason why we prefer the scheme with a single b is simply efficiency. The current signing protocol needs 3 group exponentiations (i.e., scalar-point multiplications). With separate b values, one of those becomes a multi-exponentiation of size n-1, which is much slower: it needs O(n/log n) time instead of O(1). Another, very minor and optional efficiency benefit is that the coordinator can take care of the computation of the final R2 and send it to the participants. (But that's really minor, because the coordinator needs to send the individual R_{2,i} values anyway.)

Intuitively, we manage to get away with a single b by (ab)using the R_i values as ephemeral identifiers for the participants, which makes it possible to distinguish them even if they happen to have the same public key (or, in other words, if the same participant joins the session more than once). This is why we need the uniqueness check: to make sure that the identifiers are unique. And yes, the uniqueness check looks a bit strange at first glance, but (as the proof shows) there should be nothing wrong with it.

One could argue that the uniqueness check is a potential footgun in practice, because an implementation could omit it by accident and would still "work" and produce signatures. But we don't find this very convincing, because it's possible to create a failing test vector for this case.

We didn't talk about identifying disruptive participants in the paper at all, but one could also argue that the uniqueness check creates a problem if the honest participants want to figure out who disrupted a session: if malicious participant i copies from honest participant j, then how can the others tell which of them to exclude? But if you think about it, that's not a real issue. In any case, identifying disruptive participants will work reliably only if the coordinator is honest, so let's assume this.
And then, if additionally the participants have secure channels to the coordinator, then no malicious participant can steal the R_{2,j} of an honest participant j. So, if the coordinator sees that R_{2,i} = R_{2,j}, the right conclusion is that i and j are colluding and both malicious.

On Mon, 2025-06-16 at 12:35 -0700, waxwing/ AdamISZ wrote:
> So here's my question: why does the signing context, represented by
> "b", in the aggregate R-value, need to be a fixed value across
> signing indices? Clearly if we have one b-value, H-non(ctx), where
> ctx is ((P1, m1), (P2, m2),..) [1], then it is easy to sum all the
> R1,i = R1 and then sum all the R2,i values = R2 and then R = R1 +
> bR2, exploiting the linearity. But why do we have to? If coefficient
> b were different per participant, i.e. b_i = H(ctx, m_i, P_i) then it
> makes that sum "harder" but still trivial for all participants to
> create/calculate. All participants can still agree on the correct
> aggregate "R" before making their second stage output s_i.
>
> If I am right that that is possible, then the gain is clear (I
> claim!): the attacks previously described, involving "attacker uses
> same key with different message" fail. The first thing I'd note is
> that the basic thwarting of ROS/Wagner style attacks still exists,
> because the b_i values still include the whole context, meaning
> grinding your nonce doesn't allow you to control the victim's
> effective nonce. But because in this case, you cannot create
> scenarios like in Appendix B, i.e. in the notation there:
> F(X1, m1(0), out1, ctx) = F(X1, m1(1), out1, ctx) is no longer true
> because b no longer only depends on global ctx, but also on m1 (b_1 =
> Hnon(ctx, m1, P1) is my proposal),
>
> then the "Property 3" does not apply and so (maybe? I haven't thought
> it through properly) the duplication checks as currently described,
> would not be needed.
>
> I feel like this alternate version is more intuitive: I see it as
> analogous to (though not the same as) Fiat-Shamir hashing, where the
> main idea is to fix the actual context of the proof; but the context
> of *my* partial signature for this aggregate, is not only ((P1, m1),
> (P2, m2),..) but also my particular entry in that list.

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscr...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/437237c5f0debe352aafd0a184d6266c14d6e142.camel%40timruffing.de.