Re: [openssl-project] dropping out
>> Another contributing factor is lack of opportunities to pursue
>> so to say "fundamental" goals, formal validation of assembly code being
>> one example.
>
> Formal validation of the assembly code is something I would
> actually like to see, and even talked with some people about it.
> But I don't have the time to actually get this done.

To clarify: as far as *this* specific remark goes, it's not that I feel constrained by policies or anything of the sort. It's rather about a "chronic" failure to find the opportunity to put other things aside and devote a concentrated effort. The remark belongs in the "what the future holds" part. (As most should know by now, I'm not on good terms with words.)

___
openssl-project mailing list
openssl-project@openssl.org
https://mta.openssl.org/mailman/listinfo/openssl-project
[openssl-project] speaking of versioning schemes
As once said, what matters at the end of the day is whether or not a versioning scheme is *accepted* by "downstream". And by "accepted" I mean to the degree that ROBOT wouldn't have happened to Facebook. In other words, the first question should not be how to arrange digits and letters, but rather how to mitigate the apparent tendency to deviate from "upstream".
[openssl-project] dropping out
I'm dropping out in a week's time. It's not some single recent thing, but a combination. I don't see that my perception of the project spirit aligns with the current state of affairs. This goes for both technical aspects and policies (the lack of them, and/or their application). One can probably say that these are misguided expectations, but that's where I draw the line for myself. Another contributing factor is the lack of opportunities to pursue so-to-say "fundamental" goals, formal validation of assembly code being one example.

Just in case: I don't know what the future holds exactly, but it's not the end of "crypto programming" for me; it's just that I won't be lending my expertise directly/solely to this project on a general basis.

As for my e-mail address, I'd like to argue that it would be appropriate if it remains working for a while after, at the very least to see some of the ongoing threads through. I promise not to use it in any context other than purely technical ones.
Re: [openssl-project] Please freeze the repo
It would be appropriate to merge https://github.com/openssl/openssl/pull/6916 (1.0.2; the commit message would need adjustment for the branch it's merged from) and https://github.com/openssl/openssl/pull/6596 (1.1.0; it was marked "pending for 2nd" but it's apparently ready).
Re: [openssl-project] Forthcoming OpenSSL releases
>>>> Forthcoming OpenSSL releases
>>>>
>>>> I have some RSA hardening fixes in pipeline...
>>>
>>> Do you suggest we wait with a release on that, or can we just put
>>> it in the next release?
>>
>> I should be able to pull it off before the release. What I'm saying is
>> that it would probably be appropriate to review them as they appear.
>
> Is it #6915 you're talking about?

Updates to blinding are coming shortly.
Re: [openssl-project] Removal of NULL checks
> Should not be changed period. Even across major release boundaries.
> This is not an ABI compatibility issue, it is a source compatibility
> issue, and should be avoided all the time. If we want to write a *new*
> function that skips the NULL checks it gets a new name.

While I admittedly crossed a line, I would actually argue against drawing the line as categorically as above. In the sense that NULL checks *can* be excessive, and the thing is that such checks can jeopardize application security. In such a case it wouldn't be totally inappropriate to remove them, the excessive checks, *without* renaming the function as suggested above. It would be kind of equivalent to exchanging one undefined-behaviour pattern for another, or rather for a more "honest" undefined-behaviour pattern :-)

As for jeopardizing application security, take this case as an example. What worked so far? Create a stack without paying attention to the result[!], dump things into the stack even if it's NULL, pull nothing out if it's NULL. Now imagine that this stack was some kind of deny list. If the entry producer didn't pay attention to errors when creating the stack and putting things into it, the consumer would be led to believe that there is nothing to deny, and would let through the requests that are supposed to be denied.

This is a kind of not-quite-right example in the context, because it implies that *all* NULL checks should have been removed, thus making *every* caller pay attention, including the consumer. Whereas I left the checks in some of the consume operations, reckoning that at least those who put things into a NULL stack will have to pay attention and will cancel the whole operation, hopefully without getting to the pull-nothing step. So those entry producers who are not *currently* paying attention would actually be caught. Which would actually be a positive thing!

Oh! The triggering factor was quite a number of unchecked pushes in apps. Yes, you can sense a contradiction, in the sense that one can't "fix" those unchecked pushes with the removal of the NULL checks in question. But they were just the thing that made me look at stack.c and wonder about the above questions.
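The deny-list scenario above can be sketched with a toy stack — a hypothetical minimal model, not OpenSSL's actual stack.c — whose operations, like the NULL checks under discussion, silently tolerate a stack that was never created:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical minimal stack, NOT OpenSSL's stack.c.  push() and
 * contains() both carry the "forgiving" NULL checks discussed above.
 */
typedef struct {
    const char **elems;
    int num;
} STACK;

static STACK *stack_new(void)
{
    return calloc(1, sizeof(STACK)); /* may return NULL on failure */
}

static int stack_push(STACK *st, const char *val)
{
    const char **p;

    if (st == NULL)
        return 0;              /* NULL check: failure, but easy to ignore */
    p = realloc(st->elems, (st->num + 1) * sizeof(*p));
    if (p == NULL)
        return 0;
    st->elems = p;
    st->elems[st->num++] = val;
    return 1;
}

static int stack_contains(const STACK *st, const char *val)
{
    int i;

    if (st == NULL)
        return 0;              /* a NULL stack looks empty to consumers */
    for (i = 0; i < st->num; i++)
        if (strcmp(st->elems[i], val) == 0)
            return 1;
    return 0;
}
```

With this shape, a producer that ignores a failed stack_new() "pushes" into NULL without anything blowing up, and the consumer sees an apparently empty deny list — exactly the failure mode described above.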
Re: [openssl-project] Forthcoming OpenSSL releases
>>> Forthcoming OpenSSL releases
>>>
>> I have some RSA hardening fixes in pipeline...
>
> Do you have PR numbers for them?

"In pipeline" kind of means "not yet [but I'll intensify the work to put them out]". In other words, it's a pre-heads-up thing...
Re: [openssl-project] Forthcoming OpenSSL releases
> Forthcoming OpenSSL releases

I have some RSA hardening fixes in pipeline...
Re: [openssl-project] To use or not use the iconv API, and to use or not use other libraries
> Regarding general use of other libraries, please think carefully before
> voting, 'cause this *is* tricky. If you have a look, you will see that we
> *currently* depend on certain standard libraries, such as, for example,
> libdl. And perhaps we should also mention the pile of libraries used with
> windows.
>
> In my mind, this makes that more general vote ridiculous,

It certainly is. The vote should not be about such a general principle; it's way too general and, most importantly, *too natural* to vote about. In the sense that there are things that can't be a voting matter. For example, "humans need air to breathe" is not, and neither is the assertion that OpenSSL needs libraries. And it's all but utterly natural to *minimize* dependencies and make *minimum* assumptions. This means that one can only vote on specifics, i.e. do we want a specific dependency (at a specific level), or do we make a specific assumption (at a specific level). That's why the ballot should not be formulated "such as iconv", but *be* about iconv.

As for iconv per se. One has to recognize that it was standardized by POSIX, and one can expect it to be available on a compliant system. The trick is to reliably identify the said systems, possibly by version, not just by being X. So it's just as important to have a fall-back to handle the exclusions. As there will be exclusions; I can even name one already: Android.

And once again, I argue that there is even a timing issue. It's not the right moment to make too broad assumptions about iconv availability while arguing that one is prepared to deal with the issues. One can only argue that it might be appropriate to enable it in cases one can actually verify and confirm.
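The fall-back idea can be sketched as follows (illustrative only: the function names are made up, and ISO-8859-1 is chosen as the source charset precisely because it is the one case where a hand-rolled fall-back is trivial):

```c
#include <assert.h>
#include <iconv.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch: ISO-8859-1 -> UTF-8 via POSIX iconv(3), next to a
 * hand-rolled fall-back for systems where iconv can't be assumed
 * (Android is named above as one exclusion).  Both return the number
 * of output bytes, or -1 on error.
 */
static long latin1_to_utf8_iconv(const char *in, size_t inlen,
                                 char *out, size_t outlen)
{
    iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");
    char *inp = (char *)in, *outp = out;
    size_t inleft = inlen, outleft = outlen, ret;

    if (cd == (iconv_t)-1)
        return -1;              /* this is where the fall-back kicks in */
    ret = iconv(cd, &inp, &inleft, &outp, &outleft);
    iconv_close(cd);
    return ret == (size_t)-1 ? -1 : (long)(outp - out);
}

static long latin1_to_utf8_fallback(const char *in, size_t inlen,
                                    char *out, size_t outlen)
{
    size_t i, o = 0;

    for (i = 0; i < inlen; i++) {
        unsigned char b = (unsigned char)in[i];

        if (b < 0x80) {
            if (o + 1 > outlen)
                return -1;
            out[o++] = (char)b;
        } else {                /* U+0080..U+00FF: two UTF-8 bytes */
            if (o + 2 > outlen)
                return -1;
            out[o++] = (char)(0xC0 | (b >> 6));
            out[o++] = (char)(0x80 | (b & 0x3F));
        }
    }
    return (long)o;
}
```

For any charset beyond Latin-1 the fall-back stops being trivial, which is exactly why the availability question matters.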
Re: [openssl-project] Help deciding on PR 6341 (facilitate reading PKCS#12 objects in OSSL_STORE)
> I think that the gist of the difference of opinion is whether it's OK
> to use locale dependent functions such as mbstowcs in libcrypto or
> not.
>
> The main arguments against allowing such functions in libcrypto is
> that we should push applications to run with an UTF-8 input method
> (whether that's provided by the terminal driver, or the process
> holding the terminal, or the application itself...) rather than have
> them call setlocale() with appropriate arguments (which is needed for
> mbstowcs to work right).

The assertion is rather that we can't/shouldn't rely on the application to call setlocale, and with an argument suitable for the specific purpose [of accessing PKCS#12 in this case]. And since we can't do that, we would be better off not calling mbstowcs, because it adds a variable we have no control over. Given some specific input, it would be more honest/appropriate to return an error to the application than to make the outcome conditional on whether or not the application called setlocale, and with which argument. One can also view it as follows: all applications get exactly the same treatment. Pushing applications and users toward a UTF-8 input method is merely a consequence of the suggestion to give all applications the same treatment; it's not an alternative by itself.
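The "variable we have no control over" is easy to demonstrate with plain ISO C (a standalone sketch, not OpenSSL code): the very same byte string counts as a different number of characters, or as an outright error, depending on what the application did with setlocale():

```c
#include <assert.h>
#include <locale.h>
#include <stdlib.h>

/*
 * Counts the characters in a multibyte string using mbstowcs(3).
 * The result depends entirely on the locale the *application*
 * selected via setlocale(), which is exactly the variable a library
 * like libcrypto cannot control.
 */
static size_t count_wide_chars(const char *s)
{
    return mbstowcs(NULL, s, 0); /* (size_t)-1 on an invalid sequence */
}
```

Under the default "C" locale, the UTF-8 bytes for "café" may be rejected outright (glibc) or decoded as four characters (libcs whose "C" locale handles UTF-8) — the inconsistent, caller-dependent outcome the message argues a library should not expose.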
Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)
> ... what does not make any sense to me is what Google is doing. Snatching
> defeat from the jaws of victory by needlessly forcing clients to downgrade to
> TLS 1.2. Is there a justification for this?

It can either be a probe just to see if it's reasonable to demand it, or an attempt to establish a precedent that they can refer to, saying "it was always like that, *your* application is broken, not ours."

Also note that, formally speaking, you can't blame them for demanding it. But you can blame them for demanding it wrong. I mean, they shouldn't try to communicate through the OU of a self-signed certificate, but by terminating the connection with a missing_extension alert, should they?
Re: [openssl-project] Potentially bad news on TLS 1.3 compatibility (sans SNI)
> When I link posttls-finger with the OpenSSL 1.1.1 library, I see:

Just in case, for reference: one can reproduce all this with the 1.1.1 s_client. The below case is -starttls smtp -noservername. The "fun" part is that the OU of the self-signed certificate reads "No SNI provided; please fix your client." Another "fun" part is that they don't seem to be concerned with the SNI value; it can be literally whatever.

> posttls-finger: gmail-smtp-in.l.google.com[173.194.66.26]:25
>     CommonName invalid2.invalid
> posttls-finger: certificate verification failed for
>     gmail-smtp-in.l.google.com[173.194.66.26]:25:
>     self-signed certificate
> posttls-finger: gmail-smtp-in.l.google.com[173.194.66.26]:25:
>     subject_CN=invalid2.invalid, issuer_CN=invalid2.invalid
> posttls-finger: Untrusted TLS connection established to
>     gmail-smtp-in.l.google.com[173.194.66.26]:25:
>     TLSv1.2 with cipher ECDHE-RSA-CHACHA20-POLY1305 (256/256 bits)

The below case is -starttls smtp -tls1_2 [with or without servername].

> The same command with OpenSSL 1.1.0 yields (no CAfile/CApath
> so authentication fails where it would typically succeed):
>
> posttls-finger: certificate verification failed for
>     gmail-smtp-in.l.google.com[173.194.66.27]:25:
>     untrusted issuer /OU=GlobalSign Root CA - R2/O=GlobalSign/CN=GlobalSign
> posttls-finger: gmail-smtp-in.l.google.com[173.194.66.27]:25:
>     subject_CN=gmail-smtp-in.l.google.com,
>     issuer_CN=Google Internet Authority G3,
> posttls-finger: Untrusted TLS connection established to
>     gmail-smtp-in.l.google.com[173.194.66.27]:25:
>     TLSv1.2 with cipher ECDHE-RSA-CHACHA20-POLY1305 (256/256 bits)
>
> This is a substantial behaviour change from TLS 1.2, and a rather
> poor decision on Google's part IMHO, though I'm eager to learn why
> this might have been a good idea.
>
> In the mean-time, Richard's objection to automatic TLS 1.3 use
> after shared-library upgrade is starting to look more compelling?

The question is what the users' expectation would be.
If we arrange it so that an application compiled with 1.1.0 simply won't negotiate 1.3, would it then be reasonable for users to expect that it will start working if they simply recompile the application? I.e. without changing the application code! But in that case it wouldn't actually help, for example, this case; it would only make things more confusing. I mean, you end up with "all I wanted was 1.3, and now it doesn't work at all". I'd say that it would be more appropriate to make the user ask "I want 1.3 but don't get it, why? It keeps working with 1.2 though." With this in mind, wouldn't it be more appropriate to simply not offer the 1.3 capability if the application didn't provide input for SNI?
Re: [openssl-project] The problem of (implicit) relinking and changed behaviour
> 2. Make TLSv1.2 the absolutely maximum TLS version available for
>    programs linked with libssl 1.1.0. This is what's done in this PR:
>    https://github.com/openssl/openssl/pull/5945
>    This makes sense insofar that it's safe, it works within the known
>    parameters for the library these programs were built for.

Here is a thing. If we assume that 110/test/sslapitest.c is a *representative* example of a 1.1.0 application, then it's not at all given that #5945 would actually solve anything. Indeed, observing the first failing test, a recompiled 110/test/sslapitest.c would still fail, because it makes the assumption that the session id is available right after negotiation, something that doesn't hold true for TLS 1.3. This gives you either a) no 1.1.0 application is safe, recompiled[!] or not; or b) 110/test/sslapitest.c is not a representative example. Well, as far as the first failing test goes, I'd say it's b), which means that it should be adjusted in one way or another, or the failing check omitted. [It's b) because it's unnatural to put a session id you've just got back into the id cache; the test is ultimately synthetic.] This naturally doesn't answer all the questions, but it does show that the above-mentioned premise is somewhat flawed. I mean, "programs were built for [1.1.0]" would still work if recompiled with #5945.

>    It also makes sense if we view TLSv1.3 as new functionality, and
>    new functionality is usually only available to those who
>    explicitly build their programs for the new library version.
>    TLSv1.3 is unusual in this sense because it's at least in great
>    part "under the hood", just not 100% transparently so.

In that case it should rather be #ifdef OPENSSL_ENABLE_TLS1_3 instead of #ifndef OPENSSL_DISABLE_TLS1_3. And we don't want that.

To summarize, the failing tests in 110 should be revisited to see if they are actually representative, before one can consider measures as drastic as #5945.
[openssl-project] SM2
Hi,

I'd like to hear *more* opinions in light of the recent comments to https://github.com/openssl/openssl/pull/4793. (Strangely enough, I get "This page is taking way too long to load" if I attempt to access it when I'm logged on[!?]. But I have no problems opening requests from the middle, long closed.)

As for my opinion, I found myself objecting to the way SM2 is hammered into ec_pmeth.c, and that's regardless of the concerns raised in 4793, which merely reinforced the opinion. The concern is that we risk being stuck with maintaining quirky behaviour for a long time. It should really be up to the application to choose the scheme (as suggested in 4793), or the SM2 methods should be parameterizable. This means that I'd like to raise a voice in favour of removing the hacks. If it means missing the release, I'd argue that it's better to miss it than to get stuck with something that would be problematic to support.
Re: [openssl-project] FW: April Crypto Bulletin from Cryptosense
> This is one reason why keeping around old assembly code can have a cost. :(
>
> https://github.com/openssl/openssl/pull/5320

There is nothing I can add to what I've already said. To quote myself: "None of what I say means that everything *has to* be kept, but as already said, some of them serve a meaningful purpose..." Well, I also said that "I'm *not* saying that bit-rot is not a concern, only that it's not really assembly-specific."

And I can probably add something here, in addition to the already-mentioned example of legacy code relying on formally undefined or implementation-specific behaviour: it's not actually that uncommon that *new* C code is committed[!!!] "bit-rotten". So one can *just as well* say that supporting another operating system has a cost, and so does using another compiler... Why not get "angry" about that? What's the difference, really? The relevant question is what's more expensive: supporting multiple compilers? Multiple OSes? Multiple assembly implementations?

To give a "tangible" example in the context of the forwarded message [which mentions PA-RISC assembly code]: how long did it take me to figure out what was wrong and verify that the problem was resolved? A couple of hours (mostly because old systems are slow and it takes time to compile our stuff). How long did it take me to resolve the HP-UX problems triggered by the report on the 20th of March? I'm presumably[!] done by about now...

To summarize, one can make the same argument about multiple things, yet we do them. Why? Because it works to our advantage [directly or indirectly]...
Re: [openssl-project] rule-based building attributes
> appro> How about this. We have touched this when discussing windows-makefile.
> appro> I mean when I called it VC-specific, you disagreed, and I said that
> appro> embedding manifest effectively makes it VC-specific. In the context I
> appro> also suggested post-link stage, a command one could *add* to rule. Or
> appro> let's rephrase this and say that you can simply specify multiple
> appro> commands in an override rule. So with this in mind.
> appro>
> appro>     link => [ 'script-that-composes-list-of-objects $@.$$',
> appro>               'link -o $@ -list $@.$$' ]
>
> So zooming in on this particular idea, I was concerned about how those
> object file names would be passed to the script... but then you
> mention manifests, and it dawns on me that there's *nothing* stopping
> us from generating a file with the list of object files when building
> the Makefile. That does make some kind of sense to me.
>
> Or were you thinking something different?

Well, when I said "manifest embedding" I meant specifically the $(MT) step of the link rule. But if you want to call the $@.$$ output from 'script-that-composes-list-of-objects' a manifest, sure, you can do that.

> It would of course be
> possible to have the script you suggest pull data from configdata.pm,
> right?

Or from the Makefile, whichever is simplest.

> But considering we're talking about third parties building
> their own, that raises another concern, that we suddenly give the
> whole %unified_info structure public exposure that I'm not entirely
> sure were ready for.

You mean not ready to commit to not changing it at our discretion? Well, then maybe grep-ing the Makefile for libcrypto.a: would be more appropriate...
Re: [openssl-project] rule-based building attributes (Re: cppflags, cflags and ldflags)
> appro> > Another side thing that I've been thinking of for quite a while, and I
> appro> > think you may have argued for even though I feel a bit unsure, and
> appro> > that's to support command line attributes as an alternative to that
> appro> > increasing amount of specialised attributes, so something like this:
> appro> >
> appro> >     link => '$(CCLD) $(LDFLAGS) -o $@ ...',
> appro>
> appro> You've mentioned once that it would take a special language to do that
> appro> something, and I recall myself thinking "why not just use make syntax".
> appro> I'm unsure if I actually said this, but I did think the thought. So that
> appro> I for one wouldn't actually be opposed to something like this. It's just
> appro> a way to specify "override" rule (and on per-toolchain basis:-).
> appro> However! It all depends on whether or not straight make syntax would
> appro> actually be expressive enough in the context. If it is, then it would be
> appro> totally appropriate. But if not, then ... I'm not so sure. Maybe[!]
> appro> if-else in makefile template could turn out to be more appropriate.
> appro> (Once again, each clause in if-else is isolated and complexity is
> appro> linearly proportional to number of clauses in if-else, which is viewed
> appro> as advantage.)
>
> Straight make syntax will probably not cut it. How would you pass a
> list of object file names to something that might have a limited
> command line length, for example?

These are actually two different problems, aren't they? I mean, a) how do you pass, and b) what if there is some limitation.

As for the former. Even though it spells "use make syntax", it would still rather be "make-*like* syntax". In other words, it doesn't mean that you would be able to just copy the referred line into the makefile; it would still have to be processed somehow first.
Well, in the case at hand it might be possible to drop in a rule with $+ if you know you'll use GNU make, but others don't understand it, so $+ would have to be processed. (And even in the GNU make case, to keep things unified.) So you end up using mostly-make-like syntax to formulate an override rule, but you do some replacements before emitting the actual rule into the makefile. Naturally, one might have to make up more internally recognized make-looking idioms, in addition to $+ that is...

> With nmake, we do have a feature we
> can use, the inline file text thing, but others may not be so lucky
> (the nonstop case I mentioned is such a case), and one might need to
> do a bit of pre-processing with perl-level variables (an idea could be
> to have some well defined hooks that users can define values for, such
> as function names... again, this is a draft of an idea). For
> example, imagine that we have something that dumps object file names
> into a text file and then expects that file name to appear in the
> linking command line (for example, and typically in our case), how do
> we pass the text file name to the command line, and where in the
> command line?

Do you mean that you have something like 'libcrypto.so: all-the-object-files' and want a rule that would put all-the-object-files into a text file and invoke the linker with that file name as an argument? And putting them into the text file may not exceed a length limit either, so that you can't say, for example, 'echo $+ > o-files.$$.' either...

> So no, a straight makefile rule for a config attribute value isn't
> going to be good enough.

How about this. We have touched on this when discussing windows-makefile. I mean, when I called it VC-specific, you disagreed, and I said that embedding the manifest effectively makes it VC-specific. In that context I also suggested a post-link stage, a command one could *add* to the rule. Or let's rephrase this and say that you can simply specify multiple commands in an override rule. So with this in mind.
    link => [ 'script-that-composes-list-of-objects $@.$$',
              'link -o $@ -list $@.$$' ]

Well, in this particular case you'd probably be more interested in specifying a multi-line command instead of multiple commands, as '(trap "rm -f $@.*" INT 0; script ... && link ...)', because you might want to clean up the temporary file even if make is interrupted. But multi-line can be useful if different lines reside in different templates. I mean, the base template can say "link" and the target can add a post-link command.

Of course, I'm not suggesting taking this for immediate implementation; these are just ideas to ponder. They might turn out unusable (or give a spark for some other idea :-)
Re: [openssl-project] cppflags, cflags and ldflags
>> Note: this mail follows the discussion on github started here:
>> https://github.com/openssl/openssl/pull/5742#discussion_r176919954

In the light of the new evidence presented in https://github.com/openssl/openssl/pull/5745, one can argue in favour of splitting the -pthread flag into cppflags and ldflags. Yes, but I'd still be reluctant to formulate it like that. As implied, I'm kind of arguing against too many options/variables/joints, and calling rather for simplification than increased complexity. I feel that there are more efficient ways to cover the corner/odd-ball places...
Re: [openssl-project] to fully overlap or not to
>>> I'd like to request more opinions on
>>> https://github.com/openssl/openssl/pull/5427. Key dispute question is
>>> whether or not following fragment should work
>>>
>>>     unsigned char *inp = buf, *out = buf;
>>>
>>>     for (i = 0; i < sizeof(buf); i++) {
>>>         EVP_EncryptUpdate(ctx, out, &outl, inp++, 1);
>>>         out += outl;
>>>     }
>>
>> This should work.
>
> On second thought, perhaps not.

It seems that a double clarification is appropriate. As it stands now, it *does* work. So the question is rather whether it should keep working [and, if not, whether it's appropriate that it stops working in a minor release].
[openssl-project] to fully overlap or not to
Hi,

I'd like to request more opinions on https://github.com/openssl/openssl/pull/5427. The key dispute question is whether or not the following fragment should work:

    unsigned char *inp = buf, *out = buf;

    for (i = 0; i < sizeof(buf); i++) {
        EVP_EncryptUpdate(ctx, out, &outl, inp++, 1);
        out += outl;
    }

[Just in case, the corresponding corner case is effectively exercised in evp_test.] Bernd argues that this is unreasonable, which I counter with the assertion that it doesn't matter how unreasonable this snippet is: since we support in-place processing, it's reasonable to expect stream-cipher-like semantics as above to work even with block ciphers. As it stands now, the suggested modified code would accept in-place calls only on block boundaries. A collateral question is also whether or not it would be appropriate to make this kind of change in a minor release.

Just in case, to give a bit more general background. Bernd has shown that the current checks are not sufficient to catch all corner cases of partially overlapping buffers. It was discussed that partial overlap is not the same as full overlap, a.k.a. in-place processing, with the latter being in fact supported. And even though the above snippet can be formally viewed as a stretch, it's argued that it does exhibit a "legitimate intention" that deserves to be recognized and supported. At least it was so far, in the sense that it's not exactly a coincidence that it currently works. [The fact that other corner cases are not recognized as invalid is of course a bug, but there is no contradiction, as fixing the bug doesn't have to mean that this specific corner case is no longer recognized.]

Thanks in advance.
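Why the byte-at-a-time in-place loop currently works can be shown with a toy model (my own sketch, not OpenSSL's implementation): a buffering update copies each input byte into a context-local buffer before writing any output, so fully overlapping in/out pointers are harmless even mid-block:

```c
#include <assert.h>
#include <string.h>

/*
 * Toy model of the disputed semantics, NOT OpenSSL code: an "update"
 * for a 4-byte-block "cipher" (just XOR with a repeating key, so
 * block boundaries can be modelled without a real cipher).  Input is
 * buffered until a whole block is available -- exactly the situation
 * where the byte-at-a-time in-place loop makes out and inp overlap
 * somewhere inside a block.
 */
#define BLOCK 4

typedef struct {
    unsigned char key[BLOCK];
    unsigned char buf[BLOCK];
    size_t buffered;
} TOY_CTX;

static void toy_init(TOY_CTX *ctx, const unsigned char key[BLOCK])
{
    memcpy(ctx->key, key, BLOCK);
    ctx->buffered = 0;
}

static void toy_update(TOY_CTX *ctx, unsigned char *out, int *outl,
                       const unsigned char *in, int inl)
{
    int i, produced = 0;

    for (i = 0; i < inl; i++) {
        /* input is consumed into ctx->buf BEFORE anything is written
         * to out, which is what makes full overlap safe */
        ctx->buf[ctx->buffered++] = in[i];
        if (ctx->buffered == BLOCK) {
            size_t j;

            for (j = 0; j < BLOCK; j++)
                out[produced + j] = ctx->buf[j] ^ ctx->key[j];
            produced += BLOCK;
            ctx->buffered = 0;
        }
    }
    *outl = produced;
}
```

In the disputed loop, out only advances by whole blocks while inp advances byte by byte, so out never overtakes unread input — which is why it's "not exactly a coincidence" that the pattern works.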
Re: [openssl-project] Freezing the repo soon
> Just a reminder to everyone that we are doing the alpha2 release
> tomorrow, so we will be freezing the repo soon (in about an hour or so).

master is red at https://travis-ci.org/openssl/openssl/branches since https://github.com/openssl/openssl/commit/b38ede8043439d99a3c6c174f17b91875cce66ac. It appears as a ubsan failure...
[openssl-project] x25519-x86_64
Hi,

A word of clarification about the recently introduced x25519-x86_64 module, specifically in the context of an apparently common shift toward auto-generated C such as fiat-crypto or hacl64.

As far as I'm concerned, the problem with the just-mentioned auto-generated C implementations is that, while being verified by themselves, they rely on an effectively unverified compiler and its [unverifiable] ability to perform advanced optimizations. In other words, it's problematic on two levels: a) can output from an unvalidated compiler count as validated? b) will compilers get performance right across a range of platforms? And as one can imagine, I imply that the answer to both questions is "not really".

As for b). Well, current [x86_64] compilers have been observed to generate decent code that performs virtually on par with assembly on contemporary high-end Intel processors, at least for the 2^51 radix customarily chosen for C. Indeed, if you consider the improvement-coefficients table in x25519-x86_64.pl, you'll notice coefficients as low as 11%, 13%, 9% in comparison to code generated by gcc-5.4. Normally that's not a large enough improvement [to subject yourself to the misery of assembly programming], right? But assembly does shine on non-Sandy Bridge and non-Haswell processors, most notably in the ADCX/ADOX cases.

As for a). The intention is to work with the group at Academia Sinica to formally verify the implementation. On a related note, they did verify the amd64-51(*) implementation that x25519-x86_64 refers to, so it wouldn't be unprecedented.

Cheers.

(*) Just in case, for reference: being hardly readable, amd64-51 is hard to integrate, because it targets specifically the Unix ABI and is not position-independent. It also performs sub-optimally on non-"Intel Core family" processors, nor does it solve the ADCX/ADOX problem. These are arguments against taking in amd64-51, that is.
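For reference, the 2^51 radix mentioned above means representing a 255-bit field element mod 2^255-19 as five 64-bit limbs of 51 bits each; the 13 spare bits per limb are what allow carry-free limbwise additions. A minimal illustrative sketch (my own, not the amd64-51 or x25519-x86_64 code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A field element mod 2^255-19 as five limbs of 51 bits held in
 * 64-bit words (the "2^51 radix" referred to above).
 */
typedef struct { uint64_t l[5]; } fe51;

#define MASK51 ((1ULL << 51) - 1)

/* Carry-free limbwise add: the 13 bits of headroom per limb mean no
 * overflow as long as inputs are reduced (limbs below 2^51). */
static void fe51_add(fe51 *r, const fe51 *a, const fe51 *b)
{
    int i;

    for (i = 0; i < 5; i++)
        r->l[i] = a->l[i] + b->l[i];
}

/* One carry pass: bring limbs back below 2^51, folding the carry out
 * of the top limb into limb 0, since 2^255 = 19 (mod 2^255-19). */
static void fe51_carry(fe51 *r)
{
    uint64_t c = 0;
    int i;

    for (i = 0; i < 5; i++) {
        uint64_t t = r->l[i] + c;

        r->l[i] = t & MASK51;
        c = t >> 51;
    }
    r->l[0] += 19 * c;
}
```

Deferring carries this way is precisely the pattern that maps well to straight-line C, whereas the assembly wins come from schoolbook 64x64 multiplications with carry chains (ADCX/ADOX), a shape compilers handle less predictably.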
Re: [openssl-project] OS/X builds failing
For the record: I don't generally appreciate fast commits, but I feel about 100 times more strongly when it comes to public headers [naturally, in a supported or minor release], and I can't qualify a 19-minute turnaround for a merge request as appropriate.
Re: [openssl-project] OS/X builds failing
>> apps/enc.c:568:54: error: format specifies type 'uintmax_t' (aka
>> 'unsigned long') but the argument has type 'uint64_t' (aka 'unsigned
>> long long') [-Werror,-Wformat]
>>         BIO_printf(bio_err, "bytes written: %8ju\n", BIO_number_written(out));
>>                                              ^~~
>>                                              %8llu
>> 2 errors generated.
>>
>> The thing is though this is *our* BIO_printf, and we do seem to expect a
>> uint64_t for a "%ju". So I'm not sure how to fix that.
>
> I'd suggest -Wno-format. With rationale that a) Xcode is arguably
> somewhat unreasonable, b) we can rely on Linux builds to catch warnings
> that actually matter.

Another possibility *could* be suppressing the corresponding __attribute__ in bio.h, but it's a public header, so the question would be whether the change can be justified in a minor release. Just in case: suppress specifically for Xcode [which would cover even iOS].
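For reference, the clash is between %ju, which is specified for uintmax_t, and uint64_t; on LP64 Darwin those are 'unsigned long' vs 'unsigned long long' — identical width, but distinct types, so -Wformat complains. A standalone sketch using plain snprintf (not OpenSSL's BIO_printf) of the cast that satisfies the checker at the call site, a third option beyond -Wno-format and touching bio.h:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * %ju is defined for uintmax_t, so casting the uint64_t argument to
 * uintmax_t is the standard-conforming way to match the specifier,
 * regardless of which "unsigned long (long)" each type maps to.
 */
static int format_bytes_written(char *buf, size_t n, uint64_t count)
{
    return snprintf(buf, n, "bytes written: %8ju", (uintmax_t)count);
}
```

The downside, of course, is that it means touching every affected call site rather than one warning flag.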
Re: [openssl-project] OS/X builds failing
> apps/enc.c:568:54: error: format specifies type 'uintmax_t' (aka
> 'unsigned long') but the argument has type 'uint64_t' (aka 'unsigned
> long long') [-Werror,-Wformat]
>         BIO_printf(bio_err, "bytes written: %8ju\n", BIO_number_written(out));
>                                              ^~~
>                                              %8llu
> 2 errors generated.
>
> The thing is though this is *our* BIO_printf, and we do seem to expect a
> uint64_t for a "%ju". So I'm not sure how to fix that.

I'd suggest -Wno-format. With the rationale that a) Xcode is arguably somewhat unreasonable, and b) we can rely on Linux builds to catch the warnings that actually matter.
Re: [openssl-project] Style guide updates
> Multiline conditionals, such as in an if, should be broken before the
> logical connector and indented an extra tabstop. For example:

One can wonder if it would be appropriate to explicitly say that the preferred way to organize multi-line conditionals is with one condition of the chain per line, even if a pair of them would fit on a line. I mean

    if (this-is-long
          && that
          && even-that)

vs.

    if (this-is-long && that
          && even-that)

with the first example being preferred. And this even if 'this-is-long' is short. [Or should one tolerate the following, provided that 'this' is short enough?

    if (this && that
          && even-that)

] Either way, the suggestion is that *if* you find yourself breaking a condition into multiple lines, you should take it a step further, if not all the way. But it shouldn't preclude things like

    if (this-is-long
          && (that || (even-that && not-the-one))
          && don't-forget-this)

This actually means that one should be able to write the second example as

    if (this-is-long
          && (that && even-that))

but it would imply that the logical relation between 'that' and 'even-that' is strong enough to justify the parentheses. [Also note that in the above examples the additional indentation is just two spaces. That's not a coincidence; it's also a kind of suggestion, for if-specific indentation.]
Re: [openssl-project] Style guide updates
>> - Use size_t for sizes of things
>
> How do you feel about ssize_t?

One has to keep in mind that ssize_t is not part of the C language specification; it's a POSIX thing. The C specification defines ptrdiff_t with [presumably] the desired properties. However, there is a natural ambiguity originating from the fact that size_t customarily "covers" twice as much space. So if you are to rely on the positivity of a signed value, the object has to be small enough; in other words, you would have to perform sanity checks before you do so. So it's not exactly a walk among roses, I mean if one assumes the premise that signed is "easier" to handle. Well, one can make all kinds of arguments about the practicality of such a situation, i.e. what it takes to run into the ptrdiff_t vs. size_t ambiguity, and argue that it never happens. While that would be the case on most systems, there are two cases that are arguably not that impractical: 64-bit VMS, where we have sizeof(size_t)
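The "covers twice as much space" point can be made concrete (a standalone illustration, not project code): on common flat-address-space platforms SIZE_MAX is roughly double PTRDIFF_MAX, so the sanity check mentioned above would look like:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/*
 * Returns 1 if a byte count that is valid as a size_t can also be
 * treated safely as a signed quantity (ptrdiff_t/ssize_t-style) --
 * the check one has to perform before relying on the positivity of
 * a signed size.
 */
static int fits_signed_size(size_t n)
{
    return n <= (size_t)PTRDIFF_MAX;
}
```

This assumes the usual arrangement where PTRDIFF_MAX is about half of SIZE_MAX; the message's point is exactly that the exceptions (such as the truncated VMS example) are what make the signed choice less of a free lunch than it looks.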
Re: [openssl-project] Style guide updates
> What else needs to be updated?

Relax the requirement that argument names in a function definition have to match the function declaration, to permit adding an 'unused_' prefix to unused arguments.
Re: [openssl-project] Style guide updates
> - Don't use else after return?

I'd argue against making it an absolute requirement. To give an example: you don't see a problem with returns in a switch, so why should it be a problem with returns from a switch-like if-else-if chain?
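For illustration, the switch-like if-else-if chain referred to has this shape (a hypothetical classifier, for shape only): every branch returns, and the elses mirror a switch's case structure rather than being dead weight after a return.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical example: each branch returns, and the else-if chain
 * reads like the cases of a switch over ranges -- dropping the elses
 * here would arguably hurt the symmetry rather than help. */
static const char *classify(int c)
{
    if (c >= '0' && c <= '9')
        return "digit";
    else if (c >= 'a' && c <= 'z')
        return "lower";
    else if (c >= 'A' && c <= 'Z')
        return "upper";
    else
        return "other";
}
```

A blanket "no else after return" rule would force the last branch to fall out of the chain, breaking the visual parallel between the mutually exclusive cases.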