Re: Please tackle the Right Thing
On 23/01/2021 04.17, Ángel wrote:
>> control. In that case, the unrelated webserver would happily answer the
>> openpgpkey subdomain request, but simply not find the required directory
>> structure, giving a 404. My proposed solution would give the user a
>> chance to still enjoy the WKD direct method.
>
> That's the point where the fact that a WKD server MUST have a policy
> file becomes useful for a fetching-only client. If it's a real WKD
> server the file shall be there, if it's a 404, that's probably
> meaningless.

Very good point.  So that could be the second definite point for deciding
that the advanced method should be working, without falling back to direct.

> GnuPG first tries to directly fetch the key from the url where it's
> supposed to be. If it's found, it finishes there. If that's a 404, it
> then checks that there is a policy file (and if there's not, the process
> caches in memory that there is no WKD on that place and won't contact
> that server again)
>
> On the other hand, flowcrypt first tries to read the policy file, and
> only after that succeeds, does it go for the public key.

Obviously another case where the draft is not clear enough, as it leads to
the same setup working with some clients, but not with others.  The current
draft has this to say about checking the policy file:

[Section 3.1]
   The server MUST serve a Policy Flags file as specified below.  That
   file is even required if the Web Key Directory Update Protocol is
   not supported.

[Section 4.5]
   A site supporting the Web Key Directory MUST serve this file; it is
   sufficient if that file has a zero length.  Clients may use this
   file to check for Web Key Directory support.

> On this line, a few days ago I suggested changing the draft to require
> fallback to direct if such file is missing (as opposed to considering
> that the openpgpkey subdomain exists just when having an A/AAAA record):
>
> [...]
>
> and over the course of the days, I have only become more convinced that
> this would be a good idea.

I agree it's a nice possibility to explicitly control the fallback cases.
How about this suggested wording to specify client behavior:

[Section 3.1]
   There are two variants on how to form the request URI: The advanced
   and the direct method.  For either method, client implementations
   MUST first request the Policy Flags file at its respective location,
   described below.  Implementations MUST first try the advanced
   method.  If that results in a successful HTTP response (e.g. status
   code 2xx) for the Policy Flags file, it proves the intention to use
   the chosen method, so the client MUST NOT fall back to a different
   method, even when the request for the key itself indicates an error
   (e.g. not found).  If the Policy Flags file is inaccessible, they
   MUST fall back to the direct method.  If the required sub-domain
   exists, but other errors occur during the connection, implementations
   SHOULD output an error message pointing out the failure reason to the
   end user.  Such other errors include, for example, invalid, expired
   or misconfigured TLS certificates and HTTP failure codes (4xx or
   5xx).

[Section 4.5]
   A site supporting the Web Key Directory MUST serve this file; it is
   sufficient if that file has a zero length.  Clients MUST use this
   file to check for Web Key Directory support, before sending requests
   for any actual keys.

Probably still rough around the edges and maybe not quite clear enough, but
it's a starting point to attract comments on the approach.
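To make that wording a bit more tangible, here is a rough Python sketch of
the fetching logic I have in mind.  The fetch() helper and all names are
made up for illustration only, of course not part of the draft:

def discover_key(local_part, hashed_local, domain, fetch):
    """fetch(url) -> (status, body); raises OSError if no connection."""
    bases = [
        # advanced method: dedicated sub-domain, domain repeated in path
        f"https://openpgpkey.{domain}/.well-known/openpgpkey/{domain}",
        # direct method: served from the mail domain itself
        f"https://{domain}/.well-known/openpgpkey",
    ]
    for base in bases:
        try:
            status, _ = fetch(f"{base}/policy")
        except OSError:
            continue  # e.g. sub-domain does not resolve: try next method
        if 200 <= status < 300:
            # Policy file found: this method is clearly intended, so do
            # NOT fall back, even if the key request itself fails later.
            status, body = fetch(f"{base}/hu/{hashed_local}?l={local_part}")
            return body if status == 200 else None
        # Policy file inaccessible (e.g. 404): fall back to next method.
    return None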
By the way, is there something like a repository to send and discuss pull
requests against the WKD draft document?  Or is it just hand-crafted text
edited by the submitter based on suggestions?

Kind regards
André
Re: Please tackle the Right Thing
Hi all,

On 21/01/2021 01.29, Ángel wrote:
>> If that does not conclude with a successful HTTP response (e.g.
>> status code 2xx), they MUST fall back to the direct method. If the
>> required sub-domain exists, but other errors occur during the
>> connection, they SHOULD output an error message pointing out the
>> failure reason to the end user. Such other errors include, for
>> example, invalid, expired or misconfigured TLS certificates and HTTP
>> failure codes (4xx or 5xx).
>
> Suitable return codes for fetching a key would be 200 (for successful
> keys) and 404 (the key is not in the server). In both cases, if it is a
> valid wkd server, the server shouldn't fall back to direct.

Restricting this to only the 200 OK status code would probably be fine.  I
looked at the other 2xx codes and probably no others would apply to WKD.
Not quite sure about 226 IM Used (I'm not familiar with RFC 3229).

I tend to disagree regarding the 404 case though.  As this thread has
shown, there might be legitimate use cases where a WKD user has enough
control over the domain's web server to set up the direct method.  But
there might be a wildcard subdomain entry with a webserver and (valid) TLS
setup, totally unrelated to WKD, which is not under the same user's
control.  In that case, the unrelated webserver would happily answer the
openpgpkey subdomain request, but simply not find the required directory
structure, giving a 404.  My proposed solution would give the user a chance
to still enjoy the WKD direct method.

> You could also have a 304 if the client was refreshing a key. Maybe 201
> if a web-based submission protocol was added in the future.

Agreed, 304 kind of makes sense, although the WKD client first needs to
implement the associated caching / Last-Modified header logic.  Not sure if
that's worth the effort of mentioning it explicitly in the WKD protocol.

Other 2xx codes could be discussed.  201 Created doesn't make much sense
for a GET request, but could also convey that a key was just auto-generated
on the fly, e.g. for opportunistic encryption?  I would understand a 204 No
Content status to mean "yes, this is a WKD server and the requested user is
known.  There is just deliberately no key offered."  In that case stopping
without fallback would be desired.  All of these somehow acknowledge that
the requested .well-known/... resource does make sense to the responding
server.  Hence my proposal for a generous 2xx in the specification.

> I think the main status that would bring such trouble would be 401,
> 403, 5xx, although there could be some exotic cases (e.g. 407).
> Erroring to the user on any status code the client does not know how to
> handle seems the safe procedure.

Agreed, let's not complicate the protocol with hard-to-implement, very
specific error handling rules.

One more change is needed if the above proposal is accepted:

---SNIP--- (page 4, first paragraph in the current draft version 11)
   Sites which do not use the advanced method but employ wildcard DNS
   for their sub-domains MUST make sure that the "openpgpkey" sub-domain
   is not subject to the wildcarding.  This can be done by inserting an
   empty TXT RR for this sub-domain.
---SNIP---

That MUST becomes a SHOULD, in order to avoid traffic by falling back
early.  But with the new fallback cases, it's no longer *required*.
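Just to illustrate the status code discussion above, a client could map the
responses for a key request roughly like this.  The grouping and the
Outcome names are only my interpretation, not draft text:

from enum import Enum

class Outcome(Enum):
    KEY_FOUND = 1      # 200: use the returned key
    NO_KEY_HERE = 2    # 204, or 404 on a confirmed WKD server: stop here
    NOT_MODIFIED = 3   # 304: the cached copy is still current
    ERROR = 4          # 401, 403, 407, 5xx, anything unexpected

def classify(status: int) -> Outcome:
    if status == 200:
        return Outcome.KEY_FOUND
    if status in (204, 404):
        return Outcome.NO_KEY_HERE
    if status == 304:
        return Outcome.NOT_MODIFIED
    return Outcome.ERROR  # report to the user instead of guessing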
Regarding where this is discussed, I hope Werner will pick up the relevant
pieces for a draft version 12 in order to unify the now differing
implementations.  IIUC, he is the main (and only?) draft author, so before
the IETF gets formally involved, the draft proposal can be iterated easily.

Kind regards
André
Re: Fundraising
Hi Robert,

you are not alone.

On 22/01/2021 03.20, Robert J. Hansen via Gnupg-users wrote:
> I have never understood why people apologize for doing something they
> know is wrong, and then do it anyway.  You could see that starting a new
> thread was appropriate; you know that starting a new thread is easy; you
> apologized for your inappropriate behavior; and then behaved
> inappropriately.  Your apology is not accepted, as it is clearly
> insincere.

Well said.  I didn't want to make even more noise about this, but that's
just what I was thinking.

Kind regards
André
Re: Please tackle the Right Thing
Hi all,

after some more thought I came up with a possible wording to clarify the
fallback behavior.  This assumes that an opportunistic approach is
preferred, so whether the direct method gets used should not depend solely
on the existence of openpgpkey as a SRV or other record.  Here goes:

---SNIP--- (page 3, second paragraph in the current draft version 11)
   There are two variants on how to form the request URI: The advanced
   and the direct method.  Implementations MUST first try the advanced
   method.  If that does not conclude with a successful HTTP response
   (e.g. status code 2xx), they MUST fall back to the direct method.
   If the required sub-domain exists, but other errors occur during the
   connection, they SHOULD output an error message pointing out the
   failure reason to the end user.  Such other errors include, for
   example, invalid, expired or misconfigured TLS certificates and HTTP
   failure codes (4xx or 5xx).
---SNIP---

The last "SHOULD" clause would allow for Sequoia's current behavior of
silently switching over, but shows what the Right Way would encompass.
Regarding GnuPG, the second "MUST" clause requires a change to fall back
after later connection errors.  I think this logic still holds just in case
SRV records are to be used again.

So what do you think?  I'm not subscribed to any IETF mailing lists, but
feel free to propose this in the relevant circles.  I hereby renounce my
rights on the modified text :-)

Kind regards
André
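For reference, this is how I read the two request URI variants from draft
version 11, as a small Python sketch: SHA-1 over the lower-cased local
part, z-base-32 encoded.  The helper names are mine and the query parameter
encoding is simplified; treat it as an illustration, not a reference
implementation:

import hashlib
from urllib.parse import quote

ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(digest: bytes) -> str:
    # Encode MSB-first in 5-bit groups; a SHA-1 digest is exactly 160
    # bits, so no padding is needed here.
    number = int.from_bytes(digest, "big")
    return "".join(ZBASE32[(number >> shift) & 31]
                   for shift in range(len(digest) * 8 - 5, -1, -5))

def wkd_uris(local_part: str, domain: str) -> dict:
    hu = zbase32(hashlib.sha1(local_part.lower().encode("utf-8")).digest())
    tail = f"hu/{hu}?l={quote(local_part)}"
    return {
        "advanced": f"https://openpgpkey.{domain}/.well-known/openpgpkey/{domain}/{tail}",
        "direct": f"https://{domain}/.well-known/openpgpkey/{tail}",
    }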
Re: WKD Checker
Hi Stefan,

On 18/01/2021 17.12, Stefan Claas via Gnupg-users wrote:
> I repeat here once again GitHub has a *valid* SSL cert.

You are right on that point.  Absolutely right, seriously.  It's actually
their web server configuration which is suboptimal.

Those two statements are universally true, while the rest of this thread
was only applicable to a specific context :-)

Good night.
André
Re: WKD proper behavior on fetch error
Hi Neal,

On 18/01/2021 10.14, Neal H. Walfield wrote:
> First, I don't think WKD is a strong authentication method.  It is
> sufficient for doing key discovery for opportunistic encryption (i.e.,
> it's a reasonable guess), but I wouldn't want someone to rely on it to
> protect them from an active adversary, or phishing attempts.

That's a very good point.  In that regard, the spec should maybe present
the two methods as equal alternatives to be tried in a standardized order
until either one succeeds.  Requiring configuration failures to be reported
is really out of scope for a protocol at that level, and the end user can
hardly do anything about them anyway.  Nevertheless, a big fat warning in
the log / console would be appropriate.

> In short: I understand the motivation for the subdomain.  I understand
> why one should first check there.  But, I think we do our users a
> disservice by not falling back to the direct method in the case of
> DNS errors.

I suppose you mean errors other than DNS?  Whichever method was intended to
be used, the (weak) trust anchor is always the returned DNS response, and
therefore both methods would be equally screwed if a failure can be induced
at that level.  Pointing the DNS response for either example.com or
openpgpkey.example.com to a malicious webserver is no different.  Both
would need to be done from e.g. the ACME (Let's Encrypt) verification
server's perspective as well, which is harder than a local network attack.

We need to remember that WKD is only a convenience mechanism for discovery,
not any kind of authentication.  Sending encrypted e-mail to a domain which
was also used to retrieve the encryption public key adds no protection
against MITM, but only transport obscurity.  But that might still be better
than no encryption at all, e.g. to set up an out-of-band key verification.

Kind regards
André
Re: WKD proper behavior on fetch error
On 18/01/2021 00.43, Stefan Claas wrote:
> But what you say I was thinking about as well. My proposal was to include
> in the policy file fingerprint(s) of key(s) and generate an .ots file,
> from opentimestamps.org, from the policy file and put that .ots file
> somewhere.  In the old days it was common, prior starting encrypted comms
> to compare fingerprints over other channels.

If you are coordinating the use of a separate channel to compare
fingerprints, you can also just coordinate where the public keys are to be
downloaded.  As others have pointed out [1], it's even easier to set up
than WKD (no rules to follow).  And if you're not using the whole thing for
e-mail, then you're probably not using an e-mail client with automatic WKD
retrieval.  So there is no benefit in using WKD over making up your own URL
and telling that to your communication partners.

[1]: https://lists.gnupg.org/pipermail/gnupg-users/2021-January/064633.html

> And regarding secure domains, would you consider VPS servers secure
> too for WKD?

I don't know about the servers; my point was about control over the domain.
Whoever can change the DNS records can just have them point to a different
server with their own (malicious) content.  GitHub Pages as a free web
hosting service will certainly not give you the same security guarantees as
a hosting provider where you pay money to administer a domain of your own.

> BTW. I did not received yet your reply for my two other accounts, hence
> the late reply.

Sorry, I don't quite understand.  Would you like a reply to be addressed to
you directly in addition to the mailing list?

Kind regards
André
Re: WKD proper behavior on fetch error
On 17/01/2021 21.39, Juergen Bruckner via Gnupg-users wrote:
> And as far as Sequoia is concerned, Stefan's explanations only confirmed
> that this is software that I definitely don't want to use.
> Software that accepts an invalid digital certificate as correct has no
> place in an environment where security and confidentiality are concerned.
> This is an a b s o l u t e NO-GO.

To be fair, it's not quite that bad.  Sequoia does recognize the invalid
certificate as such, as Neal pointed out.  It just doesn't scream out loud
about it.  Instead it silently goes on to try the direct method, for which
everything is configured correctly in Stefan's setup.

That is not following the current WKD draft correctly, as interpreted by
the majority of those who spoke up, IIRC.  But so far no scenario has been
brought up where it poses an obvious security risk.  It's more like hiding
the problem from an admin who deliberately tries to set up the advanced
method and possibly ends up with some forgotten remains of the direct
method having been used before.

In my opinion, the WKD spec needs clear rules about the cases when to
switch to the direct method.  And making it hinge solely on proper DNS
configuration is perfectly fine.  Having enough control over the domain is
one more prerequisite (besides the CA stuff) which an impostor would need
to get around.  After all, the corresponding web server is trusted to
deliver the correct OpenPGP public key for authenticated communication.

@Stefan, are you aware that in your scheme involving sac001.github.io,
whoever convinces GitHub to give them control over that subdomain can
silently replace those public keys and start a man-in-the-middle attack?
You could not even rely on the TLS layer, because GitHub probably will not
revoke their wildcard certificate just for you.  Hijacking a GitHub Pages
user name seems more likely than taking over a well secured domain hosting
account.

Kind regards
André
Re: WKD proper behavior on fetch error
Hi Stefan,

On 17/01/2021 19.41, Stefan Claas via Gnupg-users wrote:
> Please try to accept that GitHub (and maybe in the future others as well)
> has *no* bad certificate! The only thing which could be considered "bad"
> or at least sub-optimal for a global ML, like this one, is the support in
> form of the GnuPG ecosystem devs.

GitHub's web server, *in your specific use case*, is sending a certificate
proving it is an apple when you're asking for it under the name "orange".
That makes the certificate *invalid* for that connection request, as it
could not be distinguished from a man-in-the-middle attack asking your
browser to "Please try to accept that this apple is an orange".

Don't you find it strange that you are the only one still insisting that
it's valid, when several very knowledgeable people have explained to you in
many different ways why that's simply not true?

And please tone down the GnuPG criticism.  It's your right to dislike the
software or even Werner Koch personally.  But this is not the right place
for anti-publicity or constant personal stabs against people who have
patiently spent a lot of time to help and educate you.  Please try to keep
the discussion productive.

Kind regards
André
Re: WKD proper behavior on fetch error
Am 15. Januar 2021 01:56:04 MEZ schrieb raf via Gnupg-users:
>But of course, you're not asking for that. You're just
>asking for something to work. There must be other ways.
>Accepting invalid certificates might just have been my
>first thought at how to deal with this. But that would
>enable the advanced method to work (in situations where
>it shouldn't). If I remember correctly (possibly not),
>you wanted the direct method to work, and github.io's
>mis-configuration of certificates caused the advanced
>method to be attempted and fail, before the direct
>method could even be attempted.

Hi raf,

thanks for your perspective on the matter.  I'll try to complete your
summary.  The DNS wildcard entry for *.example.github.io leads to the
advanced method being tried.  We can't change that entry, and therefore,
with the current protocol draft, it makes no sense to forcefully insist on
using the direct method.  It's easy to set up the advanced method there.
But GitHub uses an invalid TLS certificate for
openpgpkey.example.github.io.  That's what needs fixing, and it is also out
of our control.

So basically Stefan's request is to change the protocol to work around a
misconfiguration, because both DNS and the TLS certificate are controlled
by a company that offers the service totally unrelated to WKD.  Such a
workaround could hurt the ecosystem because it may hide a misconfiguration
in setups where the operator does have control over these things and just
needs to notice.

>OK. I just had a look at https://wiki.gnupg.org/WKD and
>it doesn't refer to "advanced" or "direct" methods. It
>seems to consider the "direct" method as the main
>method, and the "advanced" method as a "Stopgap method"
>which is "Not recommended - but a temporary
>workaround". So having an additional mechanism to
>disable the "advanced" method sounds reasonable. Or
>maybe the wiki page needs to be updated(?).

Sorry, you just misread that part.  The stopgap solution is to use a server
operated by openpgp.org instead of your own web server.  For that to work,
you must set up the advanced method for WKD in your domain's DNS.  That
method is perfectly fine and in some scenarios even easier to use.

Kind regards
André
Re: WKD & Sequoia
On 14/01/2021 00.06, Stefan Claas wrote:
> Maybe, I don't know, readers here on the ML are asking themselves now why
> do we have two methods, e.g. what is their purpose and what informations
> can one gain from an IMHO very nice WKD checker, Wiktor has created.

Quoting from your own mail: "As you said this is a draft It should
formulated this way IMHO that it allows the greatest flexibility in a
protokoll, to fulfill all use cases, when it comes to WKD."
https://lists.gnupg.org/pipermail/gnupg-users/2021-January/064645.html

Nobody wants to remove any method, as that would reduce flexibility.  The
"advanced method" is not more complicated to set up; it's really just a
matter of preference.

> I think I have explained, at least for an expert like you, my set-up
> for 300baud.de, I would use.

I repeat, it's not clear to me yet.  But let's stop here and discuss that
when you have the basics up and running.

> As soon as time permits I will do this, even if this cost me
> money I can spend for other things. But I gives me then a better
> overview and I can correct myself while thinking my
> set-up would be equally to GitHub's set-up. In case I get stucked I
> would like to ask you for advise. Please note: I will not use the
> advanced method, I like to see if this will work with sequoia-pgp and
> GnuPG.

You don't need to spend money just to prove anything to the ML subscribers.
But when you do try, I offer to help with any problems coming up.  You
should not rule out the advanced method yet.  Depending on your setup, it
might actually be the easier route if wildcard domains are involved.

Kind regards
André
Re: WKD proper behavior on fetch error
Hi Stefan,

On 14/01/2021 08.01, Stefan Claas via Gnupg-users wrote:
> The greatest benefit would have been if the author of WKD, namly Werner
> Koch, had been so kind to explain to us why WKD needs two methods and
> what security implications it has when an application falls back to a
> valid direct-method, instead of people defending him or his
> implementation. :-)

I think Werner would have participated in the discussion already if other
people's explanations had been incorrect.  It's an open standard, and your
focus on one person who happens to be the registered author doesn't help.
If you insist on Werner's personal opinion, then you should maybe contact
him directly instead of the GnuPG-Users list.  Knowing well that he has no
obligation to reply to anyone.

Hopefully my (and others') attempts to explain / defend the WKD
specification were still useful to you.

Kind regards
André
Re: WKD proper behavior on fetch error
Hi Ángel,

thanks for your contribution with a clear focus.

On 14/01/2021 01.47, Ángel wrote:
> Probably the most important part of the rule: "all implementations of
> WKD should behave in the same way". I don't mind if it was gnupg that
> was changed to behave like sequoia, but given identical conditions,
> ideally all clients (and the draft reading) should produce the same
> result (find key X, an error, etc.).

I agree with that.  And the next draft version SHOULD be very clear about
this to avoid future discussions :-)

> I would recommend to remove the or_else case and fail with an error if
> the advanced method is (supposedly) set up but fails. At least, I think
> there should be a diagnostic e.g. "WKD advanced method configured but
> broken. Connection to openpgpkey.foo.com (1.2.3.4) failed: Bad
> certificate. Trying direct method" although I would prefer a hard
> error.

Definitely, the decision which method to try should be very simple, as the
WKD draft intended.  Only one decision point instead of many paths leading
back to a change of method.

> (Of course, if the user explicitly requested the client/library to only
> use the direct method, ignore certificate errors, etc. it'd be fine to
> do so)

That's an excellent suggestion, giving Sequoia an option to force trying
one method or the other.  I don't know if adding as many command line
switches as gpg has is your cup of tea, but e.g. an environment variable
could be used to really make it a "debugging" type of option.  The great
benefit is that Sequoia can then act as a WKD checker, which should always
examine the intended, but possibly misconfigured, method or even both.

> PPS: Another benefit would be that we could have avoided this long
> thread. :-)

I couldn't resist trying to help Stefan understand where the error lies, so
apologies for my share of the message flood :-)

Kind regards
André
Re: WKD & Sequoia
Am 13. Januar 2021 21:44:07 MEZ schrieb Stefan Claas via Gnupg-users:
> Hi Juergen,
>
> looks like you are a bit upset, like probably others as well.

I hope others don't mind me speaking in their names.  Stefan, we are upset
by you making false accusations about which software does something right
or wrong.  Both programs are reacting differently to an error which lies in
your TLS certificate usage (as several people have proven multiple times).
You're not even to blame for that root cause, because it is not under your
control.  Don't only look at the end result, but please try to understand
that the cause lies deeper than just the spec or the clients you tried.

> I am not aware how their network is set-up and it is not my business,
> but would you not agree that it would be very nice to have a wildcard
> subdomain solution, for all their inhouse offices and employees email
> addresses, while managing themselves key distribution?

It's a little unclear what *exactly* you mean by "a wildcard subdomain
solution".  WKD can work perfectly well with wildcards involved, both on
the DNS and TLS levels.  But such things can be misconfigured, and the spec
even explicitly mentions one possible pitfall including a solution.
Reactions to that kind of misconfiguration should also be standardized in
the spec.  That's all there is to criticize, IMHO.

Kind regards
André
Re: WKD & Sequoia
On 13/01/2021 17.56, Stefan Claas wrote:
>> What are droplets? For which domain did you generate a wildcard
>> certificate? What are the DNS settings on that domain? I could take a
>> look at what responses are returned from the real domain, but need some
>> information at least which OpenPGP user ID should be fetchable over WKD
>> from that domain. If you're even interested in learning about how to
>> set up WKD properly.
>
> Digital Ocean calls their VPS servers droplets and if I would set them up
> as a test rig, I would use three, like '300baud.de', 'foo.300baud.de'
> and 'bar.300baud.de'. In 300baud.de I would set up the WKD directory and
> the SSL cert, with an entry for wildcard subdomains which would cover
> then hosts foo and bar. In the WKD directory I would put then a couple of
> keys with proper sample email addresses from all three hosts.

That's a lot of "ifs".  Right now, 300baud.de has neither an A nor a CNAME
record, so there is no server IP address to contact.  Obviously there is no
wildcard record either, as e.g. www.300baud.de does not resolve.  It's not
clear to me which (sub)domain you would want to use in a fictional OpenPGP
key's user ID.

> With this set-up, without noodling around with records settings at my
> domain service (for ease of use and managing WKD) I stronly assume that
> this set-up follows the direct method and works with sequoia-pgp properly
> and should fail currently with GnuPG and gpg4win, same as it fails with
> GitHub.

It's actually pretty easy.  If the openpgpkey... subdomain resolves
(explicit entry or DNS wildcard), then the advanced method is used.
Otherwise the direct method.  That's the only difference, and it does not
depend on whatever your certificate contains.  Depending on the chosen
method, you need to make sure that there is a web server answering with a
*valid* TLS certificate and with the expected directory structure.

There is no reason at all to "strongly assume" any malfunction or bug in
GnuPG, and I assure you that it's possible to make either method work.  The
only difference for Sequoia is that it ignores your expressed intent to use
the advanced method if something is misconfigured, and falls back to the
direct method.  GnuPG does not do that, because it correctly follows the
specification word by word.

> IIRC the (old) WKD specs did not mention nor did they said that it was
> required to noodle around witth domain settings, regarding the openpgpkey
> folder when setting up records for hosts with a domain service provider.

WKD is still an Internet *Draft*, so it's expected that corner cases like
yours turn up which are not yet 100 % unambiguous.  That's what the
drafting process and public discussion are intended for.  Different
interpretations should not be possible, and you found a case where Sequoia
and GnuPG really do differ.  But the draft still does *not* say one needs
to "noodle around with domain settings".  It points you to the right spice
to add, just in case your domain settings are already a noodle soup.

Kind regards
André
Re: WKD & Sequoia
Hi Stefan,

On 13/01/2021 17.07, Stefan Claas wrote:
> On Wed, Jan 13, 2021 at 10:22 AM André Colomb wrote:
>
>> So the core problem, as with Stefan's case, is the lack of control over
>> the domain's DNS settings. Which the WKD mechanism relies upon to
>> delegate trust to the domain operators.
>
> Hi Andre, I wouldn't formulate it this way. I already mentioned that I am
> able to set up for my 300baud.de domain a couple of droplets and use as
> suggested a valid wildcard subdomain cert, like I explained with the
> bund.de example and I am pretty sure that GnuPG and gpg4win will then
> fail, same as with GitHub.

Sorry, I have no clue what is configured, what works and what should work
regarding WKD on your 300baud.de setup.  Can we please stick to one real
example, not something made up about bund.de?

What are droplets?  For which domain did you generate a wildcard
certificate?  What are the DNS settings on that domain?  I could take a
look at what responses are returned from the real domain, but I need at
least some information about which OpenPGP user ID should be fetchable over
WKD from that domain.  If you're even interested in learning about how to
set up WKD properly.

Kind regards
André
Re: WKD & Sequoia
Hi Neal,

thanks for chiming in with details about your implementation.  It's now
clear that the wrong certificate does in fact trigger an alarm, which is
good.  Only the fall-back behavior differs from GnuPG.  Since Stefan set up
the direct method as well, that leads to his setup actually working with
Sequoia.

On 13/01/2021 10.12, Neal H. Walfield wrote:
> So, the hostname mismatch is correctly identified, and it correctly
> returns an error.
>
> Where sq's behavior diverges from gpg's is that sq then tries the
> direct method, but gpg does not.

I agree that this is the reason why the two implementations differ.

> The I-D says "Only if the required sub-domain does not exist, they
> SHOULD fall back to the direct method."  The text doesn't say: "If
> there is an error, they SHOULD fallback to the direct method unless
> the required sub-domain does not exist, in which case they MUST NOT
> fall back to the direct method."  So, strictly speaking, I don't think
> Sequoia is violating the specification.

The way I read it, "SHOULD fall back" means that an implementation can opt
not to fall back at all.  The sentence begins with "Only if ... does not
exist", so the whole SHOULD statement just doesn't apply if the subdomain
does exist.  Proper behavior when the subdomain exists, but some other
error occurs, is undefined in the spec.  There is certainly room for
improvement / clarification here.

> But, I don't want to be overly pedantic.  Even if the spec were to
> prohibit falling back to the direct method when the subdomain exists,
> what exactly would this prohibition gain us?

The whole point, in my opinion, is to give the domain admin control over
the WKD resolution process.  By blocking the openpgpkey subdomain from
resolving, they can avoid needless HTTPS request handling, as I explained
in detail before:
https://lists.gnupg.org/pipermail/gnupg-users/2021-January/064622.html

> (If we overlooked possible attacks, I'd be happy to hear about them.)

I don't see any big *security* implications either, but I'm really no
expert on that topic.  There seem to be good reasons why the I-D specifies
it exactly as it does, namely to give a way to control which server
automated WKD requests go to, and to keep the load as small as possible.

> On the other hand, implementing this prohibition means that a DNS
> server can prevent its clients from using WKD by forcing all
> openpgpkey subdomains to resolve to 127.1.  That's hard to notice,
> because everything else still appears to work.

One can't really prohibit anyone from *trying* to request a resource over
some HTTPS URL, especially not in a protocol specification.  But the
current WKD draft tries to specify at which point a well-behaved WKD client
makes the decision on the "correct" method.

> Practically speaking, we helped an organziation deploy WKD, and they
> had a similar problem to what Stefan is observing: the admins setup
> the direct method, but it didn't work, because their DNS automatically
> resolved all unknown subdomains to serve a 404.  Unfortunately, they
> had outsourced management of their DNS and couldn't (or didn't know
> how) to disable this behavior.  IIRC, in the end, they spun up an
> https server for openpgpkeys.domain.

So the core problem, as with Stefan's case, is the lack of control over the
domain's DNS settings, which the WKD mechanism relies upon to delegate
trust to the domain operators.  I think that is a legitimate concern
regarding the current WKD Internet Draft.
At least a clarification, and maybe some adjustments to the advised
fall-back behavior, would be in order.  Let's see what Werner has to say
about it and whether there are reasons for the currently specified way that
haven't been made clear yet.

Kind regards
André
Re: WKD for GitHub pages
On 12/01/2021 23.47, Stefan Claas wrote:
> Mmmh ... github.io or GitHub does *not* have issues with wildcard
> domains ...

Here we are back at you denying facts, or maybe just generalizing too much.
As several others have put it already: When "browsing" to
openpgpkey.sac001.github.io with whatever reasonable HTTPS client, you are
directed to an IP address.  The web server at that IP address presents a
certificate for (among others) *.github.io.  This certificate is *invalid*
for the originally entered domain name.  No matter how many times you deny
it.

For sac001.github.io, the certificate is *valid*.  Nobody ever questioned
that.  But it doesn't mean the above is untrue.

Stay safe.
André
Re: WKD for GitHub pages
On 12/01/2021 23.33, Stefan Claas via Gnupg-users wrote:
> On Tue, Jan 12, 2021 at 11:32 PM Remco Rijnders wrote:
>> I don't see the valid SSL certificate you keep on insisting is there.

I totally agree with that.  It's valid for the sac001 subdomain, but
INVALID for anything below that, which GitHub still happily (and
wrongfully) uses it for when asked though.

> Hi, I suggest that you visit my https://sac001.github.io page and see
> what it is all about. (BTW. I am also not affilated in any form with
> Brave ...)

Sorry, that didn't enlighten me at all.  So what is it all about?  What
does it have to do with timestamping?

On a side note: Your sac001 account carries your full name, the same as
used on this mailing list.  You are probably the only one using WKD in this
context on github.io.  So whatever new account you create, people could
very soon find out who is behind that scheme :-)  So, only anonymous in
theory.

Kind regards
André
Re: WKD for GitHub pages
Hi Stefan,

On 12/01/2021 23.16, Stefan Claas wrote:
> Andre, please appoligze that I snipped your reply and that I only
> give a short reply, your explanations of server/client IO was
> welcome.

I'm happy if it helps keeping this discussion constructive and not turning
into a flame war :-)

> I think I do undertsand the American Way Of Life quite a bit,
> meaning that U.S. citizens are more open to privacy related
> things with security software then maybe us old Sauerkrauts,
> so to speak. Therefore I doubt that an IMHO very cool billion
> dollar company like GitHub, according to the reply I got
> from them, would see WKD usage as harm for their service,
> when used by many people. I could be wrong of course (in
> the future)

(Me too, Sauerkraut...)  But you're missing the point.  GitHub has no
business whatsoever with e-mail.  WKD is all about e-mail, and you are
probably among the first to use it for something unrelated to e-mail.  So
they don't give a Koffer about some e-mail-related protocol, except for
maybe implementing it (hopefully sometime) for their own employees /
@github.com e-mail account users.

> Even if there would be no github.io pages available I hope
> that I showed here something interesting for the GnuPG
> community.

Interesting to the community, yes.  But not to the billion dollar company
whose offer has nothing to do with e-mail.  Not interesting in the sense of
"we will invest time and money and risk breaking other users' setups by
changing something in our infrastructure" because of some creative WKD use
case.

By the way, there might be other free web hosting providers you could use
to serve a couple of bytes via HTTPS.  It's very likely that they do not
have the same issues with wildcard domains and invalid TLS certificates as
github.io.

Kind regards
André
Re: WKD for GitHub pages
1. The client queries DNS for the github.io domain, gets back an NS record
   (a name server for the github.io zone).
2. Client asks the returned DNS server about example.github.io, gets back
   an IP address for the (web) server.
3. Client contacts the web server on port 443, initiating a TLS handshake.
   Gets back a TLS certificate issued for *.github.io.
4. Client checks that the contacted DNS name is actually covered by that
   certificate.  (OK for example.github.io, not for deeper levels.)
5. Client sends an HTTP request over the established TLS connection, asking
   for the well-known URL's path component.  The server answers "404 Not
   found" or similar.
6. Client decides that the direct method failed and goes back to step 2,
   this time trying the openpgpkey.example.github.io DNS name.  Step 5 will
   succeed this time, returning the OpenPGP certificate and public key.

Now you need to know that the "handshake" part consists of several
back-and-forth data transfers, which is why surfing over satellite links is
awful and why stuff like QUIC / HTTP/2 is being developed to try and reduce
these round trips.  There are also many points where things can go wrong,
e.g. the web server simply not answering on port 443 because it is really
only an e-mail server and WKD is handled somewhere else (where we get to in
the second round).  That involves the connection needing to time out first.
So repeating steps 2 to 5 is what I mean by "overhead", which may very well
cause user-noticeable delays.

In contrast to that, *first* querying the sub-sub-domain
openpgpkey.example.github.io would ideally return NXDOMAIN and we can
switch immediately to trying the direct method.  Less time and energy
wasted.  I say ideally, because on github.io specifically, that doesn't
happen.  Which is fine in itself.  But the WKD spec lets us interpret that
as "cool, the openpgpkey subdomain resolved, so let's use the advanced
method!"  If it then fails at a later step (TLS certificate validity in
this case), sane implementations rightfully report a misconfiguration and
abort, because it may just as well be a MITM attacker fooling with the TLS
certificate.

What could be done in the WKD spec and / or GnuPG is to fall back to the
direct method not only when the openpgpkey subdomain is unresolvable, but
also if any other error happens during the advanced method.  I don't see
any obvious *security* implications in that.  But providers like GitHub
would then go through two complete HTTPS connections before the client
notices "WKD just isn't set up properly there".  The current WKD draft
tries to avoid that duplicated server load by aborting early based on the
DNS response.

What Sequoia could do is fix their TLS host name check (step 4 above) to
only match one level with a wildcard, slightly increasing security.  But
that is their call, and I'm not knowledgeable enough about the official TLS
validity rules to point a finger.  Just the fact that GnuPG and curl choke
on your example indicates that it might be the Right Thing to do.

What GitHub could do (easily) is follow the WKD recommendation and
specifically block any "openpgpkey" sub-subdomain from resolving.  Just in
case someone is crazy enough (no offense, Stefan :-) to try abusing their
free web page service for WKD.  Remember it is *their* domain, even if they
grant you free usage of a subdomain.  And WKD specifically delegates some
trust to the *domain owner*, so they have every right to not care about
OpenPGP at all and let WKD requests fail ungracefully.
Even the right to serve an invalid wildcard certificate for sub-subdomains
(which is still bad though).

Sorry for the long read, but I hope it clarifies the situation.

Regards
André
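Steps 3 and 4 from the list above are where Stefan's setup trips up.  In
Python terms, the check looks roughly like the following sketch; it is just
an illustration of where a client notices the mismatch, not what any of the
implementations literally do:

import socket
import ssl

def tls_name_check(host: str, port: int = 443) -> bool:
    # Open a TLS connection and let the library verify that the presented
    # certificate actually covers `host` (steps 3 and 4 above).
    # Returns False on a host name mismatch instead of silently retrying.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except ssl.SSLCertVerificationError:
        return False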
Re: WKD for GitHub pages
Hi Stefan,

maybe I'm not the only one here who doesn't fully follow what your
"proposal" actually is.  To me, it sounds like you are misunderstanding
some things and therefore think you are making a superior proposal, where
it is actually based on wrong assumptions.

On 12/01/2021 18.05, Stefan Claas via Gnupg-users wrote:
> please ... openpgpkey is *not* a part of a real (sub)domain, which a
> user of any domain service has to define in a record.

I do not understand this statement at all.  Could you please elaborate?

> Please accept also that a modern OpenPGP software like sequoia-pgp
> can handle this *adequately* with the direct method first!

It seems adequate for *you*, but as I explained, it would put a burden on
both the client and the involved webservers to handle it that way.  In case
the advanced method is available and the direct method is not, testing for
the direct method first is not a cheap operation.  It has also been pointed
out repeatedly in this thread that Sequoia apparently does not properly
check the TLS certificate, which you have proven with your example setup.
That could be called "modern" or "insecure".  It has nothing to do with the
ordering of the two methods.

> Additionally I have received from GitHub a very nice reply, which I and
> I guess all will accept here!
>
> Quote: "... however I don't believe GitHub is in a position to try and
> persuade a software author to change or fix their software."

I agree they shouldn't try that.  Your question to them probably hinted at
the problem being something which is not in their control, while actually
the real problem is something else which they could control on their side
(see below).

> At least the global OpenPGP community is now aware of my proposal
> and I repeat here once again: GitHub (which I am not affiliated with in
> any form) has a *proper* SSL cert and github.io pages are properly
> working subdomain sites, wiich GnuPG's and gpg4win's WKD implementation

This is plain wrong, as Ingo has pointed out.  But let me explain to you
why I think so.

The certificate is issued for *.github.io.  So it is valid for anything
like example.github.io, openpgpkey.github.io, whatever.github.io.  But it
is NOT VALID for any deeper level of subdomain, like foo.bar.github.io or
openpgpkey.example.github.io.  That's just how TLS certificate validity is
defined.

However, GitHub apparently still presents that certificate when an HTTPS
connection is made to the deeper subdomains, e.g.
openpgpkey.example.github.io.  For this connection, the certificate is
definitely NOT VALID, as curl or gnupg do point out.  Sequoia seems to
apply different rules for the hostname check, so it seems to "just work"
for you.  In fact, it should only accept a certificate for
openpgpkey.example.github.io or *.example.github.io.

So there are two "bugs" involved here: 1. GitHub presenting an invalid
certificate for the sub-subdomain and 2. Sequoia not noticing that.
Neither of these are bugs in GnuPG.  If you can accept these facts, then it
makes sense to further discuss what could be changed where, to make your
desired setup work.  Maybe that discussion will lead to a concise change
proposal.

One more question: You're talking about OpenPGP key discovery setups for
families and small groups, IIUC.  And that should involve WKD and GitHub.
But how should these people actually get working e-mail addresses
@example.github.io?  WKD very specifically ties the key discovery to the
control over the involved domain.
It moves part of the trust relationship to the domain administrator.  So
who is actually in control of those e-mail addresses?

I hope this mail will not upset you.  I'm just trying to clarify what you
might have misunderstood that leads to people not understanding or agreeing
with your proposal.  I don't mind being proven wrong if it was in fact my
misunderstanding.

Kind regards
André
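A tiny sketch of the host name matching rule I mean, ignoring all the other
checks a real TLS stack (see RFC 6125) performs.  It only demonstrates the
one-level property discussed above:

def wildcard_matches(pattern: str, hostname: str) -> bool:
    # "*" covers exactly one DNS label, so "*.github.io" matches
    # "sac001.github.io" but not "openpgpkey.sac001.github.io".
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

assert wildcard_matches("*.github.io", "sac001.github.io")
assert not wildcard_matches("*.github.io", "openpgpkey.sac001.github.io")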
Re: WKD for GitHub pages
On 12/01/2021 09.25, Stefan Claas via Gnupg-users wrote:
> It would be nice to know why the advanced method was added. In case
> the direct method would not be sufficent or would have security issues
> I would think that than one replaces the direct method with advanced
> one and then we only need only one method, in order that this works.

A domain is not automatically tied to a webserver.  It might so far only be
used for e-mail, and just to set up WKD, one might not want to run a
webserver under the second-level domain itself.  Therefore the standardized
"openpgpkey" subdomain, which can easily point to a different IP.  That
makes it easy to completely separate the infrastructure needed for WKD from
anything else, like a webserver for a web page, webmail or other services.
In addition, that separate server might serve WKD keys for a bunch of
different domains through redirects, hence it makes sense to separate the
URLs per domain.  It just gives the admin additional flexibility by not
forcing them to make a certain URL under the main domain work.

> And if we must have two methods, why is the order not, like one would
> think: check direct first and if this does not work check advanced?
> I must admit I do not understand the programming logic.

That's easy: If openpgpkey.example.org exists, we can be certain that
example.org exists as well.  So the check for the openpgpkey subdomain must
come first if its mere existence decides which method is tried.  Otherwise
you would get HTTPS connections for every WKD request on the example.org
server, which fail if the direct method is not supported.  Just to make
another HTTPS connection to openpgpkey.example.org to try the advanced
method next.  That's a lot of overhead on both the client and server side,
compared to the two DNS queries you need to make either way.

Hope that helps.
André
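In code, that decision boils down to something like this sketch, with
getaddrinfo() standing in for whatever resolver a client actually uses
(real implementations may cache the result):

import socket

def first_method_to_try(domain: str) -> str:
    # The mere existence of the openpgpkey sub-domain (explicit record or
    # DNS wildcard) decides which method is tried, per the current draft.
    try:
        socket.getaddrinfo(f"openpgpkey.{domain}", 443)
        return "advanced"
    except socket.gaierror:  # NXDOMAIN or similar
        return "direct"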
Re: WKD for GitHub pages
Hi Stefan,

your key seems to work fine over that WKD setup.

> Now Wiktor's WKD checker gives the proper
> results in the first part, not sure why not in the
> second part.

You don't need the "advanced" method if the direct one already works.  The
two methods basically exist to provide flexibility for server admins to
decide whether they want to issue a TLS certificate for the whole domain
matching the e-mail address, or just serve the WKD stuff through a
dedicated "openpgpkey" subdomain.  The latter could be easier if the WKD
webserver should be isolated from other things on the domain.

In your setup, the valid TLS certificate for sac001.github.io is the only
one you'll get, so the "direct" method fits perfectly.  Nice idea actually,
but you'd have to check whether GitHub actually allows such use for
"arbitrary" data distribution.

Good night.
André
Re: WKD for GitHub pages
Hi Stefan,

> I just started to set-up a github-page and have also verified
> the page via Brave. I tried to set-up WKD for the page, like
> I did in the past for my 300baud.de Domain, but fetching
> the key with GnuPG does not work for me. :-(

You could try the online WKD checker here:
https://metacode.biz/openpgp/web-key-directory

It reports that the policy file is missing, which I think is a hard
requirement, no?  Also make sure that the MIME content type and
Access-Control-Allow-Origin headers are set correctly.

Kind regards,
André
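For a quick manual look at those points, something like this little Python
snippet could help.  It only fetches the policy file for the direct method
and prints the headers; it does not validate the values, and the online
checker above is far more thorough:

from urllib.request import urlopen

def inspect_direct_setup(domain: str) -> None:
    # Fetch the Policy Flags file and show the headers mentioned above.
    url = f"https://{domain}/.well-known/openpgpkey/policy"
    with urlopen(url) as resp:
        print("Policy file status:", resp.status)
        print("Content-Type:", resp.headers.get("Content-Type"))
        print("Access-Control-Allow-Origin:",
              resp.headers.get("Access-Control-Allow-Origin"))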
Re: Future OpenPGP Support in Thunderbird
Hi Patrick,

>The Thunderbird developers and I have therefore agreed that it's much
>better to implement OpenPGP support directly in Thunderbird. The set of
>functionalities will be different than what Enigmail offers, and at
>least initially likely be less feature-rich. But in my eyes, this is by
>far outweighed by the fact that OpenPGP will be part of Thunderbird and
>no add-on and no third-party tool will be required.

Great news overall, and thanks for the announcement.  Thunderbird with
direct OpenPGP integration has long been overdue, IMHO.

According to the wiki page [1], I understand that the secret keys will be
managed by Thunderbird itself.  That is quite a limitation, I think, in
contrast to reusing a GPG agent of some sort.  Depending on the chosen
alternative, the agent might offer better OS integration, a long-proven
secure process architecture, possible reuse with only one central key
store, and most of all integration with hardware tokens.  I personally
would not entrust my private keys to a mail application that also displays
HTML and possibly executes JavaScript etc., after what we have seen with
Efail for example.

So could you please elaborate, or extend the wiki page, to clear up how
hardware tokens fit into the new picture?

Thanks and kind regards.
André

[1]: https://wiki.mozilla.org/Thunderbird:OpenPGP:2020
Re: Why exactly does pinentry fails with gpg-agent and ssh support?
On 2018-01-22 18:06, André Colomb wrote:
>> the systemd user service takes care of automatically launching the
>> gpg-agent when the user connects to it via the ssh-agent protocol, so
>> this isn't required when using systemd.
>
> I can't see how it does that in my packaged Ubuntu version (2.1.15),
> there is no gpg-agent.socket unit file anywhere?

Seems like the relevant systemd unit file examples were added in commit
57e95f5413e21cfcb957af2346b292686a5647b7, shortly after 2.1.15 (included in
Debian / Ubuntu) was released.  As far as I can see, the new
socket-activated user units should be installed with current packages in
Debian testing and Ubuntu bionic.  I might try manually upgrading to
2.2.4-1ubuntu1 and report any findings.

Regards
André
Re: Why exactly does pinentry fails with gpg-agent and ssh support?
Hello Daniel,

I'm on Ubuntu 17.10 with GnuPG 2.1.15, by the way.

Daniel Kahn Gillmor wrote on 2018-01-22 12:53 (UTC+0100):
> It may also depend on how the session itself is started.  Maybe one of
> you is starting the user session in non-graphical mode (either a vt
> login, or maybe ssh?), while the other one is starting it directly from
> a graphical display manager?

The session is started by GDM3, using the vanilla gnome-session scripts
(not the adapted ubuntu-session, also based on GNOME 3).  The systemd user
unit file is copied from /usr/lib/systemd/user/gpg-agent.service, with the
Upstart-specific "initctl" command line commented out.  The main difference
I see here is that I have enabled the user unit by symlinking from
~/.config/systemd/user/default.target.wants/, whereas the Ubuntu package
includes the symlink in
/usr/lib/systemd/user/graphical-session-pre.target.wants/.

acolomb@barnov:~$ systemctl --user status gpg-agent.service
   Loaded: loaded (/home/acolomb/.config/systemd/user/gpg-agent.service;
   enabled; vendor preset: enabled)

> do you have dbus-user-session installed? (it is recommended)

Yes.

(from your other message:)
> the systemd user service takes care of automatically launching the
> gpg-agent when the user connects to it via the ssh-agent protocol, so
> this isn't required when using systemd.

I can't see how it does that in my packaged Ubuntu version (2.1.15); there
is no gpg-agent.socket unit file anywhere.

Any other ideas on how to debug this?  What logging should I enable for
gpg-agent, and how?  By the way, it affects both my Yubikey as well as
file-based authentication subkeys, so it's apparently not specific to
scdaemon.

Regards
André
Re: Why exactly does pinentry fails with gpg-agent and ssh support?
On 2018-01-22 08:43, Werner Koch wrote:
>> As far as I understand, because I use `systemd`'s user service, whenever
>> I want to unlock an authentication key I need to run the command
>> `gpg-connect-agent updatestartuptty /bye`.
>
> Although I have no experience with the peculiarities of the --supervised
> mode, there is no need to run the updatestartuptty command.  That command
> is only used to switch gpg-agent's default $DISPLAY and tty to the one
> active in the shell you run this command.  This is required because the
> ssh-agent protocol has no way to tell gpg-agent (or ssh-agent) the
> DISPLAY/tty which shall be used to pop-up the Pinentry.

I can confirm that it actually IS necessary to send "updatestartuptty" for
the ssh-agent functionality to work in this scenario.  The gpg-agent
process started by systemd's user session has no $DISPLAY and no $GPG_TTY
set (looking at /proc/###/environ).  Its cmdline does not contain
--supervised either.

I always wondered why I got the message "agent refused operation" when
using an SSH key from gpg-agent.  Restarting gpg-agent manually after
logging in was my workaround thus far, but today I found out that
updatestartuptty suffices.  The strange thing is, I could already use the
GPG part of gpg-agent before issuing that command.  Why does that behave
differently?

Can something be done to the systemd user unit file so the process gets
told the correct $DISPLAY at least?

Kind regards
André
Re: Local-signing without (offline) private master key
Damien Goutte-Gattat wrote on 2016-09-12 14:16 (UTC+0200):
> If you're already using GnuPG >= 2.1.10 (with support for the TOFU
> model), I would argue this is your best option.

This sounds reasonable.  I'm on Ubuntu 16.04 with GnuPG 2.1.11, so the TOFU
stuff seems to work fine.

It seems hard to discover the current TOFU ratings for individual keys.
The man page only says "see: [trust-model-tofu]" in some places, and there
is no option to show the trust status except for the classic WoT checking.
Looking at the SQLite database at least gives some indication, but it is
not easy data to interpret.  Did I miss some option here, or are any such
additions planned?

Regards
André
Local-signing without (offline) private master key
Hi all,

this is my first post to GnuPG-users, please be gentle :-)

My OpenPGP setup currently includes an offline master key (see attached
public key) with three subkeys on a Yubikey USB "smartcard".  Amongst them
is a signing subkey with the "usage: S" flag, but only the master key has
the Certify capability (usage: SC).

Now I want to import someone else's key to verify a signature.  In order to
verify that signature, I need to at least locally sign the owner's key,
AFAIK.  However, I would need my offline master key (read: really
inconvenient) to issue a signature.  What is the recommended practice if I
only want to verify message integrity, but don't have the master key with
Certify ability available?

One solution that comes to mind would be to add a new certification subkey
that I keep on my machine instead of the smartcard, and only use it for
local signatures.  Would that make sense, or what complications should I
expect?

Building a Web of Trust with an offline master key seems rather difficult,
even just to verify incoming emails.  Maybe the upcoming TOFU trust model
would help my usage pattern?

Thanks for any pointers or explanation.

Kind regards,
André