Re: Ok this is a stupid questions
On 2/26/2019 at 3:28 PM, "Stefan Claas" wrote:

> And maybe another FOSS point? How about issuing Warrant Canaries?
> I have seen that VeraCrypt does this.

=====

Yes. The latest one is here: https://www.idrix.fr/VeraCrypt/canary.txt

Interesting, but it still boils down to *trust*. I would trust WK and the
GnuPG team even if they didn't *sign* a Warrant Canary (we all, sort of,
already trust the verification of new GnuPG releases via his signature).
And if we *don't* trust, then signing a Warrant Canary with the same
signing key as the GnuPG release wouldn't help ;-)

vedaal

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users
Re: Ok this is a stupid questions
On Tue, 26 Feb 2019 13:57:01 -0500, ved...@nym.hush.com wrote:

> On 2/26/2019 at 10:29 AM, "Stefan Claas" wrote:
>> I have learned in the past to trust nobody. Therefore I would
>> not rely on people from the GnuPG ecosystem and what they say.
>
> It depends on how realistic your threat model is.

Well, mine is actually very low; otherwise I would only read the list via
Tor for tips and tricks, and neither publish keys on key servers nor use
SMTP to submit encrypted messages. ;-)

> For example, has anyone you know ever checked how the
> compilers work? (Reviewed gcc's source code, and the hardware
> necessary to make it run, to ensure that nothing is
> 'added/subtracted/altered' when it gets to machine language? Even
> more difficult when it is a proprietary compiler.)

You bring up an interesting question, IMHO. Let's assume the tool chain is
in good condition, but do we know whether FOSS coders use online computers
to code, and whether their computers have been hacked too? And if so, do
coders always have checksums handy (on paper) for comparison, or are there
Linux tools available which would detect changes immediately?

And maybe another FOSS point? How about issuing Warrant Canaries? I have
seen that VeraCrypt does this.

Regards
Stefan
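The "checksums handy for comparison" idea above is exactly what the standard coreutils checksum tools provide. A minimal sketch (the file name is illustrative, not from the thread; tools like AIDE or Tripwire automate the same idea for whole directory trees):

```shell
# Record a SHA-256 checksum of a file, then verify it later to detect
# any modification. sha256sum ships with GNU coreutils.
echo "pretend source code" > example.c          # stand-in for a real source file
sha256sum example.c > example.c.sha256          # record the checksum
sha256sum -c example.c.sha256                   # verify: prints "example.c: OK"
```

If the file changes by even one byte, `sha256sum -c` reports FAILED and exits non-zero. Note the limit: this only detects changes relative to the recorded value, so the recorded checksum itself must be stored somewhere the attacker can't reach (hence the "on paper" remark above).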
RE: Ok this is a stupid questions
Hello,

As a follow-up to my previous post, let me emphasize that the size
expansion is not related to the compression that is applied during
encryption.

My issue is that files are being transferred to me and, for a cause I am
still trying to track down, gpg begins to decrypt before a file has fully
arrived. From my perspective this appears to send gpg into a tailspin: the
process can continue for a week or more and does not seem to complete.

From a design perspective, I expect the usual replies of "if that is not
what you want to do, then don't do it". Yes, I know. What I am looking to
do is understand why it goes into this race condition instead of erroring
out.

My ask of the gpg listers: has anyone ever seen this behavior?

From: Michael Holly
Sent: Monday, February 25, 2019 8:14 AM
To: gnupg-users@gnupg.org
Subject: Ok this is a stupid questions

So I completely preface that this question is not a valid use case for
gpg. I know, I get it.

I have a potential issue that I'm trying to diagnose. I'm trying to
understand how gpg will react to the input file size changing during the
encrypt or decrypt step. Right now it appears that the gpg process goes a
bit crazy and the 200 MB file I am decrypting becomes 1.2 TB or greater.

Here is the order of events:

1. File lands on my system.
2. PGP decrypt is invoked on the file.
3. Since the file is not truly done being sent to me, the file grows in
   size.
4. GPG seems to expand the decrypted file many times over.

What I suspect is that instead of erroring out, GPG starts the decrypt
process over and appends the new output to the previous cycle. I have not
tested this, but will soon. I just wanted to see if anyone else has seen
this happen.

Thanks
Michael
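The race described above (step 2 firing before step 3 has finished) is commonly avoided by never exposing a file under its final name until the transfer is complete. A sketch of that pattern, assuming the sender side can be changed; the file names are placeholders, since the thread does not say how the files actually arrive:

```shell
# Write incoming data under a temporary name, then rename it once the
# transfer is done. rename() is atomic within one filesystem, so a
# watcher that only picks up *.gpg names never sees a partial file.
mkdir -p incoming
echo "simulated transfer payload" > incoming/data.gpg.part   # transfer in progress
mv incoming/data.gpg.part incoming/data.gpg                  # publish atomically
# only now would the decrypt job run, e.g.: gpg -d incoming/data.gpg > data.out
```

Many transfer tools already do this for you (rsync writes to a dot-file and renames; some SFTP setups can be configured the same way), in which case only the watcher's file-name filter needs adjusting.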
Re: Ok this is a stupid questions
On 2019-02-25 at 14:13 +0000, Michael Holly wrote:

> What I suspect is that instead of erroring out, GPG starts the decrypt
> process over and appends the new output to the previous cycle. I
> have not tested this, but will soon.
>
> I just wanted to see if anyone else has seen this happen.

Not that it couldn't happen, but I would find it strange for gpg to do
that. Erroring out would make more sense.

Note that GnuPG can work in filter mode, so you can do

  cat incomplete_file | gpg -d > output_file (*)

in which case it really can't start over. I don't think it would process
things differently, but it is worth trying.

How are you invoking gpg? Which version are you running?

(*) Yes, this is a useless use of cat(tm). In fact, it's quite likely cat
will be faster than whatever is transferring the file; piping e.g.
`wget -O -` would make more sense.

(**) Remember that even though you are getting an incomplete output,
unless gpg terminates with no error after verifying the data, **there is
no guarantee about the contents**. Don't pipe that output to bash or
otherwise treat it as trusted data! Wait for gpg to finish, and check
that it was happy with what it was given, before acting on the output.

Cheers
Ángel
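The warning in (**) can be wired into the invocation directly: act on gpg's exit status, and discard the output otherwise. A minimal sketch; "incoming.gpg" and "output_file" are placeholder names:

```shell
# Decrypt, but only treat the plaintext as usable if gpg itself
# reports success; a partial or tampered input must not leak through.
if gpg --batch --decrypt incoming.gpg > output_file 2>/dev/null; then
    echo "gpg verified the data; output_file is safe to process"
else
    rm -f output_file                 # partial output is untrusted: discard it
    echo "gpg failed; not using the output" >&2
fi
```

The key point is that the redirection creates `output_file` even on failure, so the failure branch must clean it up (or write to a temporary name and rename only on success, as with the transfer itself).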
Re: Ok this is a stupid questions
On 2/25/2019 at 2:29 PM, "justina colmena via Gnupg-users" wrote:

> That's why I have to call foul play on proprietary operating systems.
> Encryption is theoretical only: in practice useless, moot, crippled,
> broken, and terminally back-doored with all the malware, adware,
> spyware, worms, viruses, trojans, keyloggers, and screenscrapers
> inherent to such systems as Google Android, Microsoft Windows, and
> Apple OS. The Democrats will stop at nothing to keep it that way at
> all costs, and the Republicans just don't care.

=====

Maybe *proprietary* encryption is theoretical only. What problems do you
have with GnuPG as a FOSS program?

Ordinarily, I'm on the cautious [maybe even borderline paranoid ;-)] side
of things, and I don't trust things lightly. But I *DO* trust GnuPG, WK,
and the host of other people who have put time and effort into GnuPG,
releasing the source code routinely so that it can be compiled by the end
user on FOSS platforms (Linux, Ubuntu, etc.).

You sound capable enough to review source code and to use a Linux variant.
Why do you think GnuPG is useless if you check the source code, run it on
hardware you trust and a Linux variant you trust, with a Chromium/Iron
browser, and avoid anything Google or Microsoft or Apple or any non-FOSS
product?

If I misunderstand you, and your beef is not with GnuPG but only with
Google, Android, MS, Apple, etc., then I apologize.

That said, can I ask you to trim the political rants from your posts, much
as they may be deserved? There are other forums ideally suited to that.

Thanks,
vedaal
Re: Ok this is a stupid questions
On February 25, 2019 5:13:32 AM AKST, Michael Holly wrote:

> So I completely preface this question is not a valid use case for gpg.
> I know, I get it.
>
> I have a potential issue that I'm trying to diagnose. I'm trying to
> understand how gpg will react to the input file size changing during
> the encrypt or decrypt step.
>
> Right now it appears that the gpg process goes a bit crazy and the 200
> MB file I am decrypting becomes 1.2 TB or greater.
>
> Here is the order of the events
>
> 1. File lands on my system.
> 2. PGP decrypt is invoked on the file.
> 3. Since the file is not truly done being sent to me, the file
>    grows in size.
> 4. GPG seems to expand the decrypted file many times over.
>
> What I suspect is that instead of erroring out, GPG starts the decrypt
> process over and appends the new output to the previous cycle. I
> have not tested this, but will soon.
>
> I just wanted to see if anyone else has seen this happen.
>
> Thanks
>
> Michael

News media questions? Many times large files are compressed before being
encrypted, and there are information-theoretic reasons to do so. Aside
from efficiency and possibly slightly better security, it is impossible
to usefully compress files after they are encrypted, because the
repetitive or redundant patterns on which compression is based are
precisely what the encryption obfuscates and conceals. In any case, if
the file was compressed before encryption, then it will have to be
expanded back to its original size after decryption.

Then there is the base64 ASCII armor, which causes a ciphertext expansion
of some 35% by using only 6 of the 8 bits of each byte, plus extra
formatting for new lines and such.

So how did the Firstlook Media reporters from The Intercept come to give
up their GPG keys and go so mainstream corporate?
They never got along all that well with the military, and they're not even
remotely "alternative" anymore, if they ever were. It's all establishment
Democrat party-line mainstream media, and "Don't you dare try to get smart
and buck the labor union!" Holed up in Brazil somewhere, pushing that
atrocious "7me" spyware app on my Android phone, as if that gay male
reporter is suddenly a good Christian sitting on the church pew, keeping
the Sabbath so obediently on the Seventh Day and circumcising his kids
under the law of Moses.

That's why I have to call foul play on proprietary operating systems.
Encryption is theoretical only: in practice useless, moot, crippled,
broken, and terminally back-doored with all the malware, adware, spyware,
worms, viruses, trojans, keyloggers, and screenscrapers inherent to such
systems as Google Android, Microsoft Windows, and Apple OS. The Democrats
will stop at nothing to keep it that way at all costs, and the Republicans
just don't care.

--
A well regulated Militia, being necessary to the security of a free State,
the right of the people to keep and bear Arms, shall not be infringed.
https://www.colmena.biz/~justina/
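The ~35% armor-expansion figure mentioned earlier in the thread is easy to sanity-check: base64 alone costs 4/3 (6 usable bits per 8-bit output byte), and line wrapping adds a little more. GnuPG's actual armor additionally carries BEGIN/END lines and a CRC, which this sketch ignores:

```shell
# Measure raw base64 expansion on 300 kB of random data
# (random data, like ciphertext, does not compress).
head -c 300000 /dev/urandom > sample.bin
base64 sample.bin > sample.b64
wc -c sample.bin sample.b64    # expansion is ~4/3 plus newline overhead
```

With GNU coreutils' default 76-column wrapping this yields roughly a 1.35x ratio, which matches the figure quoted above.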