Sorry, I've had to invert the quoted message so the answers make sense.

On 10/3/19 5:00 AM, Matthew Woehlke wrote:
> On 01/10/2019 20.47, Roland Hughes wrote:
>> If they targeted something which uses XML documents to communicate, they
>> don't need to brute force attempt everything, just the first 120 or so
>> bytes of each packet until they find the one which returns
>>
>> <?xml version=
>>
>> and they are done.
>
> That seems like a flaw in the encryption algorithm. It seems like there
> ought to be a way to make it so that you can't decrypt only part of a
> message. Even an initial, reversible step such as XOR-permuting the
> message with some well-known image of itself (e.g. "reversed") might
> suffice?

Not a flaw in the algorithm; it just seems to be a flaw in the communications. This isn't partially decrypting a packet. You encrypt a known plaintext under every key+algorithm combination supported by TLS/SSL and store the results in a fingerprint database. You then run a sliding window of the fingerprint size across each captured packet, doing keyed hits against that database. You "dust for prints."
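
To make the scan concrete, here is a minimal sketch of the idea. The 16-byte window, the in-memory set, and the function name are my own assumptions for illustration, not anyone's actual tooling:

// Hypothetical sketch of "dusting for prints": slide a window across a
// captured packet and look each window up in a precomputed fingerprint set.
// The fingerprints are assumed to be the leading ciphertext bytes produced
// by encrypting a known prefix (e.g. "<?xml version=") under every
// key+algorithm pair that was precomputed.
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

constexpr std::size_t kWindow = 16;   // assumed fingerprint width in bytes

bool dust_for_prints(const std::vector<uint8_t>& packet,
                     const std::unordered_set<std::string>& fingerprints)
{
    if (packet.size() < kWindow)
        return false;
    for (std::size_t i = 0; i + kWindow <= packet.size(); ++i) {
        std::string window(reinterpret_cast<const char*>(packet.data() + i),
                           kWindow);
        if (fingerprints.count(window) != 0)
            return true;   // key+algorithm identified, no brute force needed
    }
    return false;
}

The expensive part is building the database once; scanning each packet afterwards is trivial.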

>> To really secure transmitted data, you cannot use an open standard which
>> has readily identifiable fields. Companies needing great security are
>> moving to proprietary record layouts containing binary data. Not a
>> "classic" record layout with contiguous fields, but a scattered layout
>> placing single field bytes all over the place. The "free text" portions
>> like name and address are not only stored in reverse byte order, but
>> also run through a translate under mask first. Object Oriented languages
>> have a bit of trouble operating in this world, but older 3GLs where one
>> can have multiple record types/structures mapped to a single buffer
>> (think a union of packed structures in C) can process this data rather
>> quickly.
> How is this not just "security through obscurity"? That's almost
> universally regarded as equivalent to "no security at all". If you're
> going to claim that this is suddenly not the case, you'd best have some
> *really* impressive evidence to back it up. Put differently, how is this
> different from just throwing another layer of encry^Wenciphering on your
> data and calling it a day?

Well, first we have to shred some marketing fraud which has been in existence for a very long time.

https://en.wikipedia.org/wiki/Security_through_obscurity

"Security through obscurity (or security by obscurity) is the reliance in security engineering on design or implementation secrecy as the main method of providing security to a system or component."

I wonder if Gartner was paid to market this fraud. They've certainly marketed some whoppers in their day, like declaring Microsoft Windows an "open" platform back in the 90s when it was one of the most proprietary systems on the market. I can't believe nobody went to prison over that.

At any rate, the peddlers of encryption have been spewing this line for years. In fact, the line is much truer than they wish to admit. When you press them on it, they are forced to perform a "Look at the Grouse" routine.

https://www.youtube.com/watch?v=493jZunIooI

_ALL_ electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.

Your "secrecy" is the key+algorithm combination. Once that secret is learned, you are no longer secure. People lull themselves into a false sense of security by regurgitating another urban legend:

"It would take a super computer N years running flat out to break this encryption."

I first heard that uttered when the Commodore SuperPET was on the market.

https://en.wikipedia.org/wiki/Commodore_PET

I believe they were talking about 64-bit encryption then. Perhaps it was 128-bit? It doesn't matter. If someone wants to do a brute force attack and has 6-30 million infected computers in their botnet, they can crush however many bits you have much sooner than encryption fans are willing to believe.

They can easily build fingerprint databases with that much horsepower, assuming they buy enough high quality storage. You really need Western Digital Black drives for that if you are running a single drive or a mirrored pair. I haven't seen anyone use a SAN for high speed, high volume data collection, so I don't know how well those hold up. During my time at CTS, one of the guys was running high speed data collection tests with a rack of pressure and leak testers running automated tests as fast as they could. A Black would make it roughly a year, a Blue around 6 months, and a Red was just practice at replacing a drive.

One of the very nice things about today's dark world is that most of its inhabitants are script kiddies. If they firmly believe they have correctly decrypted your TLS/SSL packet yet still see garbage, they assume another layer of encryption. They haven't been in IT long enough to know anything about data striping or ICM (Insert Character under Mask).
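
For anyone who has never seen this flavour of thing, here is a toy illustration of reversing a field and translating it under a mask. It is not any particular mainframe instruction; the table and mask here are made up for the example:

#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy 256-byte substitution table. A real shop would agree on a secret
// table out of band; this one is just an example permutation.
std::array<uint8_t, 256> make_xlate_table()
{
    std::array<uint8_t, 256> t{};
    for (int i = 0; i < 256; ++i)
        t[i] = static_cast<uint8_t>((i * 167 + 13) & 0xFF);  // 167 is odd, so every byte maps uniquely
    return t;
}

// Reverse the "free text" field, then translate only the byte positions
// selected by the mask (one mask bit per position, repeating every 8 bytes).
void scramble_free_text(std::vector<uint8_t>& field, uint8_t mask)
{
    static const auto table = make_xlate_table();
    std::reverse(field.begin(), field.end());
    for (std::size_t i = 0; i < field.size(); ++i)
        if (mask & (1u << (i % 8)))
            field[i] = table[field[i]];
}

Run something like that over a name field before it ever reaches the encryption layer and a "successful" decryption still looks like garbage to an automated checker.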


>> If you are using XML, JSON or any of the other trendy text based
>> open standards for data exchange, you've made it easy for the hackers.
>> They don't have to put any human noodling into determining if they
>> cracked your transmission or not. It can be fully automated. As soon as
>> one of the attempts returns something like this
>>
>> <firstName>John</firstName>
>>
>> or this
>>
>> "firstName" : "John"
>>
>> a tiny bit of code which runs through the result looking for an opening
>> tag with a matching closing tag or colon delimited quoted strings can
>> tell a brute force method to stop and notify the hacker of success.
> Isn't the fix for this just to encrypt your data twice (using different
> keys, or maybe better, different algorithms)? Offhand, it seems like
> this should exponentially increase the difficulty of decryption for each
> layer added.

Be extremely careful with that knee-jerk response. Most people using it won't have the patience to really test it.

Back in 2012 I was working on the IPGhoster project. We were initially doing three layers of encryption, well, because the party who was supposedly buying (and funding) it wanted at least that. We had some great stuff too. Not just open source, but libraries which supposedly cost six figures as well. Every packet had to have a unique key+algorithm pair. No two packets used the same pair for the duration of the session. (Yeah, there was some serious threading going on to make real-time feel like real-time.)
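
I obviously can't post IPGhoster's code, but the general shape of "no two packets share a key+algorithm" is roughly this. The types and the pre-agreed key schedule are placeholders, not the real design:

#include <cstddef>
#include <cstdint>
#include <functional>
#include <stdexcept>
#include <vector>

// One entry per algorithm available to the session; encrypt() stands in
// for whatever library call actually does the work.
struct CipherSuite {
    std::function<std::vector<uint8_t>(const std::vector<uint8_t>& plaintext,
                                       const std::vector<uint8_t>& key)> encrypt;
};

struct Session {
    std::vector<CipherSuite> suites;          // algorithms negotiated for this session
    std::vector<std::vector<uint8_t>> keys;   // pre-agreed single-use keys
    std::size_t next = 0;                     // next unused key+algorithm pair

    // Each packet consumes a fresh key+algorithm pair; nothing is ever reused.
    std::vector<uint8_t> encrypt_packet(const std::vector<uint8_t>& plaintext)
    {
        if (next >= keys.size())
            throw std::runtime_error("session exhausted its one-use keys");
        const CipherSuite& suite = suites[next % suites.size()];
        const std::vector<uint8_t>& key = keys[next];
        ++next;
        return suite.encrypt(plaintext, key);
    }
};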

I'm good when it comes to architecting solutions. Some clients even call me brilliant, but that may have more to do with trying to reduce my rate than with honesty. I'm not stating this to brag. I'm giving you a frame of reference so you understand what it means when I say that calling Mr. Keith, the guy who came up with our edge cases for testing, a genius is a severe understatement. He came up with a set of test cases and, sure enough, this system which worked fine with simple XML, JSON, email and text files started producing corrupted data at the far end with the edge cases.

Moral of the story: you can never be certain that two or more encryption solutions will play well together.

There were no "software bugs" in the libraries. Those got pounded on. His edge cases found the flaws in the algorithms themselves: flaws which wouldn't show up until they were compounded by flaws in other algorithms. No, I don't still have the test cases, nor do I remember what they were. I was a hired gun on that project and they owned everything. What they did and/or who they told is up to them.

Maybe all of those things have since been fixed, but I would not make such an assumption.

Even if all of that stuff has been fixed, you have to be absolutely certain the encryption method you choose doesn't leave its own tell-tale fingerprint. Some used to have visible oddities in the output when they encrypted runs of contiguous spaces, nulls, etc. Plus, there are quite a few places like these showing up online. (Admittedly this first one is for hashes, but I didn't feel like doing the kind of searches which could land me on the FBI "follow them" list.)

https://www.onlinehashcrack.com/hash-identification.php

discussions like this

https://security.stackexchange.com/questions/3989/how-to-determine-what-type-of-encoding-encryption-has-been-used

Ah, here's one which could be put to nefarious use.

https://codebeautify.org/encrypt-decrypt


Personally, I vote for data striping and ICM, primarily because the script kiddies can't really be bothered with anything that isn't already a "free" open source tool _and_, as far as I know, it doesn't trigger any of those problems where multiple nested layers of encryption produce corrupted data on the other end.
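
Since "data striping" means nothing to most people under forty, here is a toy example of the kind of scattered layout I'm talking about, in the spirit of a union of packed structures. Every field name, offset and the 64-byte record size are invented for illustration; real layouts live only in the owning shop's copybooks:

// Toy "scattered" proprietary record layout: one wire buffer, multiple
// views, with field bytes deliberately placed all over the record.
#include <cstdint>

#pragma pack(push, 1)
struct RawRecord {                 // the 64 bytes that actually go on the wire
    uint8_t bytes[64];
};

struct ScatteredView {             // the same 64 bytes, viewed as scattered fields
    uint8_t  acct_hi;              // offset 0:  high byte of account number
    uint8_t  name_rev[9];          // offsets 1-9: name, reversed and translated
    uint8_t  filler1[21];
    uint8_t  acct_lo;              // offset 31: low byte of account number
    uint8_t  amount_mid;           // offset 32: middle byte of a 3-byte amount
    uint8_t  filler2[14];
    uint8_t  amount_hi;            // offset 47
    uint8_t  amount_lo;            // offset 48
    uint8_t  filler3[15];
};

union RecordBuffer {               // map both views onto a single buffer
    RawRecord     raw;
    ScatteredView fields;
};
#pragma pack(pop)

static_assert(sizeof(ScatteredView) == sizeof(RawRecord),
              "views must overlay the same 64-byte buffer");

An application that knows the layout overlays the union on the receive buffer and reads the fields directly. Anyone else just sees 64 bytes of apparently random binary, even after they have "decrypted" the packet correctly.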


At any rate, I've started writing some of this up in a blog series. (Well, one post right now, but more are coming.) I did order a 6TB WD Black from NewEgg today. If time allows before I head off for my next contract, I'll spend a few days coding up the fingerprint generator, mostly to see whether the database fits on a 6TB drive and how long it takes to create using limited resources. I might use 2-3 machines, or I may just leave it on this desktop.
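
For what it's worth, the generator I have in mind is nothing fancy; roughly this shape, with encrypt_with() as a placeholder for whatever crypto library ends up doing the actual work, and the 16-byte width as a guess:

#include <cstddef>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

constexpr std::size_t kFingerprintBytes = 16;        // assumed window width
const std::string kKnownPrefix = "<?xml version=";   // the tell-tale plaintext

// Placeholder only: encrypt `plaintext` with `algo` under `key`.
// A real build would plug an actual crypto library in here.
std::vector<uint8_t> encrypt_with(const std::string& algo,
                                  const std::vector<uint8_t>& key,
                                  const std::string& plaintext);

// For every key+algorithm pair, store the first kFingerprintBytes of the
// ciphertext of the known prefix. That file *is* the fingerprint database.
void generate_fingerprints(const std::vector<std::string>& algos,
                           const std::vector<std::vector<uint8_t>>& keys,
                           const std::string& out_path)
{
    std::ofstream out(out_path, std::ios::binary);
    for (const auto& algo : algos)
        for (const auto& key : keys) {
            std::vector<uint8_t> ct = encrypt_with(algo, key, kKnownPrefix);
            ct.resize(kFingerprintBytes);
            out.write(reinterpret_cast<const char*>(ct.data()),
                      static_cast<std::streamsize>(ct.size()));
        }
}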

If you take nothing else away from this discussion, take these points:

1) You don't have to crack the encryption if you can find a fingerprint, and that scan isn't a lot of work.

2) Given the size of botnets which have already been discovered, fingerprint databases can be generated quickly.

3) Phone app and Web developers should never use XML, JSON or any other "free text" standard, nor should they write systems which expect to pull it off the IP stack in that format. If they do, then they are the security breach. If the exact same key+algorithm is used for an entire session, a hacker only needs to identify a single fingerprint to decrypt every packet. (The sketch right after this list shows how trivial the automated "did I crack it?" check is.)
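
On point 3, here is the sketch I mentioned: the sort of automated "did we hit plaintext?" check that makes text formats such a gift to the attacker. The regular expressions are illustrative only:

#include <regex>
#include <string>

// Returns true when a decryption attempt looks like readable XML or JSON,
// i.e. a matching <tag>...</tag> pair or a quoted "key" : "value" pair.
// A brute-force or fingerprint loop calls this on every candidate and
// stops the moment it returns true.
bool looks_like_cracked_plaintext(const std::string& candidate)
{
    static const std::regex xml_pair(R"(<([A-Za-z][A-Za-z0-9_.-]*)>[^<]*</\1>)");
    static const std::regex json_pair(R"("[^"]+"\s*:\s*"[^"]*")");
    return std::regex_search(candidate, xml_pair) ||
           std::regex_search(candidate, json_pair);
}

A dozen lines of code, and the loop knows exactly when to stop and notify its owner of success.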

--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

