Re: [Interest] TLS/SSL XML encryption security

2019-10-19 Thread Giuseppe D'Angelo via Interest

On 19/10/19 14:35, Roland Hughes wrote:

Actually it is of immense interest and value to the hundreds, perhaps
thousands of Qt developers currently working on autonomous vehicles and
attempting to integrate that with current infotainment systems. What's
of little to no value are discussions of QML, JavaScript and iDiot Phone
apps.


How about you just ignore such discussions, and maybe also stop 
insulting people working in those markets?




What we really need here are two interest lists.

qt-interest-qml-javascript-and-idiot-phones

qt-interest-things-which-actually-matter


What we need is fewer completely delusional and off-topic threads.



On the latter is where Qt can be discussed along with the architecture
required for embedded systems which are either making human lives better
or will have human lives entrusted to them.

Surgical robots and other medical devices


Which are being built with QML:


https://www.qt.io/medec-built-with-qt





Train/rail control systems


Which are being built with QML:


https://www.qt.io/ulstein-built-with-qt






autonomous vehicles and infotainment systems


Which are being built with QML:


https://resources.qt.io/customer-stories-automotive/qt-testimonial-mbition-michael-chayka





scientific test equipment, water purification control systems and all
other manner of embedded control systems.


Which are being built with QML:


https://www.qt.io/bosch-built-with-qt





The only time QML and/or JavaScript ever appears in that universe is
when management is incompetent beyond description.

STOP INSULTING PEOPLE.

--
Giuseppe D'Angelo | giuseppe.dang...@kdab.com | Senior Software Engineer
KDAB (France) S.A.S., a KDAB Group company
Tel. France +33 (0)4 90 84 08 53, http://www.kdab.com
KDAB - The Qt, C++ and OpenGL Experts



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-19 Thread Roland Hughes


On 10/18/19 8:45 AM, Rolf Winter wrote:

This is not really about Qt anymore and overall has little to no value
in itself. Could you move this discussion somewhere else please.


Actually it is of immense interest and value to the hundreds, perhaps 
thousands of Qt developers currently working on autonomous vehicles and 
attempting to integrate that with current infotainment systems. What's 
of little to no value are discussions of QML, JavaScript and iDiot Phone 
apps.


What we really need here are two interest lists.

qt-interest-qml-javascript-and-idiot-phones

qt-interest-things-which-actually-matter

On the latter is where Qt can be discussed along with the architecture 
required for embedded systems which are either making human lives better 
or will have human lives entrusted to them.


Surgical robots and other medical devices

Train/rail control systems

autonomous vehicles and infotainment systems

scientific test equipment, water purification control systems and all 
other manner of embedded control systems.


The only time QML and/or JavaScript ever appears in that universe is 
when management is incompetent beyond description.


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-18 Thread Rolf Winter
This is not really about Qt anymore and overall has little to no value
in itself. Could you move this discussion somewhere else please.

On Fri, Oct 18, 2019 at 2:46 PM Roland Hughes wrote:
>
>
> On 10/17/19 4:48 PM, Matthew Woehlke wrote:
> > On 17/10/2019 09.56, Roland Hughes wrote:
> >> This presents the perfect challenge. Once "The Mother Road" it is now
> >> difficult to navigate having many turns, stops and 30 MPH stretches.
> >> Most importantly there are huge sections without cellular/wireless
> >> coverage. Some sections satellite coverage doesn't work. The vehicle
> >> will have to retain all of the knowledge it needs for the trip because
> >> updates will be sparse.
> > I think you overestimate the difficulty of doing this. My ten year old
> > car has maps of the entire US in onboard memory. IIRC it fits on a
> > single DVD. Yes, this is now 10 years out of date, and doesn't include
> > things like speed limits, but I doubt we're talking about an amount of
> > data that can't fit on a single SSD. The car knows where it is from a
> > combination of GPS, inertial guidance, and the assumption that it is on
> > a road. Combine this with the car *knowing* what it is trying to do and
> > being able to actually "see" the road and street signs, and you have a
> > system that should be able to navigate at least as well as a human under
> > most conditions. This isn't guessing, it's experience... based on
> > technology that was close to mainstream *ten years ago*.
>
> Not really no.
>
> https://www.foxnews.com/story/couple-stuck-in-oregon-snow-for-3-days-after-gps-leads-them-astray
>
> While I understand your position and experience mine has been
> significantly different. Just a few years ago I was heading out to
> Clive, IA. I dutifully updated my Garmin. Before leaving my yard I tried
> to set the destination. According to Garmin, Clive, IA (a suburb of Des
> Moines) did not exist. Could not find it by zipcode or name. Could not
> even find the hotel I was staying at. I had to drive to Des Moines and
> "wing it." I have a 16Gig SSD in there and Garmin is pretty good about
> letting you know when an update won't fit.
>
> When I got to Clive and found the hotel I saw the water tower for Clive
> which had to be at least 60 years old. The hotel seemed even older.
> Adding insult to injury the following morning when I was about to pull
> out of the parking lot Garmin actually showed me the street I was on.
>
> Driving out to Oregon over one Thanksgiving I got to a spot in the
> mountains where it seemed only the AM radio got signals. No cell
> service. Even the Satellite stuff didn't seem to work. I heard the
> Interstate was closed due to the snow and a "mega load" being stuck on a
> bridge and unable to climb the next icy rise. I found out the Interstate
> was now closed via the big orange gate across it as I came over a rise.
>
> Driving to Chicago the evening before meeting with clients I have
> numerous times gotten to enjoy Garmin (my model also receives the FM
> signals) trying to route me up a ramp which was subject to nightly
> rolling closures as workers stripe/resurface. Few things more enjoyable
> than encountering that late at night while having the nav system
> continually try to re-route you back to said closed ramp.
>
> >
> > BTW, I believe Google Navigation has already solved the "retain the data
> > you need for the whole trip" problem. Combine this with some form of
> > version control system so that the vehicle can frequently download
> > updates for its entire operational area, and I just don't see how
> > "spotty network coverage" is going to be an issue. (Maybe for someone
> > who *lives* in an area with no coverage. Well, such people may just not
> > be able to use autonomous vehicles. I doubt that is going to deter the
> > folks working on AV's.)
> Where this flies apart is the innocent sounding phrase "operational
> area." The once a year family road trip to visit some part of America
> (or insert country here if your culture has annual road trips) now
> defines a new operational area which will require an update right when
> all connective services have disappeared.
> >
> > Yes, situations will come up that it can't handle, at which point it
> > will have to get the human involved. Until we have something approaching
> > "real AI", that will be the case.
>
> Yeah . . . GM isn't going to give you a steering wheel.
>
> https://www.wired.com/story/gm-cruise-self-driving-car-launch-2019/
>
> While I applaud them and hope the laws change so anyone with one of
> these vehicles without a steering wheel can get falling down drunk in a
> bar, pour themselves into the front seat and slur "Home Jeeves" without
> getting a DUI, I also live in mortal fear of such a system built by the
> lowest cost labor GM can find. The billing rates I hear from the
> recruiters reaching out to me about Qt contracts to work on this stuff
> in Michigan at the Big 3 locations are _NOT_ bringing in seasoned pros.

Re: [Interest] TLS/SSL XML encryption security

2019-10-18 Thread Roland Hughes


On 10/17/19 4:48 PM, Matthew Woehlke wrote:

On 17/10/2019 09.56, Roland Hughes wrote:

This presents the perfect challenge. Once "The Mother Road" it is now
difficult to navigate having many turns, stops and 30 MPH stretches.
Most importantly there are huge sections without cellular/wireless
coverage. Some sections satellite coverage doesn't work. The vehicle
will have to retain all of the knowledge it needs for the trip because
updates will be sparse.

I think you overestimate the difficulty of doing this. My ten year old
car has maps of the entire US in onboard memory. IIRC it fits on a
single DVD. Yes, this is now 10 years out of date, and doesn't include
things like speed limits, but I doubt we're talking about an amount of
data that can't fit on a single SSD. The car knows where it is from a
combination of GPS, inertial guidance, and the assumption that it is on
a road. Combine this with the car *knowing* what it is trying to do and
being able to actually "see" the road and street signs, and you have a
system that should be able to navigate at least as well as a human under
most conditions. This isn't guessing, it's experience... based on
technology that was close to mainstream *ten years ago*.


Not really no.

https://www.foxnews.com/story/couple-stuck-in-oregon-snow-for-3-days-after-gps-leads-them-astray

While I understand your position and experience mine has been 
significantly different. Just a few years ago I was heading out to 
Clive, IA. I dutifully updated my Garmin. Before leaving my yard I tried 
to set the destination. According to Garmin, Clive, IA (a suburb of Des 
Moines) did not exist. Could not find it by zipcode or name. Could not 
even find the hotel I was staying at. I had to drive to Des Moines and 
"wing it." I have a 16Gig SSD in there and Garmin is pretty good about 
letting you know when an update won't fit.


When I got to Clive and found the hotel I saw the water tower for Clive 
which had to be at least 60 years old. The hotel seemed even older. 
Adding insult to injury the following morning when I was about to pull 
out of the parking lot Garmin actually showed me the street I was on.


Driving out to Oregon over one Thanksgiving I got to a spot in the 
mountains where it seemed only the AM radio got signals. No cell 
service. Even the Satellite stuff didn't seem to work. I heard the 
Interstate was closed due to the snow and a "mega load" being stuck on a 
bridge and unable to climb the next icy rise. I found out the Interstate 
was now closed via the big orange gate across it as I came over a rise.


Driving to Chicago the evening before meeting with clients I have 
numerous times gotten to enjoy Garmin (my model also receives the FM 
signals) trying to route me up a ramp which was subject to nightly 
rolling closures as workers stripe/resurface. Few things more enjoyable 
than encountering that late at night while having the nav system 
continually try to re-route you back to said closed ramp.




BTW, I believe Google Navigation has already solved the "retain the data
you need for the whole trip" problem. Combine this with some form of
version control system so that the vehicle can frequently download
updates for its entire operational area, and I just don't see how
"spotty network coverage" is going to be an issue. (Maybe for someone
who *lives* in an area with no coverage. Well, such people may just not
be able to use autonomous vehicles. I doubt that is going to deter the
folks working on AV's.)
Where this flies apart is the innocent sounding phrase "operational 
area." The once a year family road trip to visit some part of America 
(or insert country here if your culture has annual road trips) now 
defines a new operational area which will require an update right when 
all connective services have disappeared.


Yes, situations will come up that it can't handle, at which point it
will have to get the human involved. Until we have something approaching
"real AI", that will be the case.


Yeah . . . GM isn't going to give you a steering wheel.

https://www.wired.com/story/gm-cruise-self-driving-car-launch-2019/

While I applaud them and hope the laws change so anyone with one of 
these vehicles without a steering wheel can get falling down drunk in a 
bar, pour themselves into the front seat and slur "Home Jeeves" without 
getting a DUI, I also live in mortal fear of such a system built by the 
lowest cost labor GM can find. The billing rates I hear from the 
recruiters reaching out to me about Qt contracts to work on this stuff 
in Michigan at the Big 3 locations are _NOT_ bringing in seasoned pros.


Oh. Poke around on the Web. There is a passable video from Microsoft 
automotive division (the division without any customers because Ford 
fired them over the shit job they did on Sync). It talks about the 
volume of sensor readings they are currently getting per second and 
stuffing into an on-board SQL server. That is just straightforward 
motion without navig

Re: [Interest] TLS/SSL XML encryption security

2019-10-17 Thread Matthew Woehlke
On 17/10/2019 09.56, Roland Hughes wrote:
> This presents the perfect challenge. Once "The Mother Road" it is now
> difficult to navigate having many turns, stops and 30 MPH stretches.
> Most importantly there are huge sections without cellular/wireless
> coverage. Some sections satellite coverage doesn't work. The vehicle
> will have to retain all of the knowledge it needs for the trip because
> updates will be sparse.

I think you overestimate the difficulty of doing this. My ten year old
car has maps of the entire US in onboard memory. IIRC it fits on a
single DVD. Yes, this is now 10 years out of date, and doesn't include
things like speed limits, but I doubt we're talking about an amount of
data that can't fit on a single SSD. The car knows where it is from a
combination of GPS, inertial guidance, and the assumption that it is on
a road. Combine this with the car *knowing* what it is trying to do and
being able to actually "see" the road and street signs, and you have a
system that should be able to navigate at least as well as a human under
most conditions. This isn't guessing, it's experience... based on
technology that was close to mainstream *ten years ago*.

BTW, I believe Google Navigation has already solved the "retain the data
you need for the whole trip" problem. Combine this with some form of
version control system so that the vehicle can frequently download
updates for its entire operational area, and I just don't see how
"spotty network coverage" is going to be an issue. (Maybe for someone
who *lives* in an area with no coverage. Well, such people may just not
be able to use autonomous vehicles. I doubt that is going to deter the
folks working on AV's.)

Yes, situations will come up that it can't handle, at which point it
will have to get the human involved. Until we have something approaching
"real AI", that will be the case.

That said, I like your viability test :-).

-- 
Matthew
___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-17 Thread Christian Kandeler
On Thu, 17 Oct 2019 08:56:41 -0500
Roland Hughes  wrote:

> On this particular topic I won't respond again. 

Thank you.


Christian
___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-17 Thread Roland Hughes


On 10/9/19 5:00 AM, Thiago Macieira wrote:

On Tuesday, 8 October 2019 09:26:19 PDT Roland Hughes wrote:


A non-broken random generator will produce 2^128 possibilities in 128
bits. You CANNOT compare fast enough

Does not matter because it has nothing to do with how this works. Not the
best, not the worst, just a set it and forget it automated kind of
thing. It's taking roughly 8 bytes out of the packet and doing a keyed
hit on the database. If found, great! If not, it slides the window down
one byte and performs a new 8 byte keyed hit.

First of all, you don't understand how modern cryptography works. AES, for
example, is usually encrypted in blocks of 16 bytes. You can't slide down one
byte. You can only slide down by block granularity.


I understand plenty Thiago. While I may not barn dance anymore, this is 
a long way from my first rodeo. This isn't a lark either. It came 
through proper channels. I will work on it. The last request of this 
type coming through those channels turned out to be the IP Ghoster 
project where Bill Gates' next door neighbor happily paid $250/hr for as 
many hours as I could give him June through October of 2012. Bill Gates 
told them the part I was working on couldn't be done just like you are 
telling me this cannot be done. Guess what? It got done and became the 
foundation for multiple products.


On this particular topic I won't respond again. Those who issue such 
requests also lurk here and while I can write a couple more blog posts 
on it, no further open discussion about what we are exploring. Sounds 
like they found something and want me to independently confirm.




That doesn't mean you can decode it. The chance of a random match is still
infinitesimal.


It's a long way from infinitesimal.




56TB - ~9lbs
https://www.bhphotovideo.com/c/product/1466481-REG/owc_other_world_computing_owctb2sre56_0s_56tb_thunderbay_4_raid.html/specs

4 kg / 56 TB  = 71.4 picograms/byte. That's actually pretty good.


I hate Wikipedia. You can never find the same thing twice. Somewhere on 
there is a page about all of the current drive storage technologies and 
they discuss the 4-bit stacking used by some drive manufacturers for the 
mega drives. This drive didn't do that. I was also looking for the 
synthetic molecule with 9 electrons article someone sent me. It was 
something which I believe fell out of the synthetic motor oil research. 
They could read and write the electrons just fine but were having 
difficulty keeping it stuck to a spinning platter.


On your road to becoming an architect you have to learn one thing. Don't 
focus on the minutia.


We have hundreds, possibly thousands of Qt consultants and developers 
taking low paying contracts in Michigan, California and a few other 
states working on "autonomous vehicles." None of these vehicles can be 
truly viable until the storage problem is solved. Oh sure, that fleet of 
tractor trailers which runs 10 miles down an Interstate from factory to 
warehouse is doable. Shouldn't need more than 4TB total storage for 
everything it needs to know. You can even forcibly put 
antenna/transmitters down the length of it to ensure 100% wireless 
coverage so the trucks can remain relatively simple applications themselves.


https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

Uber will keep using AGILE and mowing people down because they refuse to 
create the 4 Holy Documents up front, but others are doing the 
development correctly. Really smart people have been working on this 
storage problem since 2013 or before.


First off, definition of VIABLE:

The vehicle must be able to make 3 round trips within a year along the full 
length of Route 66.


https://www.historic66.com/

The first must be during the peak of summer storm/tornado season.

The second must be during deer season.

The third must be during the winter after snow has begun falling in both 
Chicago and the mountains.


This presents the perfect challenge. Once "The Mother Road" it is now 
difficult to navigate having many turns, stops and 30 MPH stretches. 
Most importantly there are huge sections without cellular/wireless 
coverage. Some sections satellite coverage doesn't work. The vehicle 
will have to retain all of the knowledge it needs for the trip because 
updates will be sparse.


The smart people working on such things know this is the test. They have 
architected for it. Deer season also provides the perfect randomization. 
That's when deer are running scared and most likely to total out a ride.


Really smart people have been working on this for a long time. We are less 
than 5 (most likely 3) years from having seemingly limitless data storage.


IBM single atom data storage - 2017

https://techcrunch.com/2017/03/08/storing-data-in-a-single-atom-proved-possible-by-ibm-researchers/

If it dampens your enthusiasm somewhat that they’re thinking of looking 
into molecules rather than single atoms for more practical setups of 
this idea, do

Re: [Interest] TLS/SSL XML encryption security

2019-10-08 Thread Thiago Macieira
On Tuesday, 8 October 2019 09:26:19 PDT Roland Hughes wrote:
> > That DOES work with keys produced by OpenSSL that was affected by the
> > Debian bug you described. That's because the bug caused the problem space
> > to be extremely restricted. You said 32768 (2^15) possibilities.
> 
> Unless the key range 2^15 has been physically blocked from the
> generation algorithm, the database created for that still works ~ 100%
> of the time when the random key falls in that range. The percentage
> would depend on how many Salts were used for generation or them having
> created the unicorn, a perfectly functioning desalinization routine.

Sure, but that database containing 2^15 entries is no better than any other 
database with 2^15 entries generated randomly. The chances of getting a hit 
are just as infinitesimal as with the original table, except for software 
still using the broken OpenSSL version.

> > A non-broken random generator will produce 2^128  possibilities in 128
> > bits. You CANNOT compare fast enough
> 
> Does not matter because it has nothing to do with how this works. Not the
> best, not the worst, just a set it and forget it automated kind of
> thing. It's taking roughly 8 bytes out of the packet and doing a keyed
> hit on the database. If found, great! If not, it slides the window down
> one byte and performs a new 8 byte keyed hit.

First of all, you don't understand how modern cryptography works. AES, for 
example, is usually encrypted in blocks of 16 bytes. You can't slide down one 
byte. You can only slide down by block granularity. 

Second, it seems you don't understand how modern cryptography works. The 
latter blocks depend on the state from previous ones. So even if you knew the 
key being used, unless you had the entire traffic from the beginning, you 
couldn't decode it. A random match in the middle of a transmission won't get 
you a decode, it has to be at a known state on both sides.
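
A minimal, self-contained C++ sketch of that chaining property (using a toy
stand-in for the block cipher, not AES or TLS itself, purely to illustrate why
ciphertext block i cannot be processed without the state carried over from
block i-1):

    #include <array>
    #include <cstdint>
    #include <vector>

    using Block = std::array<std::uint8_t, 16>;

    // Toy 16-byte "cipher" standing in for a real block cipher (illustration only).
    static Block toyEncrypt(const Block &key, Block p)
    {
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = static_cast<std::uint8_t>((p[i] ^ key[i]) + 1);
        return p;
    }

    // CBC-style chaining: C_i = E_K(P_i XOR C_{i-1}), with C_0 = IV.
    std::vector<Block> cbcEncrypt(const Block &key, const Block &iv,
                                  const std::vector<Block> &plaintext)
    {
        std::vector<Block> ciphertext;
        Block prev = iv;
        for (Block p : plaintext) {
            for (std::size_t i = 0; i < p.size(); ++i)
                p[i] ^= prev[i];            // mix in the previous ciphertext block
            prev = toyEncrypt(key, p);      // encrypt the mixed block
            ciphertext.push_back(prev);
        }
        return ciphertext;
    }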

Third, it seems you don't understand how fast modern computers are (or, 
rather, how fast they *aren't*). You CANNOT scan a meaningful fraction of the 
2^128 space within the current lifetime of the universe, with computers that 
we have today or are likely to have in the next decade.
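
A rough order-of-magnitude check of that claim, assuming an extremely generous
rate of 10^18 key trials per second (an assumption for illustration, well
beyond any real machine doing full decryption attempts):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double keyspace = std::ldexp(1.0, 128); // 2^128, about 3.4e38
        const double rate = 1e18;                     // assumed trials per second
        const double years = keyspace / rate / 3.156e7;
        // Prints roughly 1.1e13: about ten trillion years for the full space.
        std::printf("%.2e years to enumerate 2^128 keys\n", years);
        return 0;
    }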

The only way your attack works is against 15-year-old ciphers, things like 
3DES or RC4. There's a reason they are deprecated and disabled in all modern 
OpenSSL versions. There may be people out there running old versions and not 
caring or not knowing that they are insecure. Security requires paying 
attention to security disclosures and keeping your software up-to-date.

> > So it can happen. But the chance that it does happen and that the captured
> > packet contains critical information is infinitesimal.
> 
> When you are targeting a DNS address which has the sole purpose of
> providing CC authorization requests and responding to them, 100% of the
> packets contain critical information. Even the denials are important
> because you want to store that information in a different database. If
> you ever compromise any of those cards, sell them on the Dark Web cheap
> because they are unreliable.

Ok, I will grant you that if you choose your victim well, the chances that the 
intercepted packet contains critical information is big.

That doesn't mean you can decode it. The chance of a random match is still 
infinitesimal.

> > Crackers don't attack the strongest part of the TLS model, which is the
> > encryption. They attack the people and the side-channels.
> 
> Kids do.

Yeah, because they have no clue what they're doing. They have no hope of 
cracking proper security this way.

> Nah, this isn't a lease storage space type of attack. If they are well
> funded and willing to risk their own computer room with a rack they will
> get one or more of these.

I just used the public AWS numbers as a benchmark, since they are well-known 
and I could do math with them. I have no clue how much it costs to run a DC 
for 2 exabytes of storage. Just the power bill will be huge.

> They will start out with an HP or other SFF desktop and a 6+TB drive.

An 8 TB drive is 2^43 bytes. That means it can store 2^39 16-byte entries, 
assuming no overhead. We're talking about a 2^128 problem space: that's 2^89 
times bigger. 618,970,019,642,690,137,449,562,112 times bigger.

Even one trillionth of that is still 618,970,019,642,690.1 times bigger.

> 56TB - ~9lbs
> https://www.bhphotovideo.com/c/product/1466481-REG/owc_other_world_computing
> _owctb2sre56_0s_56tb_thunderbay_4_raid.html/specs

4 kg / 56 TB  = 71.4 picograms/byte. That's actually pretty good.

> In order to plan against attack one has to profile just who the attacker
> is and what motivates them. For the large "corporate" organizations, you
> are correct. They are looking to score a billion dollars per week and
> aren't interested in a slow walk. The patient individuals who aren't
> looking to "get rich quick" are much more difficult to defend against.
> The really stupid ones get 

Re: [Interest] TLS/SSL XML encryption security

2019-10-08 Thread Roland Hughes


On 10/8/19 5:00 AM, Thiago Macieira wrote:

On Monday, 7 October 2019 18:08:27 PDT Roland Hughes wrote:

There was a time when a Gig of storage would occupy multiple floors of
the Sears Tower and the paper weight was unreal.

Have you ever heard of Claude Shannon?

Nope.

Anyway, you can't get more data into storage than there are possible states of
matter. As far as our *physics* knows, you could maybe store a byte per
electron. That would weigh 5 billion tons to store 16 * 2^128 bytes.


The same physics which, when incorrectly applied, "proves" bumblebees cannot fly?

https://www.snopes.com/fact-check/bumblebees-cant-fly/

What I really loved was the science text my generation had in 4th grade 
which taught kids meat naturally contained maggots. Scientists had 
"proven" if you just left meat out maggots would magically grow from it.


https://www.google.com/search?client=ubuntu&hs=2Tv&channel=fs&ei=CbicXZO3GJCo_QaUrJCoBQ&q=spontaneous+meat+naturally+contained+maggots&oq=spontaneous+meat+naturally+contained+maggots&gs_l=psy-ab.3...15501.20615..21681...1.2..0.164.1803.0j13..01..gws-wiz...0i71.mkiA8iHPvYk&ved=0ahUKEwjT37W5iY3lAhUQVN8KHRQWBFUQ4dUDCAo&uact=5



>
How about you do some math before spouting nonsense?


Considering and attempting to prove nonsense is what is required when 
you are at the architect level. At the Chicago Stock Exchange, when they 
were running PDP machines, they wanted to use 2 machines to run the 
trading floor with process-shared memory between them. Digital 
Equipment Corporation, makers of the PDP and its operating system, told 
them it was nonsense, couldn't be done. They did it. Ported it to the 
VAX (completely different hardware and OS), the Alpha ("same" OS, 
different hardware) and the Godforsaken Itanium.


At Navistar (though it wasn't named Navistar then) they wanted the IBM 
order receiving system to directly send orders to the VMS based order 
processing/inventory management/picking ticket system. Both DEC and IBM 
told them it was complete nonsense, couldn't be done. We did it. Long 
before RJE was talked about.




At any rate, enough rows in the DB to achieve a 1% penetration rate
gives them 10,000 compromised credit cards via an automated process. A
tenth of a percent is 1,000. Not a bad haul.

Sure. How many entries in the DB do you need to generate a 0.1% hit rate?

I don't know how to calculate that, so I'm going to guess that you need one
trillionth of the total space for that.


Depends on what you find when testing and probing. Some were richly 
rewarded with the Debian bug limiting keys to a range of 32768. If the 
current OpenSSL library isn't blocking keys below 32769, the database 
and tools created to exploit that weakness still work for any key in 
that range.


If there is a ToD (time-of-day) sensitivity in the random generator 
(there shouldn't be, but on this Debian system it looks like there might 
be), then one can dramatically reduce the DB size needed and narrow the 
target range to all traffic within a window.



I don't doubt that there are hackers that have dedicated DCs to cracking
credit card processor traffic they may have managed to intercept. But they are
not doing that by attacking the encryption.
Some are and some aren't. The fact so many deny the possibility is the 
reason.


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-08 Thread Roland Hughes


On 10/8/19 5:00 AM, Thiago Macieira wrote:

On Monday, 7 October 2019 15:43:21 PDT Roland Hughes wrote:

No. This technique is cracking without cracking. You are looking for a
fingerprint. That fingerprint is the opening string for an xml document
which must be there per the standard. For JSON it is the quote and colon
stuff mentioned earlier. You take however many bytes from the logged
packet as the key size the current thread is processing and perform a
keyed hit against the database. If found, great! If not, shuffle down
one byte and try again. Repeat until you've exceeded the attempt count
you are willing to make or found a match. When you find a match you try
key or key+salt combination on the entire thing. Pass the output to
something which checks for seemingly valid XML/JSON then either declare
victory or defeat.

That DOES work with keys produced by OpenSSL that was affected by the Debian
bug you described. That's because the bug caused the problem space to be
extremely restricted. You said 32768 (2^15) possibilities.
Unless the key range 2^15 has been physically blocked from the 
generation algorithm, the database created for that still works ~ 100% 
of the time when the random key falls in that range. The percentage 
would depend on how many Salts were used for generation or them having 
created the unicorn, a perfectly functioning desalinization routine.
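
For what the table-building step looks like under that assumption of a tiny
effective keyspace (the 2^15 one left by the Debian bug), here is a toy sketch.
The key derivation and "cipher" below are invented stand-ins for illustration,
not OpenSSL, and the fingerprint is simply the known XML prolog:

    #include <QByteArray>
    #include <QHash>

    // Invented stand-in: derive a 16-byte key from a 15-bit seed (illustration only).
    static QByteArray toyKeyFromSeed(quint16 seed)
    {
        QByteArray key(16, 0);
        for (int i = 0; i < key.size(); ++i)
            key[i] = char((seed >> (i % 15)) ^ (0x5A + i));
        return key;
    }

    // Invented stand-in "cipher": repeating-key XOR (illustration only, not AES).
    static QByteArray toyEncrypt(const QByteArray &key, QByteArray data)
    {
        for (int i = 0; i < data.size(); ++i)
            data[i] = char(data[i] ^ key[i % key.size()]);
        return data;
    }

    // Build the table once: encrypted fingerprint prefix -> seed that produced it.
    QHash<QByteArray, quint16> buildFingerprintTable()
    {
        const QByteArray fingerprint("<?xml version=");   // known plaintext
        QHash<QByteArray, quint16> table;
        for (quint16 seed = 0; seed < 32768; ++seed)
            table.insert(toyEncrypt(toyKeyFromSeed(seed), fingerprint).left(8), seed);
        return table;
    }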


A non-broken random generator will produce 2^128  possibilities in 128 bits.
You CANNOT compare fast enough


Does not matter because it has nothing to do with how this works. Not the 
best, not the worst, just a set it and forget it automated kind of 
thing. It's taking roughly 8 bytes out of the packet and doing a keyed 
hit on the database. If found, great! If not, it slides the window down 
one byte and performs a new 8 byte keyed hit.


This is *NOT* a real-time attack. Everything is independent. The sniffer 
wakes up once per day and checks how much space is left in one or more 
directories; if there is room for more packets, it reaches out and 
sniffs a few more. Either way, it goes back to sleep for a day.





These attacks aren't designed for 100% capture/penetration. The workers
are continually adding new rows to the database table(s). The sniffed
packets which were not successfully decrypted can basically be stored
until you decrypt them or your drives fail or that you decide any
packets more than N-weeks old will be purged.

You seem to be arguing for brute-force attacks until one gets lucky. That is
possible. But the chances of being lucky in finding a key are probably worse
than winning $1 billion in the lottery. Much worse.
Not really, but I haven't had time to write this stuff because people 
keep interrupting me with direct mails. There are several things I want 
to deep dive on first, one of which is poking at some desalinization 
routines. The other, which really shouldn't exist because such a thing 
would take us back to the 1970s, is that the few things I ran made it 
seem like the Salt had a ToD sensitivity.


So it can happen. But the chance that it does happen and that the captured
packet contains critical information is infinitesimal.
When you are targeting a DNS address which has the sole purpose of 
providing CC authorization requests and responding to them, 100% of the 
packets contain critical information. Even the denials are important 
because you want to store that information in a different database. If 
you ever compromise any of those cards, sell them on the Dark Web cheap 
because they are unreliable.

The success rate of such an attack improves over time because the
database gets larger by the hour. Rate of growth depends on how many
machines are feeding it. Really insidious outfits would sniff a little
from a bunch of CC or mortgage or whatever processing services,
spreading out the damage so standard track back techniques wouldn't
work. The only thing the victims would have in common is that they used
a credit card or applied for a mortgage but they aren't all from the
same place.

Sure, it improves, but probably slowly. This procedure is limited by computing
power, storage and the operating costs. Breaking encryption by brute-force
like you're suggesting is unlikely to produce a profit: it'll cost more than
the gain once cracked.

Crackers don't attack the strongest part of the TLS model, which is the
encryption. They attack the people and the side-channels.

Kids do.

If correct that means they (the nefarious people) could have started
their botnets, or just local machines, building such a database by some
time in 2011 if they were interested. That's 8+ years. They don't_need_
100% coverage.

No, they don't need 100% coverage. But they need coverage such that the
probability of matching is sufficient that it'll pay the operating costs. In 8
years, assuming 1 billion combinations generated every second, we're talking
about 242 quadrillion combinations generated. Assuming 64 bits per entry and
no overhead, that's

Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On Monday, 7 October 2019 18:08:27 PDT Roland Hughes wrote:
> There was a time when a Gig of storage would occupy multiple floors of
> the Sears Tower and the paper weight was unreal.

Have you ever heard of Claude Shannon?

Anyway, you can't get more data into storage than there are possible states of 
matter. As far as our *physics* knows, you could maybe store a byte per 
electron. That would weigh 5 billion tons to store 16 * 2^128 bytes.

We have absolutely no clue how to have that many electrons in one place 
without protons and without violating the Pauli Exclusion principle.

> According to this undated (I *hate* that!) BBC Science article at some
> point in time Google, Amazon, Microsoft and Facebook combined had 1.2
> million terabytes of storage. By your calculations, shouldn't putting
> that much storage on one coast have shifted the planet's orbit?

How about you do some math before spouting nonsense?

1.2 million terabytes is 2^60 bytes. Which is NOWHERE NEAR the mass I talked 
about for 2^132 bytes. At the estimate I used of 21 ng/byte, the total is only 
25200 metric tonnes.

> As I said, the hackers don't need the entire thing. If they are sniffing
> a CC processor handling a million transactions per day (not unreasonable
> especially during back-to-school, on Saturday or during holiday shopping
> season)
> 
> https://www.statista.com/statistics/261327/number-of-per-card-credit-card-tr
> ansactions-worldwide-by-brand-as-of-2011/
> 
> At any rate, enough rows in the DB to achieve a 1% penetration rate
> gives them 10,000 compromised credit cards via an automated process. A
> tenth of a percent is 1,000. Not a bad haul.

Sure. How many entries in the DB do you need to generate a 0.1% hit rate?

I don't know how to calculate that, so I'm going to guess that you need one 
trillionth of the total space for that.

One trillionth of 2^128 possibilities is roughly 2^88. Times 16 bytes per 
entry, with no overhead, we have 2^92 bytes. Times 1 picogram per byte is 5 
billion tons. More importantly, 2^92 bytes is orders of magnitude more storage 
than exists today. The NSA Datacentre in Utah is estimated to handle 12 
exabytes, so let's estimate the total storage in existence today is 100 
exabytes. That's 50 million times too little to store one trillionth of the 
problem space.

I don't doubt that there are hackers that have dedicated DCs to cracking 
credit card processor traffic they may have managed to intercept. But they are 
not doing that by attacking the encryption.

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On Monday, 7 October 2019 15:43:21 PDT Roland Hughes wrote:
> No. This technique is cracking without cracking. You are looking for a
> fingerprint. That fingerprint is the opening string for an xml document
> which must be there per the standard. For JSON it is the quote and colon
> stuff mentioned earlier. You take however many bytes from the logged
> packet as the key size the current thread is processing and perform a
> keyed hit against the database. If found, great! If not, shuffle down
> one byte and try again. Repeat until you've exceeded the attempt count
> you are willing to make or found a match. When you find a match you try
> key or key+salt combination on the entire thing. Pass the output to
> something which checks for seemingly valid XML/JSON then either declare
> victory or defeat.

That DOES work with keys produced by OpenSSL that was affected by the Debian 
bug you described. That's because the bug caused the problem space to be 
extremely restricted. You said 32768 (2^15) possibilities.

A non-broken random generator will produce 2^128 possibilities in 128 bits. 
You CANNOT compare fast enough

> These attacks aren't designed for 100% capture/penetration. The workers
> are continually adding new rows to the database table(s). The sniffed
> packets which were not successfully decrypted can basically be stored
> until you decrypt them or your drives fail or that you decide any
> packets more than N-weeks old will be purged.

You seem to be arguing for brute-force attacks until one gets lucky. That is 
possible. But the chances of being lucky in finding a key are probably worse 
than winning $1 billion in the lottery. Much worse.

So it can happen. But the chance that it does happen and that the captured 
packet contains critical information is infinitesimal.

> The success rate of such an attack improves over time because the
> database gets larger by the hour. Rate of growth depends on how many
> machines are feeding it. Really insidious outfits would sniff a little
> from a bunch of CC or mortgage or whatever processing services,
> spreading out the damage so standard track back techniques wouldn't
> work. The only thing the victims would have in common is that they used
> a credit card or applied for a mortgage but they aren't all from the
> same place.

Sure, it improves, but probably slowly. This procedure is limited by computing 
power, storage and the operating costs. Breaking encryption by brute-force 
like you're suggesting is unlikely to produce a profit: it'll cost more than 
the gain once cracked.

Crackers don't attack the strongest part of the TLS model, which is the 
encryption. They attack the people and the side-channels.

> If correct that means they (the nefarious people) could have started
> their botnets, or just local machines, building such a database by some
> time in 2011 if they were interested. That's 8+ years. They don't _need_
> 100% coverage.

No, they don't need 100% coverage. But they need coverage such that the 
probability of matching is sufficient that it'll pay the operating costs. In 8 
years, assuming 1 billion combinations generated every second, we're talking 
about 242 quadrillion combinations generated. Assuming 64 bits per entry and 
no overhead, that's 2 exabytes to store. Current cost of Amazon S3 Infrequent 
Access storage is 1¢/GB, so it would cost $20M per month.

And that amounts to 1.3% of the problem space. Which is why we don't use 64-
bit keys.

If we talk about 128-bit keys, it's $40M per month for a coverage rate of 7.5 
* 10^(-22). You'll reach 1 part per billion coverage in 10 trillion years, 
assuming constant processing power.

Calculating assuming an infinite Moore's Law is left as an exercise for the 
reader.

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products



___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 6:21 PM, Thiago Macieira wrote:

On Monday, 7 October 2019 07:06:07 PDT Roland Hughes wrote:

We now
have the storage and computing power available to create the database or
2^128  database tables if needed.

Do you know how ludicrous this statement is?

Let's say you had 128 bits for each of the 2^128  entries, with no overhead,
and each bit weighed 1 picogram (a 8 GB RAM DIMM weighs 185 g, which is 21 ng/
byte). You'll need a storage of 4.3 * 10^25  kg, or about 7.3 times the mass of
the Earth.

Let's say that creating such a table takes an average of 1 attosecond per
entry, or one million entries per nanosecond. Note I'm saying your farm is
producing 10^18  entries per second, reaching at least 1 exaflops, producing
about 16 exabytes per second of data. You'll need 10 trillion years to
calculate.

The only way this is possible is if you significantly break the problem such
that you don't need 2^128  entries. For example, 2^80  entries would weigh
"only" 155 million tons and that's only 16 yottabytes of storage, taking only
14 days to run in that magic[*] farm, with magic connectivity and magic
storage.

[*] After applying Clarke's Third Law.


LOL,

Glad I could help you vent!

There was a time when a Gig of storage would occupy multiple floors of 
the Sears Tower and the paper weight was unreal.


This Gorilla 128GB USB 3 thumb drive weighs almost exactly the same as 
the Lexar 32GB 2.0 thumb drive (I didn't put them on the scale, just 
hand balanced) yet one holds 4 times the other. They both appear to 
weigh less than this LS-120 Super Floppy which only holds 120MEG.


The 6TB drive which just arrived I did put on the postage scale and it 
weighed 22 ounces.  According to this link the 12 TB weighs 1.46 lbs. or 
almost the same, just a skooch over 23 ounces. The new 15TB is 660 grams


https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/data-center-drives/ultrastar-dc-hc600-series/data-sheet-ultrastar-dc-hc620.pdf

This 60TB RAID array does tip the scales at 22lbs though.

https://www.polylinecorp.com/lacie-60tb-6big-6-bay-thunderbolt-3-raid-array.html

I cannot find a weight for the Nimbus 100TB

https://nimbusdata.com/products/exadrive-platform/advantages/

According to this undated (I *hate* that!) BBC Science article at some 
point in time Google, Amazon, Microsoft and Facebook combined had 1.2 
million terabytes of storage. By your calculations, shouldn't putting 
that much storage on one coast have shifted the planet's orbit?


As I said, the hackers don't need the entire thing. If they are sniffing 
a CC processor handling a million transactions per day (not unreasonable 
especially during back-to-school, on Saturday or during holiday shopping 
season)


https://www.statista.com/statistics/261327/number-of-per-card-credit-card-transactions-worldwide-by-brand-as-of-2011/

At any rate, enough rows in the DB to achieve a 1% penetration rate 
gives them 10,000 compromised credit cards via an automated process. A 
tenth of a percent is 1,000. Not a bad haul.


Please keep in mind that what they need is the architecture and a 
functional sampling. They don't need everything to achieve that.



--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 6:21 PM, Thiago Macieira wrote:

On Monday, 7 October 2019 05:31:17 PDT Roland Hughes wrote:

Let us not forget we are at the end of the x86 era when it comes to what
evil-doers will use to generate a fingerprint database, or brute force
crack.

https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break
-2048-bit-rsa-encryption-in-8-hours/

[Now Gidney and Ekerå have shown how a quantum computer could do the
calculation with just 20 million qubits. Indeed, they show that such a
device would take just eight hours to complete the calculation.  “[As a
result], the worst case estimate of how many qubits will be needed to
factor 2048 bit RSA integers has dropped nearly two orders of
magnitude,” they say.]

Oh, only 20 million qubits? That's good to know, because current quantum
computers have something like 100 or 200.

Not 100 million qubits, 100 qubits.


Kids these days!

When I started in IT a Gigabyte wasn't even conceivable. The term 
Terabyte hadn't even been created so it was beyond science fiction.




Yes, I know that Shor's Theorem says it could solve the prime multiplication
that is in the core of RSA and many other public key encryption mechanisms in
O(1) time. But no one has ever proven the Theorem and put it into practice,
yet.

And there are all the quantum-resistant algorithms, some of which are already
deployed (like AES), some of which are in development.

A bullet resistant vest is resistant until someone builds a better bullet.



While there are those here claiming 128-bit and 256-bit are
"uncrackable" people with money long since moved to 2048-bit because 128
and 256 are the new 64-bit encryption levels. They know that an entity
wanting to decrypt their sniffed packets doesn't need the complete
database, just a few fingerprints which work relatively reliably. They
won't get everything, but they might get the critical stuff.

You're confusing algorithms. RSA asymmetric encryption today requires more
than 1024 bits, 2048 recommended, 4096 even better. AES is symmetric
encryption and requires nowhere near that much, 128 is sufficient, 256 is very
good. Elliptic curves are also asymmetric and require much less than 1024
bits.
No, I wasn't, but sorry for causing confusion. I didn't mean OpenSource 
or published standard when I said "people with money." Just skip that.



Haven't you noticed a pattern over the decades?

X-bit encryption would take a "super computer" (never actually
identifying which one) N-years running flat out to crack.

A few years later

Y-bit encryption would take a "super computer" (never actually
identifying which one) N-years running flat out to crack (without any
mention of why they were/are wrong about X-bit).

Oh! You wanted "Why?" Sorry.
Again, you're deliberately misleading people here. The supercomputers *are* 
identified. And the fact that technology progresses is no surprise. It's 
*expected* and accounted for. That's why the number of bits in most ciphers is 
increasing, that's why older ciphers are completely dropped, that's why we're 
getting new ones and new versions of TLS.
You know. I have *never* heard them identified. The Y-bit encryption is 
what I hear each and every time someone spouts off about how secure 
something is. They never identify the machine and they never under any 
circumstances admit that the very first combination tried at "random" 
just might succeed. The calculation/estimate *always* assumes it is the 
last possible entry which will decrypt the packet and that such a feat 
will *always* be the case.


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog

___
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes
From the haze of the smoke. And the mescaline. - The Airborne Toxic 
Event "Wishing Well"


On 10/7/19 3:46 PM, Matthew Woehlke wrote:

On 04/10/2019 20.17, Roland Hughes wrote:



Even if all of that stuff has been fixed, you have to be absolutely
certain the encryption method you choose doesn't leave its own tell-tale
fingerprint. Some used to have visible oddities in the output when they
encrypted groups of contiguous spaces, nulls, etc. Plus, there are quite
a few places like these showing up on-line.

Again, though, it seems like there ought to be ways to mitigate this. If
I can test for successful decryption without decrypting the *entire*
message, that is clear grounds for improvement.


Sorry for having to invert part of this but the answer to this part 
should make the rest clearer.


I've never once interjected the concept of partial decryption. Someone 
else tossed that Red Herring into the soup. It has no place in the 
conversation.


The concept here is encrypting a short string which is a "fingerprint" 
known to exist in the target data, over and over again, with different 
keys and, in the case of some methods, salts as well. These get recorded 
into a database. If the encrypted message is in a QByteArray, you use a 
walking window down the first N bytes, performing keyed hits to find a 
matching sequence; when one is found you generally know what was used, 
sans a birthday collision.


Some people like to call these "Rainbow Tables" but I don't. This is a 
standard Big Data problem solving technique.


As for the nested encryption issue, we never did root cause analysis. We 
encountered some repeatable issues and moved on. It could have had 
something to do with the Debian bug where a maintainer "fixed" some 
Valgrind messages by limiting the keys to 32768. We were testing 
transmissions across architectures and I seem to remember it only broke 
in one direction. Long time ago. Used a lot of Chardonnay to purge those 
memories.



On 10/3/19 5:00 AM, Matthew Woehlke wrote:

On 01/10/2019 20.47, Roland Hughes wrote:

To really secure transmitted data, you cannot use an open standard which
has readily identifiable fields. Companies needing great security are
moving to proprietary record layouts containing binary data. Not a
"classic" record layout with contiguous fields, but a scattered layout
placing single field bytes all over the place. For the "free text"
portions like name and address not only in reverse byte order, but
performing a translate under mask first. Object Oriented languages have
a bit of trouble operating in this world but older 3GLs where one can
have multiple record types/structures mapped to a single buffer (think a
union of packed structures in C) can process this data rather quickly.
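
(A purely illustrative sketch of the "union of packed structures mapped to a
single buffer" idea quoted above; the field layout is invented, not any real
record format:)

    #include <cstdint>

    #pragma pack(push, 1)
    struct TypeARecord {                 // one proprietary layout (invented)
        std::uint8_t  recordType;        // e.g. always 'A'
        char          nameReversed[20];  // free text stored in reverse byte order
        std::uint8_t  amountBytes[4];    // field bytes can be scattered anywhere
        std::uint8_t  filler[7];
    };
    struct TypeBRecord {                 // a second layout over the same bytes
        std::uint8_t  recordType;        // e.g. always 'B'
        std::uint8_t  accountBytes[8];
        char          cityReversed[16];
        std::uint8_t  filler[7];
    };
    union RecordBuffer {                 // both views share one 32-byte buffer
        std::uint8_t  raw[32];
        TypeARecord   a;
        TypeBRecord   b;
    };
    #pragma pack(pop)

    // 3GL-style processing: inspect the type byte, then use the matching
    // overlay directly, with no copying or field-by-field parsing.
    inline char recordTag(const RecordBuffer &buf) { return char(buf.raw[0]); }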

How is this not just "security through obscurity"? That's almost
universally regarded as equivalent to "no security at all". If you're
going to claim that this is suddenly not the case, you'd best have
some *really* impressive evidence to back it up. Put differently, how
is this different from just throwing another layer of
encry^Wenciphering on your data and calling it a day?

_ALL_ electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.

Your "secrecy" is the key+algorithm combination. When that secret is
learned you are no longer secure. People lull themselves into a false
sense of security regurgitating another Urban Legend.

Well... sure, if you want to get pedantic. However, as I see it, there
are two key differences:

- "Encryption" tries to make it computationally hard to decode a message.

- "Encryption" (ideally) uses a different key for each user, if not each
message, such that compromising one message doesn't compromise the
entire protocol. (Okay, granted this isn't really true for SSL/TLS
unless you are also using client certificates.)

Thanks for agreeing.


...and anyway, I think you are undermining your own argument; if it's
easy to break "strong encryption", wouldn't it be much *easier* to break
what amounts to a basic scramble cipher?


No. This technique is cracking without cracking. You are looking for a 
fingerprint. That fingerprint is the opening string for an xml document 
which must be there per the standard. For JSON it is the quote and colon 
stuff mentioned earlier. You take however many bytes from the logged 
packet as the key size the current thread is processing and perform a 
keyed hit against the database. If found, great! If not, shuffle down 
one byte and try again. Repeat until you've exceeded the attempt count 
you are willing to make or found a match. When you find a match you try 
key or key+salt combination on the entire thing. Pass the output to 
something which checks for seemingly valid XML/JSON then either declare 
victory or defeat.
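
A minimal sketch of that walking-window lookup, assuming a precomputed table
mapping an 8-byte encrypted fingerprint to the key (or key+salt) that produced
it; the window size, attempt budget and table contents are illustrative
assumptions, not a description of any real tooling:

    #include <QByteArray>
    #include <QHash>

    // Returns the candidate key for the first window that scores a keyed hit,
    // or an empty QByteArray once the attempt budget is exhausted ("defeat").
    QByteArray findCandidateKey(const QByteArray &packet,
                                const QHash<QByteArray, QByteArray> &fingerprintToKey,
                                int windowSize = 8,
                                int maxAttempts = 64)
    {
        const int limit = qMin(maxAttempts, packet.size() - windowSize + 1);
        for (int offset = 0; offset < limit; ++offset) {
            const QByteArray window = packet.mid(offset, windowSize);
            const auto it = fingerprintToKey.constFind(window);   // keyed hit
            if (it != fingerprintToKey.constEnd())
                return it.value();   // caller then tries a full decrypt + XML/JSON check
        }
        return QByteArray();         // no fingerprint found in the budgeted windows
    }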


If the fingerprint isn't in the data, you cannot use this technique. You 
can't, generally, just Base64 your XML/JSON prior to sending it out 
because they usually create tables 

Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Matthew Woehlke
On 04/10/2019 20.17, Roland Hughes wrote:
> On 10/3/19 5:00 AM, Matthew Woehlke wrote:
>> On 01/10/2019 20.47, Roland Hughes wrote:
>>> To really secure transmitted data, you cannot use an open standard which
>>> has readily identifiable fields. Companies needing great security are
>>> moving to proprietary record layouts containing binary data. Not a
>>> "classic" record layout with contiguous fields, but a scattered layout
>>> placing single field bytes all over the place. For the "free text"
>>> portions like name and address not only in reverse byte order, but
>>> performing a translate under mask first. Object Oriented languages have
>>> a bit of trouble operating in this world but older 3GLs where one can
>>> have multiple record types/structures mapped to a single buffer (think a
>>> union of packed structures in C) can process this data rather quickly.
>>
>> How is this not just "security through obscurity"? That's almost
>> universally regarded as equivalent to "no security at all". If you're
>> going to claim that this is suddenly not the case, you'd best have
>> some *really* impressive evidence to back it up. Put differently, how
>> is this different from just throwing another layer of
>> encry^Wenciphering on your data and calling it a day? 
>
> _ALL_ electronic encryption is security by obscurity.
> 
> Take a moment and let that sink in because it is fact.
> 
> Your "secrecy" is the key+algorithm combination. When that secret is
> learned you are no longer secure. People lull themselves into a false
> sense of security regurgitating another Urban Legend.

Well... sure, if you want to get pedantic. However, as I see it, there
are two key differences:

- "Encryption" tries to make it computationally hard to decode a message.

- "Encryption" (ideally) uses a different key for each user, if not each
message, such that compromising one message doesn't compromise the
entire protocol. (Okay, granted this isn't really true for SSL/TLS
unless you are also using client certificates.)

...and anyway, I think you are undermining your own argument; if it's
easy to break "strong encryption", wouldn't it be much *easier* to break
what amounts to a basic scramble cipher?

> One of the very nice things about today's dark world is that most are
> script-kiddies. If they firmly believe they have correctly decrypted
> your TLS/SSL packet yet still see garbage, they assume another layer of
> encryption. They haven't been in IT long enough to know anything about
> data striping or ICM (Insert Character under Mask).

So... again, you're proposing that replacing a "hard" (or not, according
to you) problem with an *easier* problem will improve security?

I suppose it might *in the short term*. In the longer term, that seems
like a losing strategy.

> He came up with a set of test cases and sure enough, this system which
> worked fine with simple XML, JSON, email and text files started
> producing corrupted data at the far end with the edge cases.

Well, I would certainly be concerned about an encryption algorithm that
is unable to reproduce its input. That sounds like a recipe guaranteed
to eventually corrupt someone's data.

> Even if all of that stuff has been fixed, you have to be absolutely
> certain the encryption method you choose doesn't leave its own tell-tale
> fingerprint. Some used to have visible oddities in the output when they
> encrypted groups of contiguous spaces, nulls, etc. Plus, there are quite
> a few places like these showing up on-line.

Again, though, it seems like there ought to be ways to mitigate this. If
I can test for successful decryption without decrypting the *entire*
message, that is clear grounds for improvement.

-- 
Matthew


Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On segunda-feira, 7 de outubro de 2019 07:06:07 PDT Roland Hughes wrote:
> We now
> have the storage and computing power available to create the database or
> 2^128 database tables if needed.

Do you know how ludicrous this statement is?

Let's say you had 128 bits for each of the 2^128 entries, with no overhead, 
and each bit weighed 1 picogram (an 8 GB RAM DIMM weighs 185 g, which is 
about 21 ng per byte). You'd need storage massing 4.3 * 10^25 kg, or about 
7.3 times the mass of the Earth.

Let's say that creating such a table takes an average of 1 attosecond per 
entry, or one million entries per nanosecond. Note I'm saying your farm is 
producing 10^18 entries per second, reaching at least 1 exaflops and 
emitting about 16 exabytes of data per second. You'd still need 10 trillion 
years to compute them all.

The only way this is possible is if you significantly break the problem such 
that you don't need 2^128 entries. For example, 2^80 entries would weigh 
"only" 155 million tons and that's only 16 yottabytes of storage, taking only 
14 days to run in that magic[*] farm, with magic connectivity and magic 
storage.

[*] After applying Clarke's Third Law.
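
For anyone who wants to reproduce the arithmetic, a rough sketch using the 
same assumptions (128 bits and 16 bytes per entry, one picogram per stored 
bit, 10^18 entries generated per second); the output is order-of-magnitude 
only:

#include <cmath>
#include <cstdio>

// Rough reproduction of the estimate above for a given key width.
static void estimate(int keyBits)
{
    const long double entries = std::pow(2.0L, keyBits);
    const long double massKg  = entries * 128.0L * 1e-15L;   // 1 pg per stored bit
    const long double bytes   = entries * 16.0L;              // 16 bytes per entry
    const long double years   = entries / 1e18L / 3.156e7L;   // 10^18 entries/s
    std::printf("2^%-3d entries: %.1Le kg (%.1Le Earth masses), %.1Le bytes, %.1Le years\n",
                keyBits, massKg, massKg / 5.97e24L, bytes, years);
}

int main()
{
    estimate(128);  // ~7.3 Earth masses, ~10^13 years
    estimate(80);   // ~1.5e11 kg, ~2e25 bytes, ~0.04 years (about two weeks)
}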
-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products





Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Thiago Macieira
On segunda-feira, 7 de outubro de 2019 05:31:17 PDT Roland Hughes wrote:
> Screaming about the size of the forest one will hide their tree in
> doesn't change the security by obscurity aspect of it. Thumping the desk
> and claiming a forest which is 2^128 * 2^key-bit-width doesn't mean you
> aren't relying on obscurity, especially when they know what tree they
> are looking for.

It's not the usual definition of "security by obscurity". That's usually 
applied to something that is not secure at all, just unknown. Encryption 
algorithms hide nothing in their implementation.

They do hide the key, true. The important thing is that it takes more time to 
brute-force the key than an attacker could reasonably dedicate.

> Let us not forget we are at the end of the x86 era when it comes to what
> evil-doers will use to generate a fingerprint database, or brute force
> crack.
> 
> https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break
> -2048-bit-rsa-encryption-in-8-hours/
> 
> [Now Gidney and Ekerå have shown how a quantum computer could do the
> calculation with just 20 million qubits. Indeed, they show that such a
> device would take just eight hours to complete the calculation.  “[As a
> result], the worst case estimate of how many qubits will be needed to
> factor 2048 bit RSA integers has dropped nearly two orders of
> magnitude,” they say.]

Oh, only 20 million qubits? That's good to know, because current quantum 
computers have something like 100 or 200.

Not 100 million qubits, 100 qubits.

Yes, I know that Shor's algorithm could solve the factoring of the prime 
products at the core of RSA and many other public-key encryption mechanisms 
in polynomial time. But no one has ever run it at that scale in practice, 
yet.

And there are all the quantum-resistant algorithms, some of which are already 
deployed (like AES), some of which are in development.

> While there are those here claiming 128-bit and 256-bit are
> "uncrackable", people with money long since moved to 2048-bit because 128
> and 256 are the new 64-bit encryption levels. They know that an entity
> wanting to decrypt their sniffed packets doesn't need the complete
> database, just a few fingerprints which work relatively reliably. They
> won't get everything, but they might get the critical stuff.

You're confusing algorithms. RSA asymmetric encryption today requires more 
than 1024 bits, 2048 recommended, 4096 even better. AES is symmetric 
encryption and requires nowhere near that much, 128 is sufficient, 256 is very 
good. Elliptic curves are also asymmetric and require much less than 1024 
bits.

> Haven't you noticed a pattern over the decades?
> 
> X-bit encryption would take a "super computer" (never actually
> identifying which one) N-years running flat out to crack.
> 
> A few years later
> 
> Y-bit encryption would take a "super computer" (never actually
> identifying which one) N-years running flat out to crack (without any
> mention of why they were/are wrong about X-bit).
> 
> Oh! You wanted "Why?" Sorry.

Again, you're deliberately misleading people here. The supercomputers *are* 
identified. And the fact that technology progresses is no surprise. It's 
*expected* and accounted for. That's why the number of bits in most ciphers is 
increasing, that's why older ciphers are completely dropped, that's why we're 
getting new ones and new versions of TLS.

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products





Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Henry Skoglund
Yes, I also know about the lizardmen from Phobos who can crack SSL/TLS 
keys instantly.
If you can show some code all this would be much more credible. After 
all, this is a Qt mailing list, not a science fiction one.


Rgrds Henry



Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 5:00 AM, Thiago Macieira wrote:
You do realise that's not how modern encryption works, right? You do realise 
that SSL/TLS rekeys periodically to avoid even a compromised key from going 
further? That's what the "data limit for all ciphersuites" means: rekey 
after a while.

yeah.

You're apparently willfully ignoring the fact that the same cleartext will not
result in the same ciphertext when repeated in the transmission, even between
two rekey events.


No. We are working from two completely different premises. It appears to 
me your premise is they have to be able to decrypt 100% of the packets 
100% of the time. That's not the premise here.


The premise here is they don't need it all to work today. They just need 
to know that a merchant account service receives XML/JSON and responds 
in kind. The transaction is the same for a phone app, a Web site, even 
when you are standing at that mom & pop retailer physically using your 
card. They (whoever they are) sniff packets to/from the IP addresses of 
the service, logging them to disk drives.


The dispatch service hands 100-1000 key combinations out at a time to 
worker computers generating fingerprints for the database. These 
computers could be a botnet, leased from a hosting service or machines 
they own. The receiver service stores the key results in the database.
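
As a rough illustration of that worker step (the cipher below is a 
deliberately trivial stand-in so the sketch stays self-contained and 
runnable; a real worker would run the actual TLS cipher for each 
candidate key):

#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Trivial stand-in so the sketch compiles and runs. NOT a real cipher.
static std::string toyEncryptPrefix(const std::string &prefix, const std::string &key)
{
    std::string out = prefix;
    if (key.empty())
        return out;
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = static_cast<char>(out[i] ^ key[i % key.size()]);
    return out;
}

// A worker takes one batch of candidate keys from the dispatcher and returns
// fingerprint -> key pairs for the receiver service to store in the database.
std::map<std::string, std::string> buildFingerprints(const std::vector<std::string> &keyBatch)
{
    static const std::string knownPrefix = "<?xml version=";
    std::map<std::string, std::string> fingerprintToKey;
    for (const std::string &key : keyBatch)
        fingerprintToKey[toyEncryptPrefix(knownPrefix, key)] = key;
    return fingerprintToKey;
}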


A credit card processing service of sufficient size will go through a 
massive number of salts and keys, especially with the approaching 
holiday shopping season. 1280 bytes "should" be more than enough to 
contain a credit card authorization request, so this scenario is only 
interested in fast cracking a single packet. Yes, the CC number and some 
other information may well have additional obfuscation, but that will 
also be a mechanical process.


Periodically a batch job wakes up and runs the sniffed packets against 
the database looking for matching fingerprints. When it fast cracks one 
it moves it to a different drive/RAID array or storage area for the next 
step. This process goes in stages until they have full CC information 
with a transaction approval, weeding out the declined cards.


When the sniffed packet storage falls below some threshold the sniffer 
portion is reactivated to retrieve more packets.


This entire time workers are adding more and more entries to the 
fingerprint database.


These people don't need them all. They are patient. This process is 
automated. They might even configure it to send an email when another 
100 or 1000 valid CCs turn up, so they can either sell them on the Dark 
Web or send them through the "buying agent" network.


Yeah, "buying agent" network might need a bit of explanation. Some of 
you may have seen those "work from home" scams where they want a 
"shipping consolidation" person to receive items and repackage them into 
bulk packs for overseas (or wherever) shipping. They want a fall person 
to receive the higher end merchandise which they then bulk ship to 
someone who will sell it on eBay/Amazon/etc.


The CC companies constantly scan for "unusual activity" and call you 
when your card has been compromised. This works when the individuals are 
working with limited information. They have the CC information, but they 
don't have the "where you shop" information. The ones which have the 
information about where you routinely use the card can have a better 
informed "buying agent" network and slow bleed the card without tripping 
the fraud alert systems. If you routinely use said card at, say, 
Walmart, 2-3 times per week for purchases of $100-$500, they can make 
one more purchase per week in that price range until you are maxed out 
or start matching up charges with receipts.


The people I get asked to think about are playing a long game. They 
aren't looking to send a crew to Chicago to take out $100 cash advances 
on a million cards bought on the Dark Web or do something like this crew 
did:


https://www.mcall.com/news/watchdog/mc-counterfeit-credit-cards-identity-theft-watchdog-20160625-column.html

Or the guy who just got 8 years for running such a ring in Las Vegas. 
That's the most recent one turning up in a quick search.


Maybe they are looking to do just that, but are looking for more 
information?


At any rate, the "no-breach" scenario is being seriously looked at. Yes, 
the salt will change with every packet and the key might well change 
with every packet, but these players are only looking to crack a subset 
of packets. Most organizations won't have the infrastructure to utilize 
a billion compromised credit cards. They can handle a few hundred to a 
few thousand per month.


In short, they don't need _everything_. They just need enough to get 
that much



And don't forget the Initialisation Vector. Even if you could compute the
fingerprint database, you still need to multiply it by 2^128  to account for
all possible IVs.


Perhaps. A few of those won't be used, such as low-values and 
high-values. That also assumes none of t

Re: [Interest] TLS/SSL XML encryption security

2019-10-07 Thread Roland Hughes


On 10/7/19 5:00 AM, Konrad Rosenbaum wrote:

Hi,

On 10/5/19 2:17 AM, Roland Hughes wrote:

_ALL_  electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.

Okay, out with it! What secret service are you working for and why are
you trying to sell everybody on bullshit that weakens our collective
security?


SCNR, Konrad


LOL,

Konrad,

I haven't had any active clearance in a very long time, assuming nobody 
was lying during those projects early in my career.


This is a world of big data: Infobright, OrientDB, Riak, etc. Open source 
and massive, some with data compression up to 40:1. That's assuming you 
don't scope your attacks to the 32 TB single-table limit of PostgreSQL. 
And there are botnets available to evil-doers with sizes in the millions.


Screaming about the size of the forest one will hide their tree in 
doesn't change the security by obscurity aspect of it. Thumping the desk 
and claiming a forest which is 2^128 * 2^key-bit-width doesn't mean you 
aren't relying on obscurity, especially when they know what tree they 
are looking for.


Removing the tree is how one has to proceed.

Let us not forget we are at the end of the x86 era when it comes to what 
evil-doers will use to generate a fingerprint database, or brute force 
crack.


https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break-2048-bit-rsa-encryption-in-8-hours/

[Now Gidney and Ekerå have shown how a quantum computer could do the 
calculation with just 20 million qubits. Indeed, they show that such a 
device would take just eight hours to complete the calculation.  “[As a 
result], the worst case estimate of how many qubits will be needed to 
factor 2048 bit RSA integers has dropped nearly two orders of 
magnitude,” they say.]


While there are those here claiming 128-bit and 256-bit are 
"uncrackable", people with money long since moved to 2048-bit because 128 
and 256 are the new 64-bit encryption levels. They know that an entity 
wanting to decrypt their sniffed packets doesn't need the complete 
database, just a few fingerprints which work relatively reliably. They 
won't get everything, but they might get the critical stuff.


Haven't you noticed a pattern over the decades?

X-bit encryption would take a "super computer" (never actually 
identifying which one) N-years running flat out to crack.


A few years later

Y-bit encryption would take a "super computer" (never actually 
identifying which one) N-years running flat out to crack (without any 
mention of why they were/are wrong about X-bit).


Oh! You wanted "Why?" Sorry.

I get this list in digest form. Most of the time I don't read it. Only a 
tiny fraction of my life revolves around Qt and small systems. This 
whole security thing came up in another part of my world, then I 
actually read something here.


*nix did it wrong. No application should be allowed to open its own 
TCP/IP or network connection. No application should have any knowledge 
of transport layer security, certificates or anything else. Unisys and a 
few other "big systems" platforms are baking into their OS a Network 
Software Appliance. This allows system managers to dynamically change 
transport layer communications protocols on a whim. Not just transport 
layer security, but what network is in use, even non-TCP based things 
like Token Ring, DECNet, left-handed-monkey-wrench, etc.
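
For illustration, the application side of such an appliance could shrink 
to something like the sketch below, assuming a hypothetical local 
endpoint named "netappliance" owned by the OS-managed appliance process:

#include <QByteArray>
#include <QLocalSocket>
#include <QString>

// The application hands an opaque record to a local endpoint and never
// touches TCP/IP, TLS, certificates or ciphers itself; the system-managed
// appliance process behind "netappliance" (a made-up name) owns the transport.
bool submitRecord(const QByteArray &record)
{
    QLocalSocket socket;
    socket.connectToServer(QStringLiteral("netappliance"));
    if (!socket.waitForConnected(1000))
        return false;                          // appliance not running

    socket.write(record);
    return socket.waitForBytesWritten(1000);   // fire-and-forget for brevity
}

Which transport, which network and which security protocol actually carry 
the bytes becomes the appliance's problem, changeable by the system 
manager without touching the application.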


All of that is well and good. It's how things should have been done to 
start with.


The fly in the ointment is developers using "human interpretable" data 
formats for transmission. Moving to a non-IP network (meaning not 
running a different protocol on top of IP, but running a completely 
different network protocol on machines which don't even have the IP 
stack software installed) can buy you a lot, but if you are a high-value 
target and that network runs between data centers, someone will 
eventually find a way to tap into it.


Even if that is not your point of penetration, some people/developers 
store this human-readable stuff on disk. My God, CouchDB actually stores 
JSON! Yeah, that's how you want to see someone storing a mass quantity 
of CC information along with answers to security questions and mother's 
maiden name.


My having to ponder all of this is how we got here.


--
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog



Re: [Interest] TLS/SSL XML encryption security

2019-10-06 Thread Konrad Rosenbaum

Hi,

On 10/5/19 2:17 AM, Roland Hughes wrote:

_ALL_ electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.



Okay, out with it! What secret service are you working for and why are 
you trying to sell everybody on bullshit that weakens our collective 
security?



SCNR, Konrad



Re: [Interest] TLS/SSL XML encryption security

2019-10-05 Thread Giuseppe D'Angelo via Interest

Il 05/10/19 02:17, Roland Hughes ha scritto:

Sorry, I need to invert the quoted message so answers make sense.

On 10/3/19 5:00 AM, Matthew Woehlke wrote:

On 01/10/2019 20.47, Roland Hughes wrote:


If they targeted something which uses XML documents to communicate, they
don't need to brute force attempt everything, just the first 120 or so
bytes of each packet until they find the one which returns


That seems like a flaw in the encryption algorithm. It seems like
there ought to be a way to make it so that you can't decrypt only part
of a message. Even an initial, reversible step such as XOR-permuting
the message with some well-known image of itself (e.g. "reversed")
might suffice?


Of course! Everyone in charge of security at Google, Amazon, Apple, 
Facebook, Microsoft is a complete moron and didn't think of this 
already, as they're happily sending plain XML and JSON from their servers!


Or maybe it has to do with the fact that modern encryption algorithms 
are designed to be resilient against these attacks, so there is no point 
in obfuscating the data you're sending? I'm really not sure. I'll go 
with "everyone else is a complete moron".




Not a flaw in the algorithm, just seems to be a flaw in the
communications. This isn't partially decrypting a packet. It is
encrypting every possible combination of key+algo supported by TLS/SSL
into a fingerprint database. You then use a sliding window of the
fingerprint size performing keyed hits against the fingerprint
database. You "dust for prints."


Sure; there are (at most) ~10^80 =~ 2^266 atoms in the observable 
universe. So you need roughly ALL THE MATTER IN THE UNIVERSE to store 
every possible combination of 256 bit keys+algorithms into a fingerprint 
database.




To really secure transmitted data, you cannot use an open standard which
has readily identifiable fields. Companies needing great security are
moving to proprietary record layouts containing binary data. Not a
"classic" record layout with contiguous fields, but a scattered layout
placing single field bytes all over the place. For the "free text"
portions like name and address not only in reverse byte order, but
performing a translate under mask first. Object Oriented languages have
a bit of trouble operating in this world but older 3GLs where one can
have multiple record types/structures mapped to a single buffer (think a
union of packed structures in C) can process this data rather quickly.

How is this not just "security through obscurity"? That's almost
universally regarded as equivalent to "no security at all". If you're
going to claim that this is suddenly not the case, you'd best have
some *really* impressive evidence to back it up. Put differently, how
is this different from just throwing another layer of
encry^Wenciphering on your data and calling it a day?


It's not. It's security by obscurity. I'll grant it may be a legitimate 
use of obfuscation, which of course doesn't work ALONE -- it works when 
the rest of your stack is also secure. And in the case of TLS/SSL, the 
rest of the stack is secure WITHOUT using security by obscurity.




Well, first we have to shred some marketing fraud which has been in
existence for a very long time.

https://en.wikipedia.org/wiki/Security_through_obscurity

"Security through obscurity (or security by obscurity) is the reliance
in security engineering on design or implementation secrecy as the main
method of providing security to a system or component."

I wonder if Gartner was paid to market this fraud. They've certainly
marketed some whoppers in their day. Back in the 90s declaring Microsoft
Windows an "open" platform when it was one of the most proprietary
systems on the market. Can't believe nobody went to prison over that.

At any rate the peddlers of encryption have been spewing this line. In
fact this line is much truer than the peddlers of encryption wish to
admit. When you press them on it they are forced to perform a "Look at
the Grouse" routine.

https://www.youtube.com/watch?v=493jZunIooI

_ALL_ electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.


"Let that sink in" is the official "I've just told you a very very 
appealing lie/logical fallacy/... and I don't want to get caught".





Your "secrecy" is the key+algorithm combination. When that secret is
learned you are no longer secure. People lull themselves into a false
sense of security regurgitating another Urban Legend.


*JUST THE KEY*. Every other part of the system (SSL version, key 
derivation algorithms, encryption algorithms, etc.) can be known, and 
the system still be secure. The secret key is an input of the system, 
and NOT part of its design (which is standardized) or the implementation 
(which can be open source and thus examinable), which therefore doesn't 
make it (according to your own quote) security by obscurity.






"It would take a super computer N years running flat out to break this
encryption."

Re: [Interest] TLS/SSL XML encryption security

2019-10-04 Thread Roland Hughes

Sorry, I need to invert the quoted message so answers make sense.

On 10/3/19 5:00 AM, Matthew Woehlke wrote:

On 01/10/2019 20.47, Roland Hughes wrote:
   

If they targeted something which uses XML documents to communicate, they
don't need to brute force attempt everything, just the first 120 or so
bytes of each packet until they find the one which returns

That seems like a flaw in the encryption algorithm. It seems like 
there ought to be a way to make it so that you can't decrypt only part 
of a message. Even an initial, reversible step such as XOR-permuting 
the message with some well-known image of itself (e.g. "reversed") 
might suffice?


Not a flaw in the algorithm, just seems to be a flaw in the 
communications. This isn't partially decrypting a packet. It is 
encrypting every possible combination of key+algo supported by TLS/SSL 
into a fingerprint database. You then use a sliding window of the 
fingerprint size performing keyed hits against the fingerprint 
database. You "dust for prints."



To really secure transmitted data, you cannot use an open standard which
has readily identifiable fields. Companies needing great security are
moving to proprietary record layouts containing binary data. Not a
"classic" record layout with contiguous fields, but a scattered layout
placing single field bytes all over the place. For the "free text"
portions like name and address not only in reverse byte order, but
performing a translate under mask first. Object Oriented languages have
a bit of trouble operating in this world but older 3GLs where one can
have multiple record types/structures mapped to a single buffer (think a
union of packed structures in C) can process this data rather quickly.
How is this not just "security through obscurity"? That's almost 
universally regarded as equivalent to "no security at all". If you're 
going to claim that this is suddenly not the case, you'd best have 
some *really* impressive evidence to back it up. Put differently, how 
is this different from just throwing another layer of 
encry^Wenciphering on your data and calling it a day? 


Well, first we have to shred some marketing fraud which has been in 
existence for a very long time.


https://en.wikipedia.org/wiki/Security_through_obscurity

"Security through obscurity (or security by obscurity) is the reliance 
in security engineering on design or implementation secrecy as the main 
method of providing security to a system or component."


I wonder if Gartner was paid to market this fraud. They've certainly 
marketed some whoppers in their day. Back in the 90s declaring Microsoft 
Windows an "open" platform when it was one of the most proprietary 
systems on the market. Can't believe nobody went to prison over that.


At any rate the peddlers of encryption have been spewing this line. In 
fact this line is much truer than the peddlers of encryption wish to 
admit. When you press them on it they are forced to perform a "Look at 
the Grouse" routine.


https://www.youtube.com/watch?v=493jZunIooI

_ALL_ electronic encryption is security by obscurity.

Take a moment and let that sink in because it is fact.

Your "secrecy" is the key+algorithm combination. When that secret is 
learned you are no longer secure. People lull themselves into a false 
sense of security regurgitating another Urban Legend.


"It would take a super computer N years running flat out to break this 
encryption."


I first heard that uttered when the Commodore SuperPET was on the market.

https://en.wikipedia.org/wiki/Commodore_PET

I believe they were talking about 64-bit encryption then. Perhaps it was 
128-bit? Doesn't matter. IF someone wants to do a brute force attack and 
they have 6-30 million infected computers in their botnet, they can 
crush however many bits you have much sooner than encryption fans are 
willing to believe.


They can easily build fingerprint databases with that much horsepower, 
assuming they buy enough high quality storage. You really need Western 
Digital Black for that if you are driving it as a single or paired 
drive. I haven't seen anyone use a SAN for high speed, high volume data 
collection so I don't know how well those hold up. During my time at CTS 
one of the guys was running high speed data collection tests with a rack 
of pressure and leak testers running automated tests as fast as they 
could. Black would last roughly a year. Blue around 6 months. Red was 
just practicing how to replace a drive.


One of the very nice things about today's dark world is that most are 
script-kiddies. If they firmly believe they have correctly decrypted 
your TLS/SSL packet yet still see garbage, they assume another layer of 
encryption. They haven't been in IT long enough to know anything about 
data striping or ICM (Insert Character under Mask).
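
For anyone who hasn't run into it, a translate-under-mask style scramble 
is nothing more exotic than the toy below. It is a stand-in for the idea 
(reverse byte order plus a reversible per-byte transform), not the actual 
mainframe TR/ICM instructions:

#include <cstddef>
#include <string>

// Toy scramble: reverse the byte order, then transform each byte under a
// mask, as described for the free-text fields earlier in the thread.
// Reversible by applying the same steps in the opposite order.
std::string translateUnderMask(const std::string &field, unsigned char mask)
{
    std::string out(field.rbegin(), field.rend());    // reverse byte order
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = static_cast<char>(out[i] ^ mask);     // per-byte transform
    return out;
}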




If you are using XML, JSON or any of the other trendy text based
open standards for data exchange, you've made it easy for the hackers.
They don't have to put any human noodling into de