Heartbleed: The CloudFlare Key Challenge

In early attempts to extract private key material (which, if leaked, immediately voids your TLS certificate) from a Heartbleed-compromised server, researchers were unable to do so, and pronounced it an unlikely occurrence.

Shooting from the hip is fairly dangerous, because shortly after that the claim was put to a crowd challenge at CloudFlare, and voilà, quite a few people figured out how to exploit it.

Here’s one that worked.

I thought it was interesting that trolls were sending fake keys in actual Heartbleed traffic, keys that at first glance would plausibly look real if one didn't do the math. Doing the actual PKI math, however, separated the genuine key from the fakes.
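As a toy illustration of that math (textbook-sized numbers here, nothing like real 2048-bit keys): a submitted RSA private key is genuine only if its prime factor actually divides the modulus in the server's certificate. A fake that merely looks like a key fails this one-line check.

```python
# Toy illustration: a leaked RSA prime is real iff it divides the
# certificate's public modulus n. Plausible-looking fakes fail the test.

def is_genuine_factor(candidate_prime, modulus):
    """True iff the submitted prime actually factors the certificate modulus."""
    return 1 < candidate_prime < modulus and modulus % candidate_prime == 0

# Tiny textbook RSA modulus: n = p * q with p = 61, q = 53
n = 61 * 53  # 3233

print(is_genuine_factor(61, n))  # True  -- real factor, key checks out
print(is_genuine_factor(59, n))  # False -- looks like a prime, isn't a factor
```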

Fixing TLS, Part 4

The Danger of the MITM attack

While the data should be confidential, it is even more important that the data cannot be easily changed in a MITM (man-in-the-middle) attack. Yet we see practices that allow this to happen, and companies that will not put in the effort to fix it because there is no demand from citizens to do so. If you have been hacked at the local coffee shop and had your Facebook account stolen, that's a different matter: you will become a convert.

This is clearly a patient safety issue, but if you actually knew the patient safety error rate and were in the Health Information Technology business, you would not be surprised. People are killed every day by mistakes, and lawsuits are common, so costs go up. Lawyers and insurance companies profit; doctors pay more for insurance, sometimes stop practicing, and have to pass the costs along. Error rates stay steady, despite many attempts at improvement.

One case that was brought to my attention was the identification of items in the Internet of Things. Here is one classic example: keeping track and count of surgical instruments and sponges. Why is this done?

Because they accidentally get left in patients. Seems hard to believe, but it's the case in a pay-per-procedure world. Or a more unusual case: faulty sterilization. One would expect that alongside simply poor practices, since through the centuries many people died from infections spread by surgical instruments, until germ theory was discovered and sterilization instituted. But normal sterilization doesn't work for Mad Cow prions: it does not kill them, so you have to keep track of those instruments. And recently patients were exposed to CJD in a southern U.S. hospital. So it happens. Numbering the instruments and keeping a record of their sterilization history would be a step in the right direction.

The importance of c=US: X.500 NGO national identity, how it can be done with root and other certificates used for TLS and for S/MIME, and other vulnerable areas such as code signing for computer software.

By putting additional validation of trust anchors at the c=US level, beyond that of the FICAM Federal Bridge cross-certificates, the current maintainers of the X.509v3 standard (the ISO brain trust behind TLS) seek to add expertise that helps users and companies make sense of encryption and assures it is being done properly, filling a gap not addressed by groups like FICAM or browser industry forums such as the CA/Browser Forum (CAB).

These forums already play an important role, but there is an additional fourth corner that has been called for: the original model of the X.500 directory applied on a national scale by an NGO. We already have U.S. Government directories, and X.500 applied to the Federal Bridge serves a specific purpose.

The technical deep dive (out of the Matrix now) into reality

Recent research work has turned up serious problems. This is reality, not what your software is telling you. If you can read and understand that paper, you are out of the Matrix.

Sometimes the problem is in the client code, sometimes in the server. Some problems look very much like they were inserted by the NSA; others may just be sloppy programming.

The bottom line is that U.S. companies are now losing money in a post-Snowden world by failing to do the right thing and shore up the trust framework for digital certificates: they create individual trust islands in Aerospace, Pharma, Education, and Energy while failing to support a fairly simple national NGO approach, already present in the standards, which is the point of c=US. They fight to keep people ignorant, and people don't want to be bothered, so they become victims. Companies have an ethical and fiduciary responsibility to do better, but as the 2008 crash indicates, it's not about being ethical.

Worse yet is the emphasis on DNS to fill the gap, which can be manipulated fairly easily from the end user's perspective and has no commercial history as a trusted authority in the delivery of certificates. You can be spoofed, you will be spoofed; it is a standard part of the hacker toolkit.

One problem in particular, bad random numbers, is known to undermine good crypto, and thus there are papers written by NIST on how to generate good random numbers. That did not stop NIST from approving, and only recently rejecting (after a Snowden disclosure), a bad random number generator that was specified for compliance with Federal standards.
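The distinction is easy to see in code. A minimal Python sketch: a deterministic PRNG seeded the same way replays its entire output stream, while the OS-backed cryptographic generator exposed through `secrets` has no seed to recover.

```python
import random
import secrets

# Mersenne Twister is deterministic: the same seed replays the same stream.
# Fine for simulations, fatal for cryptographic key generation.
a = random.Random(1234)
b = random.Random(1234)
stream_a = [a.random() for _ in range(5)]
stream_b = [b.random() for _ in range(5)]
assert stream_a == stream_b   # fully predictable from the seed

# The OS CSPRNG behind `secrets` has no recoverable seed.
key_one = secrets.token_bytes(32)
key_two = secrets.token_bytes(32)
assert key_one != key_two     # repeat calls never replay
print(len(key_one))           # 32 bytes of key material
```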

This is why testing and math trump trust. You probably won't read the paper on Frankencerts, but you want to know that there is a community of people working on the problem. It is complex. It is beyond the scope of users to understand. But it is broken, trust along with it, and a lot of money is being lost.

This is why we need an extra leg of security with X.509 certificates, so that they are manageable by end users while also being curated by companies like Google and Microsoft, and non-profits like Mozilla, in the ecosystem to keep things simple.

Fixing TLS, Part 3

TLS was designed for E-Commerce

TLS negotiation is made as user-friendly as possible by browsers and the web sites that offer encrypted connections. For email, it simply means changing to the encrypted ports instead of the standard ports for SMTP and POP/IMAP.
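A minimal sketch of those port changes in Python's standard library (the mail server name is hypothetical, and the live connection is left commented out; only the context helper is meant to run as-is):

```python
import smtplib
import ssl

# Encrypted-port conventions vs. the plaintext defaults (25, 110, 143):
SMTPS_PORT = 465        # SMTP over TLS from the first byte (implicit TLS)
SUBMISSION_PORT = 587   # connect in plaintext, then upgrade via STARTTLS
IMAPS_PORT = 993
POP3S_PORT = 995

def make_tls_context():
    # The default context verifies the server certificate and hostname.
    return ssl.create_default_context()

# Hypothetical usage against a real mail server:
# with smtplib.SMTP_SSL("mail.example.com", SMTPS_PORT,
#                       context=make_tls_context()) as smtp:
#     smtp.noop()
```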

After reading the Wikipedia article, one should realize that TLS only encrypts the transport layer of the Internet version of the OSI stack, the layer that allows one node on the Internet to communicate with another.

The average user only understands what the application or operating system tells them. They may want to know why a certificate presented to them is invalid when they get the massive red warning notice. Or they may just click through and have their data stolen. But in the examples given, there would have been no warning: a bad certificate would not have woken the software watchdogs, and they would not have barked.

While TLS is very important, it is only part of the entire security stance and may not protect a client and server against an active attack from a hacker. More and more attacks are active and persistent, not just mindless bots.

For this reason it's worth understanding how the smartest brokers on Wall Street were hoodwinked by a high-stakes variation of this game. Ever shop for an airline ticket and then go back five minutes later to find that it is $50 more? How one creates a fair Internet is very important here. It's not the protocols that are the problem, but applying what NIST is supposed to do: not allowing a fat finger on the balance scale.

If that TLS layer is broken, so is the security of the entire email message. Or website. That is why we should specify other forms of encryption that encrypt the messages themselves, regardless of whether the transport is also encrypted. This is why one needs S/MIME, which uses a personal certificate rather than a website SSL certificate.

It is more expensive for web sites to offer an encrypted transport connection, since it uses more processing, is a bit slower, and one has to buy a certificate. Yet over the past few years it has become the norm.

The details of all of this have been kept in the background, and despite having an encrypted connection, banks and the like still need to identify you as a customer, because you don't have your own personal X.509v3 client authentication certificate and Directory entry, which would identify you as the other end of the connection.

There are multiple other web-based methods at different levels of assurance to do that: other trust frameworks that might rely on Google to be your identity provider, a second factor of authentication, a biometric, etc. It does not really matter; they all perform the same function, uniquely identifying you to a relying party with a manageable degree of certainty. The major difference is that the Web does not maintain what is called "state". It is the Alzheimer's version of state. So to get state, it transfers it, as with REST. Or it starts at a known part of the state machine and uses tokens, like the movie "Memento", to remember who you are. Cookies. Those cookies are persistent and (in the case of the NSA) build a profile, so they can always send malware to your machine in an automated fashion if desired.
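A minimal sketch of that token trick using Python's standard library (the function names and the dict-backed store are invented for illustration): the cookie carries only an opaque token, and the "state" HTTP itself forgot lives on the server.

```python
import secrets
from http.cookies import SimpleCookie

# Server-side session store: all the "memory" lives here, not in the client.
SESSIONS = {}

def issue_session(user):
    """Create server-side state and return the opaque token for the cookie."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user": user}
    return token

def set_cookie_header(token):
    """Build the Set-Cookie header that hands the token to the browser."""
    cookie = SimpleCookie()
    cookie["session"] = token
    cookie["session"]["httponly"] = True  # keep it away from page scripts
    return cookie.output(header="Set-Cookie:")

def whoami(cookie_header):
    """Given the client's 'session=<token>' header, recover who they are."""
    cookie = SimpleCookie(cookie_header)
    if "session" not in cookie:
        return None
    session = SESSIONS.get(cookie["session"].value)
    return session["user"] if session else None

token = issue_session("alice")
print(whoami("session=" + token))   # alice
print(whoami("session=forged"))     # None
```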

So wherever you are, at Starbucks or at home, you get the same "view" of the Internet. Like the traders, it is a constructed reality. The situation changes when you get to the deep Web via Tor.

For free, or $20, you can get a client certificate to encrypt your mail, but it does not prove the message came from you; it only turns on the encryption.

The mail arrives encrypted but with no proof of who sent it. Persona not validated. So now you are a dog that knows how to encrypt, which is clever, but not trustworthy. You want a personal identity certificate that you can use to assert your claim of identity, which will cost more money. And you can revoke it if someone steals your identity. It makes it harder for them to claim to be you, which is fairly simple with a stolen password, or a password plus other stuff.
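The encrypt-versus-sign distinction can be shown with textbook RSA and toy numbers (no padding, nothing like real key sizes, and one keypair playing both roles for brevity): encrypting to someone uses a public key, while proving a message came from you uses the matching private key.

```python
# Toy textbook RSA: n = 61 * 53 = 3233, e = 17, d = 2753.
# For illustration only -- real RSA uses padding and 2048-bit-plus keys.
n, e, d = 3233, 17, 2753

msg = 65

# Anyone can encrypt TO the key holder using the public key (n, e) ...
ciphertext = pow(msg, e, n)
# ... and only the private exponent d decrypts it. No sender identity involved.
assert pow(ciphertext, d, n) == msg

# A signature is the reverse: the key holder applies the private key ...
signature = pow(msg, d, n)
# ... and anyone can verify it with the public key, proving who signed.
assert pow(signature, e, n) == msg
print("encrypts and verifies")
```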

Why should I prove who I am to the Internet?

In short, because otherwise you as an individual  will be profiled to get to the same place.

In fact you are already profiled by cookies in the browser, and browsers regularly violate “do not track” digital privacy policies to do so.

Do you want Facebook to represent you to the Internet? That’s backwards.  Because they own your data you gave them for free. And they sold it. That means you are the product not the customer.

The question is whether you want to maintain control over your identity and independently prove who you are, or what your server is, or that a specific device is a specific device based on Identity, or not use identity at all.

Maybe a probabilistic, knowledge-based approach like LexisNexis will ask questions based on what you purchased on your credit card, and an algorithm will assign a score based on the probability that you provide the right answers.

Or you can simply tell people who you are, and tell them they need to accept that because you can prove it with a high-value X.509v3 certificate. In fact they should accept that and go away, because that is the sine qua non, the gold standard.

Or you can provide no identifiers at all, but the chance of being truly anonymous (a blank) is actually fairly slim. Basically two or three unique data points (age, zip code, etc.) can identify anyone in the U.S. with 95% accuracy, according to the U.S. Government ID management web site. Professional spies have Facebook profiles set up years in advance so they "exist" on the Internet.

That’s because there is a cost of proving you are you to the Internet.  You don’t have to pay directly like with a certificate. But if you don’t identify yourself concretely with a high value certificate that is recognized by almost anyone,  in terms of the Internet trusted community, you are living in a van down by the river letting companies define your identity according to the marketing profiles. Nothing wrong with that for billions of netizens, but kind of fly by night with no real fixed address and corporations that represent you,  already have enough paid lobbyists in Washington compared to citizens.

So are you a rat, using some service that data-mines your friends' email addresses for a social graph, or an active consumer of big-data Twitter sentiment on your product, which is primarily your own branding in social media, i.e. you?

How does one make the world adjust to you on your terms, and not their standard operating rules of service, under which you are not and will never be a VIP on any branded service?

Think of how Oprah does it. She was on television so billions know her dog’s name, and you think you know her.

Does she fly in coach just to hang out with people? No, she has her own jet and does not go through security. She is a VIP by design, which is a protocol. TLS also has VIP and coach.

Her identity is entirely public on public networks. Yet she still enjoys privacy at any one of the many houses she owns and cares to stay at, though she probably is not there.

If you don’t have a personal assistant to deal with people, the web performs that function (using TLS) for you to make appointments, show you mail, and so on.  It is very democratic that way.

But what privacy do you actually have?  It should be clear now that privacy is a fluid, negotiated concept like staying in a very nice hotel.  Expectations are important and need to be met.  If you stay at a cheap hotel, your expectations are different.  Paying nothing and couch surfing might also be more or less private. It is the terms of service that matter.

There is another cost: keeping that data up to date, protecting it, and keeping it accurate for the people you want to have use it. Identity documents are rated in terms of how accurately they represent the subject, at levels of assurance.

The more you want companies to trust that you are you, the more money you have to spend to have them investigate your credentials and prove they are genuine before they create a certificate to present to the other end of the connection, which is called the "relying party" because it relies on the information you present being genuine. Even Caroline Kennedy (whom most people know quite well) still has to present her credentials to the government of Japan as the U.S. Ambassador, per protocol.

In the typical TLS connection over a browser, what one is actually certifying is the DNS name of the server, or group of servers, that answers your request. That's one step toward preventing a fake website. It is a good thing, but not sufficient in some cases.
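A simplified sketch of that check in Python, using the dict shape that `ssl.getpeercert()` returns (real implementations follow the fuller matching rules in RFC 6125; this toy handles only exact names and a leftmost-label wildcard):

```python
# Does the hostname the user typed appear in the certificate's
# subjectAltName DNS entries? That is essentially what a browser certifies.

def hostname_matches(cert, hostname):
    """cert uses the dict shape returned by Python's ssl.getpeercert()."""
    for kind, value in cert.get("subjectAltName", ()):
        if kind != "DNS":
            continue
        if value == hostname:
            return True
        # Leftmost-label wildcard: *.example.com matches www.example.com
        if value.startswith("*.") and hostname.split(".", 1)[1:] == [value[2:]]:
            return True
    return False

# Hypothetical certificate data for illustration.
cert = {"subjectAltName": (("DNS", "example.com"), ("DNS", "*.example.com"))}

print(hostname_matches(cert, "www.example.com"))  # True
print(hostname_matches(cert, "evil.com"))         # False
```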

But they don’t know who you are, except if they start to collect more data on you. So they start collecting data on you.  Unless you present credentials of the Internet kind and you determine what data they should have.

You might be a cat lying on the keyboard and ordering cat toys.

A high-assurance client certificate fixes that because it attests (under penalty of Federal law) that the person using the computer is who they say they are. This is why Federal employees swipe a PIV smartcard to use any computer at work.

But you know your bank (maybe), because they use an identity certificate (which costs more), unless you are being phished or attacked via a man-in-the-middle attack. If you are the target of a continuous, persistent attack instead of a single phish, the situation is far worse, since the fake software you download will keep you imprisoned in a fairly nasty place.

So everything is set up in the Internet and ISO protocols to make sure this does not happen. Except it does happen, because companies don't follow the protocols. And then they lie about it in their privacy policies, claiming they do the right thing. And then they lose millions of dollars like Target when, even though they passed their security audit, they get hacked and customer data is stolen.

Sometimes this happens to an entire country, as at the beginning of the Syrian civil war, which began with the capture of the Facebook and Twitter updates that protesters used to organize rallies. And companies sell products to do this exact thing.

Many companies should be held responsible for using software that put their users' or customers' data at risk by not actively validating the security of their products.

Fixing TLS in applications will have to be enforced via the FTC, so write your Congress people to demand this.

All these companies publish in their privacy policies something to the effect that they "take their customers' personal data very seriously", which is an obvious inducement to trust them, an implied contract of performance.

If they lie about that and actually don't take your privacy seriously, that's false advertising and they can be fined. And it is better if we clean this up right now, with my solution below, because companies are actually bleeding money from a lack of trust in cryptography technology by European and U.S. consumers. And it is not the math, which few people understand, but how it is being applied at a policy and business level, with data either:

1. Leaked to the NSA intentionally as a business arrangement

2. Captured by the NSA because data was not encrypted internally between data centers, a hole recently patched by Google and Yahoo.

3. Or encrypted, but decrypted on the fly through unknown or engineered flaws in the software.

4. Or where standards were actually subverted by the NSA and published by NIST, its agency partner. NIST recently retracted one of these standards, on random number generation.

5. Or where the NSA sold large commercial companies on the idea that while the TLS certificates they created were broken as a matter of course (given the defaults of the toolkit that created them), they were broken in a specific way: with a back door that only the NSA could access. In other words, the key escrow the Internet rejected in 1994.

In fact, any back door, even a good one, is eventually leaked or compromised. Backdoors in 1994 were much simpler and limited to "Internet wizards" who had to keep complicated programs like Sendmail running. But those were phased out.

A very clever cryptographic back door called Clipper, backed by Congress and the Executive, was then introduced and rejected as insecure in the 1990s. What is different in this case is that the NSA engaged partners, and the users supplied the data in exchange for free services like webmail. The company that engineered the backdoor was going to patent it.

Most disturbing is the sole use of TLS in medical records transfer, where patient safety can be compromised by data being altered in transit. There are two areas of concern.

1. Patient records or prescriptions used in day to day medical records

2. Millions of devices in the Internet of Things that operate in hospitals over wireless security (sometimes using WEP authentication that you would not use for your home wireless router).

This was dramatized in the popular television show Homeland, but the actual research was done by a hacker on pacemakers and insulin pumps to improve their security. He has since died and is no longer contributing to the discussion.


Fixing TLS, Part 2

The original design of the Internet as independent nodes

We are not now leveraging the power of the Internet as nodes that communicate directly with each other, with powerful computers on each end. The PC was a huge technological breakthrough; add the Internet and it is even more powerful.

In fact we want both end-to-end communication and web services, and for very specific reasons: we don't necessarily want to Tweet the inside temperature where you live every 10 minutes, just because one can.

But if you live by yourself and go away for a few weeks, and come back to a $600 extra heating bill because the thermostat was set to 80 degrees instead of 62 degrees F just because someone was not tall enough to read the setting (a real-life example, though thankfully not in my house!), it might be a good thing to read the temperature on your smartphone or computer and check remotely, or for stay-at-home elder care. But does that have to go through a web service?

If the wags are right, the average user won't figure this out, because convenience will trump privacy, and so they will pay more and not know why. The methodology and theory behind user-controlled personal data (especially in health care) is only a few years old.

Yet we also have a strong DIY and maker culture. So makers need secure frameworks, not necessarily a bespoke branded cloud service to tell them data points of interest. Do you have to go to a web site to find out what time you ran a marathon, or can they simply mail it to you? Then if you want to paste the results to Facebook, isn't that your own decision on what data to share, rather than having a web services application do it through an API?

It’s not always out and out identity theft, (although that is a severe problem), but sometime it is just skimming. Low intensity skimming of data.  Sometime skimming of money.  And Michael Lewis makes it clear in his book how the smartest brokers on Wall Street fell into this trap and how IEX solved the problem.

Not surprisingly, there is a direct link to 9/11: "straight-through trading" was the very topic under discussion on the morning of 9/11 at the Risk Waters Conference, at the Windows on the World restaurant, when the planes hit the towers.

The result of the terrorist attack was that Wall Street broke down, due to poorly planned fiber wiring for orders, and the process of building data centers in New Jersey (already in progress) was vastly sped up, with faster connections right to the exchanges. So that sets the stage for our discussion of TLS.

That extra speed from moving everything to New Jersey allowed orders to be subject to "front running" controlled by algorithms. In this case the only law in place was the speed of light. Fixing this problem, not allowing a computer to intercept your order and move the market before the order could be executed, meant introducing a delay.

Without an equal execution time, a familiar security flaw is introduced, called a "race condition".
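The same flaw shows up in miniature in any threaded program. A Python sketch: two threads doing read-modify-write on a shared counter, serialized by a lock. Drop the lock and interleavings can silently lose updates, the software analogue of being front-run.

```python
import threading

# Two threads updating shared state. The lock serializes the
# read-modify-write, much as IEX's delay equalizes execution time.
counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:  # remove this lock and the total can come up short
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- deterministic only because of the lock
```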

What is not obvious is that in "shrinking" the country to fit in a box through micro-electronics, we have at the same time affected the outside world it connects to when we use it to control devices.

Rules that apply to security in programming also apply to large networks when the connections to each are blindingly fast.

Thus the high-speed traders bought faster and faster connections, because doing so essentially put them inside the ordinary, honest trader's computer, able to anticipate every major move.

Like the movie Tron. The high-speed traders effectively "bought the future" through higher-speed connections to the ongoing reality of the market; what the other traders saw microseconds later was the high-speed trader's reality, not what had just appeared on their screens in terms of prices.

Their algorithms faked buys and sells to form a pattern visible in market churn, culminating in the famous Flash Crash. They created a market "game" in virtual reality; when it came to actually buying or selling, they were already there, coming from the microsecond future and shaving off a few units, but at huge volume.

Software that kills people

Here we are talking about race conditions in networks, but race conditions in critical software can kill people or expose them to harm, as with the faulty programming in X-ray machines like the Therac-25 that killed patients.

It’s only a matter of time that the public finds out they are subject to the same factors on the Internet as the brokers found out in Michael Lewis’s book.

The story is an old one. Traders used to stand on the balcony of the Philadelphia stock exchange near the Delaware River. When they spotted a ship of goods making its way up the river, they already had the manifest and understood how the surplus of those goods would move the market price, but they did not have the exact timing information for the necessary trade, how to sell high or buy low.

So after "flagging" the ship through the telescope and identifying it, they ran downstairs and sold with the foreknowledge that the price would go down due to plentiful supply, as opposed to the scarcity that might have existed hours before.



Fixing TLS (the backbone of e-commerce encryption) Part 1



The emphasis on the need to improve the security of TLS protocol implementations, which provide encryption in web browsers, cannot be overstated. At the same time, other applications are also at risk.

If you don't understand what role TLS plays in securing e-commerce in web browsers and other software, here is a basic introduction. If that is too complex, you may still get something of value from the rest of this post about how this relates to fair markets versus insider trading, and what the government is doing in response.

The takeaway is that a user does not have to do anything very difficult to make TLS work, if we limit ourselves to web browsers and HTTPS. Almost everything is done on the server side.

The web and web services have a huge attack surface. Sometimes one needs to make things simpler and more secure; sometimes complexity is just part of the picture.

With web services, the server and back-end databases in the cloud do the heavy lifting that used to be done by desktop applications. Sometimes this works well; other times it exposes data we would prefer be kept private. Given the power of current computers, we barely touch what they can do. How can we unleash that power?

Thought: "How much control do I have over my own laptop or desktop, and how much control do I have over a cloud application?"

TLS and web services are not always the best approach, and users are painfully learning what this ease of use and convenience actually costs in terms of security when it is compromised.

The protocols underlying TLS have to be good, and the implementations have to be good, or the system will be manipulated by various actors.

TLS security is not a matter of trust but of applied math, which (given some interest) can be tracked down and proven or disproven. Given little interest, the user can only hope that the right thing was done, since there may be no recourse.
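One concrete example of proof replacing trust is a certificate fingerprint: it is just a SHA-256 hash of the certificate's DER bytes, which anyone can recompute and compare against a pinned value. (Toy bytes stand in for a real DER-encoded certificate here.)

```python
import hashlib
import hmac

# Hypothetical stand-in for a real DER-encoded certificate.
der_bytes = b"hypothetical DER-encoded certificate"
pinned = hashlib.sha256(der_bytes).hexdigest()

def fingerprint_ok(der, expected_hex):
    """Recompute the SHA-256 fingerprint and compare against the pinned value."""
    actual = hashlib.sha256(der).hexdigest()
    # Constant-time comparison avoids leaking a timing side channel.
    return hmac.compare_digest(actual, expected_hex)

print(fingerprint_ok(der_bytes, pinned))                # True
print(fingerprint_ok(b"tampered certificate", pinned))  # False
```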

The trust results from an independent audit of the product (or one's own evaluation), drilled down to whatever level one desires; the further one goes, the more evidence there is that the standards already in place were applied to correct specifications with no compromises.

In other words, the architect and the builders did the right thing and did not cut corners. All of that is summed up and, in the case of software, is a moving target of patches.


The high-risk strategy behind the rationale of "Trust"

I listened to the MIT meeting convened by PCAST as the experts attempted to balance the requirements of security and privacy, along with benefits. A great takeaway was presented by Microsoft Research, which is actively looking at developing a mathematical model of privacy, specifically for protected information held in databases.

Note this approach does not rely in any way on "trust", but on the ability to prove mathematically whether privacy is protected in the release of data in an aggregate format, which typically is already de-identified.

However HHS, and of course DirectTrust and others, promote the idea of trust. This is an approach valid for business relationships in which contracts exist between entities to flesh out the details. This exacts a cost to define those details. If it were the Mafia, you would have to kill someone to establish trust and then enforce the oath of silence, or Omertà.

In this case the "just trust me" approach is not so different from the Mafia, except it is entirely legal. Where it differs from the Mafia is that there are multiple legal policies which typically act to remove your rights in a "contract of adhesion", as opposed to going to the local consigliere, boss, or underboss, who in fact could resolve disputes in the early days when the legal system was itself not responsive.

The entire point is that from a consumer point of view, one should never get bad service as a matter of culture. The downside is of course the illegality of the supply chain and the control of the boss.

So what does a trust relationship look like between HIPAA business associates? Effectively, HIPAA assigns responsibility not just to Covered Entities but to BAs that have access to patient data.

But when relationships are between states, or states and the Federal Government, where does the end user gain any traction into the process?

From an organizational point of view, the Privacy Officer is the right place to start. They can respond to data breaches. They want to make sure that data is secure. They are willing to explain things to an end user. This is a good role.

Now, what happens in a different scenario? What happens if you factor “trust” out of the model, and only go with what is provable?

Start with the NIST Levels of Assurance. How much additional overhead needs to exist?

Here we begin to bump up against the "epsilon of infinity" that leads to cartel formation, as described by Microsoft Research; i.e., "trust" translates into business relationships, which then become a cartel.

And here we get to the NSA proposition made to all the commercial players: that security would be weakened to allow for back doors. In practice, this was perhaps valid from the risk perspective of a President looking to avoid the next version of the plague, but not from the perspective of the patient.

But let's hone that down a little further. It's not access to patient data, which is specifically allowed for law enforcement and national security purposes at the provider; the patient has no rights to protect that information. It's something else entirely: "unattributed" access, not official access at the provider, by anyone, including state-sponsored actors, while the data is in motion over the Internet.

And data sent over the Internet is inherently untrusted (except by those who don't care, or are singularly uninformed).

As a result there are mechanisms to build secure networks and data flows using existing technology, like TLS with X.509v3 certificates and web browsers, or S/MIME email messages, which can encrypt medical messages using the Direct Project applicability statement.

This creates a conflict with the Internet, which securely refuses to trust "trust", a stance based on one of the original and most thoughtful papers ever written on security, one that explores the depth of the problem. The name of the classic paper is "Reflections on Trusting Trust".


Jury Duty

I sent in my Jury Duty form, which noted at the top that I was randomly selected. Turns out that is a requirement written into the law.

But it did not seem random to me and I noted that on the form.

I went in to talk to the administrative staff and asked about this.  Is this random?  Two things stand out.

  1. It has to be random, that’s the law
  2. It is done on a computer

OK, that seems fairly obvious: if it is done on a computer and it's not random, then, ipso facto, it does not comport with the law. But is it good enough?


Is the administrative branch of the court that does this not aware of what "random" means on a computer?

It has a precise definition, and it is defined by NIST as part of the standards for cryptography.

So I asked to speak to the IT guy. Because what does it mean, in fact, if it is not random? Not pseudo-random, but random. Did their software vendor use the right RNG? Does the software have a backdoor in it to favor selection of certain jurors? And whether it does or does not, did they ever test it to make sure, or did they just buy it off the shelf and start using it? Is there an audit report?
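For contrast, here is what a defensible selection could look like in Python (pool size and names are made up): draw from the OS cryptographic RNG rather than a seeded, and therefore replayable, PRNG.

```python
import secrets

# Jury selection sketch: secrets.SystemRandom is backed by os.urandom,
# so there is no seed for a vendor (or anyone else) to replay or predict.

def select_jurors(pool, count):
    rng = secrets.SystemRandom()
    return rng.sample(pool, count)  # sampling without replacement

# Hypothetical juror pool for illustration.
pool = ["citizen-%d" % i for i in range(1000)]
panel = select_jurors(pool, 12)

print(len(panel))                     # 12
print(len(set(panel)) == len(panel))  # True -- no one picked twice
```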

The fact that NIST recalled a random number generator, the deterministic random number generator that was the default in B-SAFE and has generally been thought to be back-doored as well as broken since 2006, gives one pause that the selection process might not be random.

They are going to check on it and call me back.