ECC’s Importance
Elliptic Curve Cryptography (ECC) looks like a good alternative to, and eventual replacement for, the more common RSA-dominated approach, especially on devices with “weak” CPUs of the kind you usually find in the IoT world.
ECC used to be predominantly proprietary and patented by companies like Certicom, but now there are public standards and implementations that can be used without paying expensive license fees.
The standards have been created by organizations such as the Crypto Forum Research Group (CFRG), the IETF’s TLS Working Group (TLS WG) and, finally, the governmental standards body NIST, which is often referenced by other compliance standards such as PCI. This means that if you want to be PCI compliant and use ECC, you ought to follow the NIST recommendations.
“What’s the problem then?” you might ask. “Just go ahead and use the standards and their implementations in popular open source tools such as OpenSSL.”
The problem is that I can’t trust any of the standards organizations listed above. Why I can’t trust them, and what you can do to keep your ECC solution secure in the near future, is described below.
The abundance of material related to the topic, and its complexity, don’t allow this post to be short, so please be patient.
NIST Curves Drama
Actors
Since I’ve used the word “Drama” in the title, I should probably describe the actors. Not all of them are new; for example, you can easily find the popular crypto protocol participants Alice and Bob, along with the eavesdropper Eve, in Bruce Schneier’s “Applied Cryptography”, published in 1996.
Many things have changed since then. Alice and Bob are not just “protocol participants” anymore; they have become dangerous cyber criminals plotting something evil, while Eve has become a heroic character trying to save the world from these dangerous criminals and their vicious plots.
Eve couldn’t do much without another heroic character, Jerry, whose job is to make Eve even more successful by embedding backdoors into crypto protocols and standards. I think Eve would fail more often than not without Jerry’s help if the criminals Alice and Bob used the right crypto algorithms and protected their private keys well at all times.
As you probably know already, I didn’t invent Jerry and the new roles myself; they’ve been created by people whose daily job is cryptography and who publish their work in serious scientific journals that I can only partially understand. Nevertheless, I think my education in applied math and my practical experience are sufficient to understand where they are going with all of that.
From my side I would also add a few other actors popular nowadays:
- Good Samaritan John, who often states that he has nothing to hide and doesn’t care much about Eve, Jerry and the like who never stop spying on him.
- Seasoned Security Consultant Jim, who often comes to executive meetings to tell business leaders that “we can’t really protect your systems against state-sponsored attacks, so let’s not even try and instead save the money for something else”.
- Influential Crypto Forum Chair Lars, whose job is to stay the course, pretend that nothing has happened and save the face of an organization whose practically mandatory advice is used by the IETF, its TLS Working Group, NIST and others.
- The Insignificant Choir of CFRG, TLS WG and IETF members, cryptographers and entrenched security engineers who are desperately trying to understand how to deal with all of that in real life (hereafter the Choir).
Drama Unfolds
Dual_EC_DRBG
Oddly enough, the drama would never have happened if one of Jerry’s notorious colleagues (we’ll call him Snow White) had not decided to disclose very interesting and intriguing details about a onetime NIST standard and RSA BSafe default called Dual_EC_DRBG. The nature of the hack is explained in simple terms here. There are two points, P1 and P2, on a curve. The first is used to calculate a random value, which is the x coordinate of the product n*P1, where n can be considered the internal state of the algorithm. P2 is used to change the internal state by calculating the product n*P2 and using its x coordinate as the new state.
As Dan Shumow and Niels Ferguson demonstrated in 2007, and as Bruce Schneier pointed out, if there is a dependency between P1 and P2, e.g. if P2 = s*P1, then calculating the internal state becomes trivial: it is the x coordinate of s*n*P1, where both n*P1 (recoverable from the output) and s are known to the attacker.
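To spell the attack out in the same notation (a sketch, under the assumption that P2 was generated as s*P1 and s was kept by its creator):

r      = x(n*P1)                      <- the “random” output, visible to everyone
R      = n*P1                         <- reconstructed from r (one of two candidate points)
s*R    = s*(n*P1) = n*(s*P1) = n*P2   <- computable only with the backdoor key s
x(s*R) = x(n*P2)                      <- the generator’s next internal state

Once the internal state is known, every subsequent output is predictable. (In the real Dual_EC_DRBG the output is truncated by 16 bits, so a small brute-force search over the missing bits is needed, but that doesn’t change the picture.)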
Since the method of selecting P2 has never been disclosed, suspicions arose that a backdoor key (the ‘s’ above) existed and was known to the algorithm’s creator from day one. Snow White confirmed the same. The loop was closed, and NIST had nothing better to do than remove Dual_EC_DRBG from the standard.
Snow White’s other revelation was that RSA got $10M from Jerry’s employer to make Dual_EC_DRBG the default in their crypto library BSafe, which was successfully licensed to many commercial companies for very expensive fees. So RSA made money both from compromising their library with a backdoor and from telling their customers how secure their solution was. What a wonderful business model! I firmly believe now that RSA had not only the best cryptographers but very inventive and industrious business leaders as well.
The funny thing about BSafe is that, thanks to Seasoned Security Consultant Jim (see above), changing defaults in existing implementations is practically impossible, because nobody wants to spend money to protect their systems against state-sponsored attacks. Remember, “it’s impossible” according to Jim’s assessment.
NIST P-256
I think it’s a good time to talk about NIST P-256 now. There are reasons why this particular curve gets more attention than any other NIST curve:
- A good compromise between speed and security (256-bit prime looks about right).
- It’s a default in the latest production version of OpenSSL.
- EC arithmetic is optimized in the OpenSSL implementation (see the enable-ec_nistp_64_gcc_128 flag in the OpenSSL config), which increases the speed of algorithms such as ECDHE almost twofold; see the build sketch right after this list.
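For reference, the optimization is a build-time option; a minimal sketch of enabling it (paths are illustrative, and the flag only takes effect on 64-bit platforms whose compiler supports __uint128_t):

# configure and build OpenSSL with the optimized 64-bit field arithmetic
# for the NIST P-224/P-256/P-521 curves
./config enable-ec_nistp_64_gcc_128
make && make test

# compare ECDHE throughput against a build configured without the flag
apps/openssl speed ecdhp256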
Looks good, right? Wrong, if you consider the fact that the method by which the EC parameters were selected is not quite clear. To be more exact, it is not clear how the seed used to generate the curve parameters was chosen.
This means that the statement about P-256 being “verifiably random” is simply not true; a note by D. J. Bernstein in a TLS WG discussion confirms that and also hints at the involvement of Jerry’s employer in this case.
The common suspicion here is that Jerry tried many seeds until he found a weak EC curve that can be exploited in the same way as in the Dual_EC_DRBG case. Since there is an opinion that there might be a “spectral weakness” in ECC (check also this), that suspicion seems quite plausible. A “spectral weakness” means that there is a uniform (?) distribution of weak EC curves that could eventually be found through enumeration in a reasonable time interval.
Drama Perpetrators
Good Samaritan John, Seasoned Security Consultant Jim and Influential Crypto Forum Chair Lars make everything even worse by pushing everyone in the direction of doing nothing. Let me explain why their rhetoric is dangerous and doesn’t make much sense to me.
John’s statement, “I have nothing to hide from my government”, is probably OK for personal emails and social media, but it becomes less acceptable, if acceptable at all, when John is responsible for protecting a global company’s secrets. Any international company wants to keep competitors at bay, and any government tries to help its major businesses as much as possible. The conflict of interest is obvious here, and a wise CEO would definitely try to find a replacement for John as soon as possible.
Jim just wants to simplify his own life by ignoring threats coming from a government. The problem with this approach is that if one government can break a system, other governments might find a way to do the same, as might well-heeled, organized cyber criminals who could be connected to a government. Furthermore, either a government or cyber criminals can create tools and make them available to script kiddies, at which point anyone could attack the system.
Finally, there is my favorite actor, Lars, who knows very well what’s going on in his organization, who periodically listens to the Choir’s rants, but who still pretends that nothing has happened and doesn’t want to do anything to rebuild trust in his organization, even in the eyes of his own co-chairs. I won’t write too much about it; I just want to refer to Alyssa Rowan’s message to Jerry’s colleague Kevin and to Lars’ response to the rant.
I won’t draw any conclusion from this story, because I could not formulate it better than co-chair David McGrew did:
“The Research Group needs to have chairs that it trusts, and who are trusted by the broader IETF and Internet communities that they work with”.
Trust is the key word here, in my view.
Drama’s La Finale
If you’ve followed my line of thought to this point, you’ve probably come to the same conclusion as I have: there is no one who will protect your curves from Jerry:
- NIST, CFRG – dysfunctional and not trusted.
- TLS WG – doesn’t have enough expertise, relies on CFRG when it comes to cryptography.
- Choir – not an organization, can’t really create a standard.
- John, Jim, Lars – simply do not care, or are corrupt (remember the $10M?), or both.
As you can see, nobody will protect your curves except you!
What you can do
Special and Random Curves
To simplify our considerations, we can divide all curves into two groups: random curves and those with specially selected domain parameters. For example, NIST P-256 and D. J. Bernstein’s Curve25519 and Curve41417 are “special”, while the Brainpool curves are “random”.
An important requirement for the Brainpool curves is that the method of parameter selection be clearly defined, including the seeds used to derive the parameters. They also try to avoid the following threat that comes with the “special” curves:
“The primes selected for the base fields have a very special form facilitating efficient implementation. This does not only contradict the approach of pseudo-random parameters, but also increases the risk of implementations violating one of the numerous patents for fast modular arithmetic with special primes”
This requirement creates a certain “nothing up my sleeve” assurance, which is very important considering the lack of trust in manually crafted curves, especially when Jerry and his colleagues are involved.
Even though optimized EC arithmetic is not available for the random curves, the “nothing up my sleeve” factor seems more important, and it very much determines my personal choice.
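If you want to see what “clearly defined” looks like in practice, you can dump the brainpoolP256r1 domain parameters and compare them with the values published in RFC 5639. This assumes an OpenSSL build that already knows the curve, such as 1.0.2 or the patched 1.0.1z described in Appendix A:

openssl ecparam -name brainpoolP256r1 -param_enc explicit -text -noout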
Implementation
Since in most cases you’re not going to build an isolated cryptosystem, it’s important to integrate your crypto libraries with existing system software such as Apache, Tomcat, JBoss, RoR, HAProxy and whatever other web and application servers you might use.
All of them use OpenSSL for cryptography, which is why using Brainpool curves requires an OpenSSL version that supports them. The curves are implemented in OpenSSL 1.0.2, but it’s still a beta that you probably don’t want to use in production.
My solution was to backport the Brainpool curves to the stable 1.0.1 branch. It was not trivial, but doable. The patch against 1.0.1j is provided in Appendix A.
Once the new version is built, you can statically link it with the web server of your choice.
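As a rough sketch of what that looks like with HAProxy (the install prefix, version numbers and the TARGET value are illustrative; adjust them for your platform):

# build the patched OpenSSL into a private prefix, static libraries only
cd openssl-1.0.1z
./config --prefix=/opt/openssl-bp no-shared
make && make install_sw

# point HAProxy’s build at the private OpenSSL
cd ../haproxy-1.5
make TARGET=linux2628 USE_OPENSSL=1 \
     SSL_INC=/opt/openssl-bp/include SSL_LIB=/opt/openssl-bp/lib ADDLIB=-ldl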
You should also keep in mind that, when it comes to the SSL/TLS implementation, it is possible to use different curves for the digital signature (ECDSA) and the ephemeral key exchange (ECDHE).
The curve used for ECDSA is determined by your server’s private key, while the ECDHE curve should be provided as a parameter in the server configuration file, e.g. the ‘ecdhe’ parameter of the ‘bind’ directive in the HAProxy config. Please note that NIST P-256 is the default there, just like it is in OpenSSL. I’m just saying … 🙂
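A hypothetical HAProxy frontend illustrating both knobs (the file names are examples; the ‘ecdhe’ keyword is accepted by HAProxy 1.5+, and brainpoolP256r1 is only recognized if the linked OpenSSL knows the curve):

frontend https-in
    bind :443 ssl crt /etc/haproxy/bp_cert.pem ecdhe brainpoolP256r1

The certificate/key pair in the crt file determines the ECDSA curve, while the ecdhe keyword overrides the prime256v1 default for the key exchange.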
To configure Apache you can use the SSLCertificateFile option, pointing it to a certificate file that contains the EC parameters generated by the ‘openssl ecparam’ command. If you want to use the brainpoolP256r1 curve, you’ll need to run:
openssl ecparam -name brainpoolP256r1 -out bp_params.pem
and then copy/paste the output to your certificate file.
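Equivalently, instead of copy/pasting by hand, you can concatenate the files (the file names are illustrative, and the exact placement of the EC PARAMETERS block within the file may matter depending on your Apache/mod_ssl version):

cat bp_params.pem your_cert.pem > cert_with_params.pem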
Certificates
To use a Brainpool curve for signatures (ECDSA), you need to generate a private key and then use it either to create a self-signed certificate or to create a CSR file, if you want your certificate to be signed by a known certificate authority (CA).
The problem with the latter case is that big CAs such as Symantec/VeriSign might not support Brainpool curves (even NIST curve support is relatively new for them). I haven’t checked smaller CAs yet, so there is room for research.
Generating a private key for a curve is simple; you just need to use the same ‘openssl ecparam’ command:
openssl ecparam -name brainpoolP256r1 -genkey -out bp_key.pem
After this is done, you can generate a self-signed certificate or a CSR file just like you did in the RSA case, e.g. to create a certificate run:
openssl req -new -x509 -key bp_key.pem -out cert.pem
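And for the CA-signed case, the CSR is generated from the same key (you’ll be prompted for the subject fields; the file names are illustrative):

openssl req -new -key bp_key.pem -out bp_req.csr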
Brainpool security considerations
While going through DJB’s SafeCurves pages, I noticed one problem, called “Twist Security”, which didn’t look good for my choice (brainpoolP256r1):
Curve | Cost for twist rho above 2^100? | Cost for twist rho
--- | --- | ---
brainpoolP256t1 | 2^44.0 | 2^44.0
Since the security of brainpoolP256r1 is equivalent to that of its twisted counterpart brainpoolP256t1, I was a bit concerned and went through the different attacks that DJB describes in the “Twist Security” section. The first is the small-subgroup attack, which is simply not applicable to Brainpool curves because they are required to have a cofactor equal to 1.
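You can confirm the cofactor yourself with an OpenSSL build that knows the curve (the exact output formatting varies slightly between versions):

openssl ecparam -name brainpoolP256r1 -param_enc explicit -text -noout | grep -i cofactor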
The other two attacks are not relevant either when it comes to the OpenSSL implementation, because the latter is compliant with the X9 standards, which require point-on-curve validation.
Just to be sure that this is true for OpenSSL, I checked the code and found EC_POINT_is_on_curve (see the ec_lib.c module), which is called each time a new EC point arrives.
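If you want to check the call sites yourself, a quick grep over the 1.0.1 source tree is enough (run from the top of the OpenSSL source directory):

grep -rn "EC_POINT_is_on_curve" crypto/ec/ | head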
I’ve even had a good conversation about this with the TLS community and haven’t seen any objections to what I’ve found so far.
I haven’t seen any other considerations suggesting a serious weakness in this curve.
Conclusion
- Nobody will protect your curves except you.
- Standard bodies are either not trusted or not efficient, or both.
- Defaults are dangerous and can contain a backdoor.
- Using random curves creates a certain level of assurance that there is “nothing up Jerry’s sleeve” and that a “first person attack” is not possible.
Comments deleted by Google
The initial post was published on blogger.com, but since Google kept deleting comments, I moved the blog to this site. Here are the comments, restored from screenshots, that blogger.com deleted.
A very interesting fact disclosed by a member of the cryptography research community, Jerome Circonflexe, is that the “situation with DUAL_EC_DRBG was totally obvious to researchers at the time when it was published”. That raises even more questions. If it was known from day one (2007), why did nobody in that community, including the very influential CFRG, do anything to remove the scandalous standard from NIST? Everybody was silent until Snowden came out with his stunning revelations. Is it a CFRG conspiracy (with Jerry’s employer), intimidation, or indifference to the public interest and to all the people who rely on the standard? I think that, in any case, they owe all of us explanations and an apology.
Appendix A – Brainpool Backport Patch
I’ve created a patch that backports the Brainpool curves from 1.0.2 (beta) to 1.0.1j (stable). Since I didn’t change any algorithm, but simply added the brainpoolP256r1 parameters, the probability that I broke anything is negligibly small.
I’ve tested the patched version, which I call 1.0.1z, both standalone and against 1.0.2, using the latter as a server. I’ve even statically compiled it with HAProxy and was able to terminate SSL using the curve for both the ECDSA and ECDHE algorithms. Everything worked; no surprises have been found so far.
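A sketch of the kind of interoperability test I mean, with the 1.0.2 build acting as the server and the patched 1.0.1z as the client (the paths, port and cipher string are illustrative):

# terminal 1: 1.0.2 (beta) as the server
path-to-1.0.2-openssl s_server -key bp_key.pem -cert cert.pem -named_curve brainpoolP256r1 -www

# terminal 2: patched 1.0.1z as the client
path-to-custom-openssl s_client -connect localhost:4433 -cipher ECDHE-ECDSA-AES256-SHA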
After you apply the patch and build your 1.0.1z version with brainpoolP256r1 in it, you can verify that everything is correct by running:
$ path-to-custom-openssl version
OpenSSL 1.0.1z 15 Oct 2014
$ path-to-custom-openssl ecparam -list_curves | grep brain
brainpoolP256r1: RFC 5639 curve over a 256 bit prime field
Appendix B – Optimized NIST P-256 speed
I’ve tested NIST P-256 speed with the optimized EC arithmetic (enable-ec_nistp_64_gcc_128) and compared it with that of the Brainpool curve. The optimized NIST curve was about 2x faster for ECDHE and for ECDSA signature verification, but about the same for ECDSA signing. The absolute benefit was around 0.1–0.2 milliseconds per operation.
I don’t think that it’s an important differentiator. I’m also suspicious of the optimized EC arithmetic, because if the arithmetic can be optimized by an implementer, brute forcing can probably be optimized by an attacker as well, which could decrease the cost of an attack. That’s not something that has been proven, just “common sense” reasoning.
Finally, I think peace of mind and confidence that a “first person attack” is not possible are a huge benefit compared to 0.1–0.2 ms per operation. The results are below:
$ openssl version
OpenSSL 1.0.1z 15 Oct 2014
The NIST curve is 2x faster for ECDH
$ openssl speed ecdhp256 ecdhbp256
Doing 256 bit ecdh(nistp256)'s for 10s: 71830 256-bit ECDH ops in 10.00s
Doing 256 bit ecdh(brainpoolP256r1)'s for 10s: 30885 256-bit ECDH ops in 10.00s
OpenSSL 1.0.1z 15 Oct 2014
built on: Sat Nov 15 13:46:22 PST 2014
options:bn(64,64) rc4(ptr,char) des(idx,cisc,16,int) aes(partial) idea(int) blowfish(idx)
compiler: cc -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -arch x86_64 -O3 -DL_ENDIAN -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
op op/s
256 bit ecdh (nistp256) 0.0001s 7183.0
256 bit ecdh (brainpoolP256r1) 0.0003s 3088.5
The NIST curve is about the same speed for signing
The NIST curve is 2x faster for signature verification
$ openssl speed ecdsap256 ecdsabp256
Doing 256 bit sign ecdsa's for 10s: 108757 256 bit ECDSA signs in 10.00s
Doing 256 bit verify ecdsa(nistp256)'s for 10s: 50898 256 bit ECDSA verify in 10.00s
Doing 256 bit sign ecdsa's for 10s: 91873 256 bit ECDSA signs in 10.00s
Doing 256 bit verify ecdsa(brainpoolP256r1)'s for 10s: 25161 256 bit ECDSA verify in 10.00s
OpenSSL 1.0.1z 15 Oct 2014
built on: Sat Nov 15 13:46:22 PST 2014
options:bn(64,64) rc4(ptr,char) des(idx,cisc,16,int) aes(partial) idea(int) blowfish(idx)
compiler: cc -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -arch x86_64 -O3 -DL_ENDIAN -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
sign verify sign/s verify/s
256 bit ecdsa (nistp256) 0.0001s 0.0002s 10875.7 5089.8
256 bit ecdsa (brainpoolP256r1) 0.0001s 0.0004s 9187.3 2516.1
It would be nice to have a 1024-bit ECC curve in the standards, like this one:
MODUL: deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00000553
coeff_A: b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce00b0b00ad02e500a11ce000000000000000000
coeff_B: de5001500dead0000ae5001500900d0000f1ea5001500be540000f1ea5001500c0010000f1ea5ff15ff25a5afe00000000000000000000000000000006c06c057a813fe4fc86eea79422c2f42581b924c73626b35043223f6c24edb1b00000000000000000000000000000000000000000000000000000000000000000000163
q_MODUL: deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00deadbeef001500bad00f00d00dea795ed3665842e7c5bbe6164c8694f28bfecbbdccaf14eb1d029be82868f7167ee3d3492126a4eee72ff26a1a9461187a59b40cc8bcfbe87af0405c66f562d68d
X0: 0de5ff15ffdeadffffae5ff15ff900dfffff1ea5ff15ffbe54fffff1ea5ff15ffc001fffff1ea5ff15ff25a5afe0000000de5ff15ffdeadffffae5ff15ff900dfffff1ea5ff15ffbe54fffff1ea5ff15ffc001fffff1ea5ff15ff25a5afe0000000deadbeef15badf00d00400a11ce00a2d00b0b000000000000000000000001
Y0: 26f3cec2a668d0d3262431b83d8324cb8314c306a7e619c40cd25964492c5dbcf2a179da1d75c1758a158939c351c4cfcab769575fc6c4c31e9505ce161d800c7b49255813c3190a5595b5f6a7514a60f8efc8b9eef49c4cb1c536cbba81e3bf81816790d45b71b14c886c00915e2c9180a6a249eec84097c3cd8f85eb0d05fd
I’ve seen this one here: https://www.fh-wedel.de/~an/crypto/accessories/domains_anders.html
Both of the major, popular ECC standards, NIST and Brainpool, have a “verifiably random” requirement for the curve parameters.
They do state it, but they don’t enforce it; e.g. nobody knows how the coefficient ‘b’ in NIST P-256 was generated, while there are references in public sources saying that it was introduced by Jerry Solinas, an NSA employee. The Brainpool curves are very transparent in this regard.
BTW, I’ve been speaking at many security conferences lately, including RSAC, but all of my submissions related to the issues described in this article have been rejected so far.
Is there a feasible method by which NIST ECC curves over prime fields could be intentionally rigged?
The simple answer is: none has been discovered; even for DUAL_EC a backdoor key has not been publicly uncovered. Since security always operates in terms of threats, the real question here should be: how serious is the threat that a NIST ECC curve could be compromised?
To answer this last question, you can check a discussion here: https://crypto.stackexchange.com/questions/30144/do-weak-elliptic-curves-exist
The community was not silent regarding DUAL_EC in 2007, but the government just does not care. And the media did not care about tech stories back then.
The algorithm was insane anyway; nobody would ever use it, because basing a random number generator on elliptic curves is ridiculously slow compared to other designs (e.g. those based on hash functions). That’s what makes it extra suspicious that RSA (the company) used it in their devices.
Here are some press articles from back then:
https://www.wired.com/2007/11/securitymatters-1115/
https://eprint.iacr.org/2006/190
https://eprint.iacr.org/2007/048
Thank you. I’ll take a look
https://www.wired.com/2007/11/securitymatters-1115/:
“So the agency’s participation in the NIST (the U.S. Commerce Department’s National Institute of Standards and Technology) standard is not sinister in itself. It’s only when you look under the hood at the NSA’s contribution that questions arise”
So, now that we’ve all looked under the hood and noticed an apparent, intentional and successful attempt to compromise a public cryptographic standard, and after subsequent scandals involving other agency employees working on other cryptographic standards (e.g. see https://www.ietf.org/mail-archive/web/cfrg/current/msg03646.html), do we still believe that the agency’s participation “is not sinister in itself”?
Why? How many successful attempts are required to change this opinion?
This one is interesting: https://eprint.iacr.org/2006/190
Berry and Andrey proved that DUAL_EC was not secure and knew that it was going to NIST, so the community knew about it as well. However, when I clicked the “Show Discussion” button, I didn’t find any discussion. No condemnation, no attempts to organize the community to prevent DUAL_EC’s adoption by NIST.
Look at what’s going on with Facebook’s CEO now: he’s literally being grilled by legislators, so they do have an interest in these matters. At the same time, Mark looks like an innocent child compared to the agency in this case. Why didn’t the community try to get the senators’ attention back in 2006?
The latest community efforts in the direction of ECC security are important to mention. The project https://safecurves.cr.yp.to/ is fairly up to date and brings some good insights into community-driven research on safe curves. It is very important to keep looking for new developments; as you pointed out very well, it is up to us and nobody else to secure our systems. No standards board will be of help with this; they are just too slow and too biased, like any industry standard. I particularly like Ed448-Goldilocks, and there are a couple of implementations out there already: https://github.com/otrv4/ed448 https://github.com/otrv4/pidgin-otrng
Thanks for the good comment. I’ll answer in more detail later: there is a caveat with “new developments” that have not yet been adopted by the standards or by common OSS frameworks, especially if you work in a heavily regulated industry.