> It is our intent to transition all clients and subscribers to ACMEv2, though we have not set an end-of-life date for our ACMEv1 API yet.
Please don't do this. It will break millions of sites needlessly. Most installations of Let's Encrypt plugins aren't going to auto-update to v2. A lot of us are also using custom v1 code, for various reasons, that may not be easy to change.
The preferable end-of-life date for ACMEv1 (barring any existential security issues) should be never. Otherwise you will be executing a Geocities-sized web meltdown every time you phase out a version of the API.
> Technically, Mail-in-a-Box turns a fresh cloud computer into a working mail server. But you don’t need to be a technology expert to set it up.
I'm just choosing MIAB as an example here. This applies to anything that LE now enables. People don't know they're using LE, much like IoT users don't know they're using HTTP/1.1. It's part of the plumbing. What's an ACME client? What's LE? What's v1?
This is probably happening for IoT devices across the globe just the same. A two-year expiration date is an order of magnitude too low for plumbing. Imagine if we suddenly decided to phase out HTTP/1.1 within two years.
We have to recognise that we are shoving HTTPS down people's throats. Pretty soon, HTTP will get big f-off warnings. OK: fair enough. However, if we're doing that, we should also provide a viable alternative, with the same reliability. Otherwise, HTTPS is a massive step backwards for the decentralised web. LE is that alternative, but not if we start breaking backwards compatibility every 2 years.
Rather, "after this point, no new domains may setup via v1", so any existing certificates and installations are grandfathered. Two years is sufficient for MIAB to update their software and distribute to users.
>LE is that alternative, but not if we start breaking backwards compatibility every 2 years.
Not what I'm saying either. They have a v2 now, we don't know if they need a v3. And they want to keep v1 running for a while.
But there will be a point where v1 will need to be switched off, similar to how modern browsers have switched off SSLv3 despite a lot of people still having servers running with it.
LE will, at some point, have to decide between keeping v1 running and moving away from old protocols to be able to evolve. And that cannot be pushed back indefinitely.
And your old client still works on the systems it's deployed on (by definition) so you could just stop development on that.
It's bad enough that some people comment without reading -- I apparently commented without paying attention.
Obviously a decent length of grace period would be the correct way of deprecating the older version, to give people time to update their infrastructure accordingly. I would suggest at least a full year (giving at least four renewal cycles to test changes in a QA environment before being forced to update production), probably more. Perhaps, if possible, a year for new certificates and two years for renewals?
(I'm on the Certbot team.)
Also, some people dislike this feature quite a bit, and there are about 100 different clients.
There's enough entrenched resistance to HTTPS without giving people more ammunition regarding the actual amount of work involved. Unless there's a security reason to eliminate the v1 endpoint, please don't.
This was a major breaking change, without any advance notice, but nothing melted down.
I'm sure the other validation endpoints are used a lot more, but the effect shouldn't be any different, especially if they give a deprecation notice of a year or two.
DNS challenges exist and are useful but have more extensive infrastructure requirements. Nothing beats the ease of use of "just put the box up and it'll retrieve its cert as needed".
As would be the preferable end-of-life date for SSLv3 and HTTP.
(Related, a big thanks to Google for un-trusting that whole big Symantec security chain. Yeah, I realize they weren't competent, but I also realize that it had no practical effect on my site's security, as I don't have nation states or motivated hackers in my threat model.)
Security measures should be weighed like everything else - as cost/benefit. In many cases the cost of the security is not worth it.
Edit: I'd just like to point out the irony in some of the replies to this comment. I'm complaining about zealotry, and the vast majority of nasty replies I've received to this comment are using language that only zealots and ideologues would use. My god, you'd think I'm killing puppies based on some of these responses. Nope, just advocating for using HTTPS where it makes sense, and not having it forced down your throat.
Those devs are gonna be really surprised when they find out that unencrypted connections are routinely tampered with.
> they either don't know or don't care about all the effort and pain they're creating
You have not been paying attention to the hundreds of tools available to make HTTPS painless.
> until someone at Big G decided they weren't.
And Mozilla. And countless research papers. And real-world attacks that are reported over and over again. The fact is that the global Web has become hostile, regardless of your prejudice against Google's Web security teams.
> In many cases the cost of the security is not worth it.
The problem is that it's not YOUR security, it's other people's. If websites don't implement HTTPS, it's the users of the Web who pay the price. It's their privacy being violated. And the website becomes easy to impersonate and manipulate, increasing the liability of having a website. HTTP is bad news all around.
I hardly ever see people talk about this use case and how to solve it with HTTPS everywhere, AND it's super widely used: e.g. Debian repositories.
deb https://deb.debian.org/debian stable main
And because HTTPS is nothing more than baseline security, it's possible to automate it with things like Let's Encrypt and not add any more checking beyond current control of DNS or HTTP traffic to the domain.
(Another confusion along these lines is assuming HTTPS is useful as an assertion that a site isn't malware. It asserts no such thing, only that the site is who it claims to be and network attackers are not present. If I am the person who registered paypal-online-secure-totes-legit.com, I should be able to get a cert for it, because HTTPS attests to nothing else.)
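On the automation point above: with a typical setup it really is a one-liner these days. A sketch, assuming certbot is installed and nginx already serves the name (most distro packages also set up a renewal timer for you):

  certbot --nginx -d example.com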
Don't get me wrong: GPG signatures with a pinned public key are a lot better than trusting the TLS of a random mirror.
But isn't it nice to have two layers? The two key systems are independent and orthogonal; that seems like a solid win.
Need I remind you of Heartbleed (OpenSSL), or the very Debian-specific key-generation bug (the 2008 OpenSSL PRNG fiasco) years ago?
There will always be bugs, we can only hope they aren't exposed concurrently :)
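(For reference, the GPG layer on Debian boils down to roughly this under the hood; a sketch using the stock archive keyring, verifying the detached signature on a repo's Release file:)

  gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg Release.gpg Release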
The majority of HTTPS traffic is sniffable and largely non-confidential, unless you pad every file and web request to several gigabytes in size.
Does your website use gzip? Good, now padding won't help you either, unless it is way bigger than the original content. Oh, and make sure that you defend against timing attacks as well! Passive sniffers totally won't identify a specific webpage based on its generation time, will they?!
As for authenticity… Surely you are going to use certificate pinning (which has already been removed from Google Chrome for political reasons). And personally sue the certificate issuer when Certificate Transparency logs reveal that one of Let's Encrypt's employees sold a bunch of private keys to third parties. Of course, that won't protect authenticity, but at least you will have avenged it, right?
SSL-protected HTTP is just barely ahead of unencrypted HTTP in terms of transport-level security. But it is being sold as a silver bullet, and people like you are the ones to blame.
I bet the SNI issues will eventually be fixed too.
And yes, with momentum behind Certificate Transparency, it could definitely hold CAs' feet to the fire :)
TLS is no silver bullet, but it's a good base layer to always add.
Consider you're a cloud provider running customer images. If everyone downloaded the same package via HTTPS over and over again, the incurred network utilization would be massive (for both you and the Debian repository in general) compared to everyone using HTTP and verifying via GPG, all served from the transparent Squid cache you set up on the local network.
It would probably be better to use a distributed system design for this: BitTorrent, or who knows, maybe IPFS.
If you're doing this, then you've made your own HTTP client so you can do whatever you want.
"HTTPS Everywhere" is a web browser thing.
Because the rest of the content is not verified?????? That's the whole point of HTTPS????????
The whole point of having GPG is that you (as the distributor/Debian repo/whatever) have already somehow distributed the public key to your clients (customers/Debian installations/whatever). Having HTTPS is redundant, as it is presumed that the initial key distribution was done securely.
I wonder if anyone will be surprised when they learn how HTTPS and HTTP/2 will be used to push more advertising to users and exfiltrate more user data from them than HTTP would ever allow.
Will these "advances" benefit users more than they benefit the companies serving ads, collecting user data and "overseeing the www" generally? Is there a trade-off?
To users, will protecting traffic from manipulation be viewed as a step forward if as a result they only see an increase in ads and data collection?
Even more, perhaps they will have limited ability to "see" the increase in data collection if they have effectively no control over the encryption process. (e.g., too complex, inability to monitor the data being sent, etc.)
We're talking only about HTTPS. Adding HTTP/2 just muddies the conversation.
Care to give any argument for how adding a TLS layer over the exact same protocol (HTTP/1.1) will be used to do that?
Except most big orgs now employ MitM tools like BlueCoat to sniff SSL connections too.
> You have not been paying attention to the hundreds of tools available to make HTTPS painless.
I have, and they don't. They make it easier, but you know what's truly painless? Hosting an html file over HTTP. What happens when Let's Encrypt is down for an extended period? What happens when someone compromises them?
> And real-world attacks that are reported over and over again.
Care to link to a few?
> The problem is that it's not YOUR security, it's other people's.
Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?
From Friday, in which Turkey takes advantage of HTTP downloads to install spyware on YPG computers: https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlo...
Without TLS, how do YOU know that the user is receiving your static resume? Any MitM can tamper with the connection and replace your content with something malicious. With properly configured TLS that's simply not possible (with the exception you describe in corporate settings, where BlueCoat's cert has to be added to the machine's trust store in order for that sniffing to be possible). Hopefully in the future even that won't be possible.
The content of your site is irrelevant. We do know that your lack of concern for your users' safety is a problem though.
I also wish that managing certs was better, but until then, passing negative externalities to your users is pretty sleazy.
Absolutely yes. Without that layer of security, anyone looking at your resume could either be served something that's not your resume (to your professional detriment) or more likely, the malware-of-the-week. (Also to your professional detriment).
Do you care for the general safety of web users? Secure your shit. If not for them, for your own career.
But how likely is it to actually happen? For the former, someone would need to target both you and specifically the person who you think will view your resume, and that's, let's be honest, completely unlikely for most people. The second case I can see happening more in theory as it's less discriminating, but does it actually happen often enough in real life to the point where it's a real concern?
FWIW, I have HTTPS on all my websites (because, as everyone mentioned already, it's dead simple to add) including personal and internal, but I still question the probability of an attack on any of them actually happening.
Basically, I see it this way:
- You can be MitMed broadly, like the Xfinity case, but the company in question can't really do anything crazy like inject viruses or do something that would cause the user to actually notice because then their ass is going to be on the line when it's exposed that Comcast installed viruses on millions of computers or stole everyone's data.
- Or you can be MitMed specifically, which will cause professional detriment, but would require someone to specifically target you and your users. And I don't see this as that likely for the average Joe.
Really, what I would like to know is: How realistic is it that I, as a site owner, will be adversely affected by the MitM that could theoretically happen to my users on HTTP?
Consider the websites you view every day: most of them are probably HTTPS by now.
It's the wild west, basically. Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?
I've already covered the general case above. Anyone in a position to intercept HTTP communications like that (into every unencrypted connection) is in a position where if they intercept and do enough to materially harm me or my users through their act, then they will likely be discovered and the world will turn against them. They have far more to lose than to gain by doing something actively malicious that can be perceived by the user. So I don't realistically see it happening.
> Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?
I already said I use HTTPS, so your advice isn't really warranted. I also specifically asked how likely it is, so you can't just "regardless" it away. I get that there's a theoretical risk, and I've already addressed it. But as a thought experiment, it is helpful to know how realistic the threat actually is. So far, I haven't really been convinced it actually is anything other than a theoretical attack vector.
Internet providers have been injecting ads into websites for years. Hackers and governments have been doing the same to executables and other forms of unprotected payload.
Hashes, cryptographic signatures, executable signing, Content-Security-Policy, sub-resource integrity: numerous specifications have been created to address the integrity of the web. There is no indication that those specifications failed (and in fact, they remain useful even after widespread adoption of HTTPS).
For the most part, the integrity of modern web communication is already controlled even in the absence of SSL. The only missing piece is somehow verifying the integrity of the initial HTML page.
"Injection" is the process of inserting content into the payload of a transport stream somewhere along its network path other than the origin. To prevent injection, you simply need to verify the contents of the payload are the same as they were at the origin. There are many ways to do this.
One method is a checksum. Simply provide a checksum of the payload in the header of the message. The browser would verify the checksum before rendering the page. However, if you can modify the payload, you could also modify this header.
The next method is to use a cryptographic signature. By signing the checksum, you can use a public key to verify the checksum was created by the origin. However, if the first transfer of the public key is not secure, an attacker can replace it with their own public key, making it impossible to tell if this is the origin's content.
One way to solve this is with PKI. If a client maintains a list of trusted certificate authorities, it can verify signed messages in a way that an attacker cannot circumvent by injection. Now we can verify not only that the payload has not changed, but also who signed it (which key, or certificate).
Note that this does not require a secure transport tunnel. Your payload is in the clear, and thus can be easily cached and proxied by any intermediary, but they can not change your data. So why don't we do this?
Simple: the people who have the most influence over these technologies do not want plaintext data on the network, even if its authenticity and integrity are assured. They value privacy over all else, to the point of detriment to users and organizations who would otherwise benefit from such capability.
However, it's not that hard to avoid replay after the cache expires. HTTP sends the Date of the response along with Cache-Control instructions. If the headers are also signed, they can be verified by a client too. If the client sees that the response has clearly expired, it can discard the document. As a dirtier hack, it can also retry with a new unique query string, or provide a token in an HTTP header which must be returned in the response.
By the way, signing is not equal to "null encryption". Signing can be done in advance, once. Signed data can be served via sendfile(). It does not incur CPU overhead on each request. Signing does not require communicating with untrusted parties using vulnerable SSL libraries (which can compromise your entire server).
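A minimal sketch of that sign-once, verify-anywhere model with openssl (key and file names are made up):

  # at the origin, once: a detached signature over the static page
  openssl dgst -sha256 -sign origin.key -out page.html.sig page.html

  # at any client that holds the origin's public key, even if the page arrived over plain HTTP
  openssl dgst -sha256 -verify origin.pub -signature page.html.sig page.html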
As we speak, your SSL connection may be tampered with. Someone may be using a Heartbleed-like vulnerability in the server or your browser (or both). You won't know about this, because you aren't personally auditing the binary data that goes in and out on the wire… Humorously enough, one needs to actively MITM and record connections to audit them. Plaintext data is easier to audit and reason about.
Literally in the time you've spent thinking about and composing your reply you could have implemented free, secure TLS for your users.
Are you name dropping wosign just to be obtuse? They were untrusted because they were untrustworthy, not because Google just doesn't like them. https://www.schrauger.com/the-story-of-how-wosign-gave-me-an...
It just coincidentally happens that the US controls 100% of root CAs and Kazakhstan (most likely) controls 0%. So the latter needs more audacious measures, while the former can just issue a gag order to Symantec (or whoever is currently active in the market).
The CA system is inherently vulnerable to government intervention. There is no point in considering defense against state agents in the HTTPS threat model. It is busted by default.
Marking non-HTTPS sites as non-secure is a result of the network having proven itself to be unreliable. This is both the Snowden revelations and the cases of ISPs trying to snoop.
Besides, HTTPS isn't hard to get. Worst case means you install nginx, Apache, or the like to reverse proxy and add in TLS. Things got even simpler when Let's Encrypt came along. Anyone can get a trusted cert these days.
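That reverse-proxy approach is a handful of lines of nginx config. A sketch (paths assume a Let's Encrypt cert for example.com; the backend is whatever you're wrapping):

  server {
      listen 443 ssl;
      server_name example.com;
      ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
      location / {
          proxy_pass http://127.0.0.1:8080;  # plaintext stays on localhost
      }
  }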
It isn't your threat model that is important here. It is the users' threat models. Maybe you have full control of that too (the simplest case where that would be true is if you are your only user) but most sites aren't.
You will see the same sort of anger at e.g. parents who refuse to get their kids vaccinated (they're my kids, they say; Big Pharma can't make decisions for me, if you want to get your kids vaccinated, that's fine but there's a cost-benefit analysis, I just don't want it forced down my throat). It would be incorrect to conclude that the angry people are the wrong people.
Speaking as someone who's maintained a lightweight presence on the Web for over 20 years, I've thought about the tradeoff and I think it is worth it. Our collective original thinking about protocols skipped security and we've been suffering ever since. I was sitting in the NOC at a major ISP when Canter and Siegel spammed Usenet. Ow. Insecure email has cost the world insane amounts of money in the form of spam. Etc., etc., etc.
You and I probably disagree on the cost/benefit analysis here, which is OK. It'd be helpful in discussion if advocates on both sides refrain from assuming zealotry on the other side.
That machinery has a cost. With every barrier we throw up on the web, it makes it harder to build a reliable site. I also realize this is an argument I've lost. It's so much easier to just say "HTTPS everywhere" than to examine the tradeoffs.
This touches on the real point of all this, which doesn't seem to have been contained in any replies to you.
There's no real choice in the matter. HTTPS is a requirement if, and that's the very big if right there, we truly acknowledge that the network is hostile. With a hostile network the only option is to distrust all non-secure communication.
https isn't about securing the site as you know, it's about securing the transmission of data over the transport layer, and it's needed because the network is hostile.
It doesn't matter one little iota what the data is that's traversing it, as there's no way to determine its importance ahead of time. A resume site might not be of much worth to the creator, but the ecosystem as a whole ends up having to distrust it without a secure transport layer because the hostile network could have altered it.
It doesn't matter that the effect of that alteration might be inconsequential, as there's also no way to determine that effect ahead of time. The ecosystem's 'defense' is to distrust it entirely.
And that's the situation the browsers/users/all of us are left with. There is no option but to distrust non-secured communication if the network is hostile.
Even places like dreamhost give you a letsencrypt cert for free on any domain.
There is no case to be made for not securing your site, on principle or based on what's already happening out in the world, with shady providers injecting code into non-secure HTTP connections.
You see it as "a simple resume site," and I see it as a conduit for malicious providers to inject malicious code. Good on the browser folks for pushing back on you.
The warning used to be the absence of a padlock, but who notices that?
In any case, (a small subset of) the random enthusiast sites and such are close to the only reason I use a browser recreationally anymore. I absolutely agree with you.
The answer isn't to stop fixing things. The answer is to make it easier and cheaper to be secure.
Kinda like what LE is doing, no?
Your failure to grasp this is fairly evident from the rest of your comment.
It sucks badly. I'd prefer a less hostile network myself. Even back then there were bad actors but at least you could somewhat count on well-meaning network operators and ISPs. Nowadays it's ISPs themselves that forge DNS replies and willfully corrupt your plaintext traffic to inject garbage ads and tracking crap into it. And whole nation states that do the same but for censoring instead of ad delivery.
Can you explain why you think Symantec demonstrating incompetence is completely isolated from your Symantec SSL protected website?
I sense a lot of hostility coming from you. It seems like you think we do these things for fun. Do you imagine a bunch of grumpy men get together, drink beer, and pick a new SSL provider to harass and bully?
Thin-skinned immature response aside, what you lack is empathy. You can't possibly understand why someone would take my stance. It's your lack of empathy I despise. Look at the replies to my comment - people are losing their minds. It's this rabid dogma that frightens me, and frankly makes me hate the infosec community in general.
To you, security is the only thing that matters. I don't know where the moral righteousness comes from, but someday you'll realize that your black and white viewpoint should have been more nuanced.
Edit: The profane quote above was edited out by the parent, but clearly there is some animosity and instability there.
Oh, I get it. I've worked with lots of people like you.
As an infosec practitioner, I'm the one that cleans up after the people who claim good current infosec practices are "too hard" or "impractical" or "not cost-effective", which all boil down to sysadmins and developers like you creating negative externalities for people like me. I have heard all of these arguments before. "Oh, we can't risk patching our servers because something might break." "Oh, the millisecond overhead of TLS connection setup is too long and might drive users away." "Oh, this public-facing service doesn't do anything important, so it's no big deal if it gets hacked."
I'm not at all sorry that the wider IT community has raised the standards for good (not best, just good) current infosec practices. If you're going to put stuff out there, for God's sake maintain it, especially if it's public-facing. If using the right HTTPS config is that difficult for you, move your stuff behind CloudFront or Cloudflare or something and let them deal with it. If you can't be bothered with some minimal standard of care, you need to exit the IT market.
And good luck finding a job in any industry, in any market, where anyone will think that doing less than the minimal standard, or never improving those minimums, is OK.
My goodness, you just nailed it.
The IT job market is so tight that complete incompetence is still rewarded. Incompetence and negligence that would get you fired immediately or even prosecuted in many if not most other professions.
If restaurant employees treated food safety the way most developers treat code safety, anyone who dined out would run about a 5-10% chance of a hospital visit per trip.
I was just arguing with a “senior developer” who left a wide open SQL injection in an app. “But it will only ever be behind the firewall, it’s not worth fixing.”
That’s like a chef saying “I know it’s old fish but we’ll only serve it to people with strong stomachs, I promise”.
To your parent comment -
No, I don't think it's a cabal of "grumpy old men" - I think it's a cabal of morally righteous security-minded people who have never worked for small companies, or don't realize that most dev teams don't have the time to deal with all this forced churn.
You care about security, I care about making valuable software. Security can be a roadblock to releasing valuable software on time and within budget. If my software doesn't transmit sensitive data, I surely do not want to pay the SSL tax if I'm on a deadline and it's cutting into my margins.
Most people who advocate for security, including myself, have worked on small teams and understand the resources involved. Putting a TLS certificate on your shit with LE takes minutes. Doing it through another CA is minutes, in a lot of cases. You spent more time downloading, installing, and configuring Apache, then configuring whatever backend you want to run, and writing your product or blog post or whatever it is you’re complaining about securing.
Honestly, in the time you’ve been commenting here, you could have gotten TLS working on several sites. Managing TLS for an operations person is like knowing git for a software developer. It’s a basic skill and is not difficult. If it’s truly that difficult for your team, (a) God help you when someone hacks you, they probably already have and (b) there are services available that will front you with a TLS certificate in even less time than it takes to install one. Cloudflare and done.
> Security can be a roadblock to releasing valuable software on time and within budget.
Great, you've pinpointed it. Step two is washing it off. Ignoring security directly impacts value, and I'm mystified that you don't see this.
But I guess I'm a zealot ¯\_(ツ)_/¯
If you have one server, yes.
Otherwise it's the other way around, because if you have multiple servers you need to do a lot of fancy stuff.
And LE also does not work in your internal network if you do not have some stuff publicly accessible.
And its validation also does not work against different ports (only the standard ones).
Oh, and it's extremely hard to have a TLS <-> TLS proxy server that talks to TLS backends, useful behind NAT if you only have one IP but multiple services behind multiple domains.
IPv6 fixes a lot of these issues.
I don't understand your last point. Where do you see the problem with letting a reverse proxy talk to a TLS backend?
You get the requested server name from the SNI extension and can use that to multiplex multiple names onto a single IP address. The big bunch of NATty failure cases apply to plaintext HTTP just as well, no?
This means the backend server certificates are only ever exposed to your reverse proxy. There's no need to use publicly-trusted certificates for that. Just generate your own ones and make them known to the proxy (either by private CA cert or by explicitly trusting the public keys).
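Something like this, as a sketch (names are hypothetical):

  # on the backend: a self-signed cert for the internal name
  openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
    -keyout backend.key -out backend.crt -subj "/CN=app1.internal"

  # on the proxy (nginx directives): trust exactly that cert for upstream connections
  #   proxy_pass https://app1.internal;
  #   proxy_ssl_trusted_certificate /etc/nginx/backend.crt;
  #   proxy_ssl_verify on;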
If you need lots of different domains, use one of the auto certificate tools.
If you can't use one of those yourself, consider hosting on a platform that can automatically do this for you for all your sites, like cPanel (disclaimer: I work for cPanel, Inc).
If your stuff is never publicly accessible because you're in a fully private network, just run your own CA and add it to the trust root of your clients.
If you need an SNI proxy, search for 'sniproxy' which does exist.
If you're so small that you can't afford an infrastructure person, a consultant, or a few hours to set such things up yourself, then maybe you should shorten the HN thread bemoaning doing it and use the time to learn how.
Funny you mention this.
With this new functionality, I can register valid certs for any domain in the world if their DNS is insecure, or if I can spoof it.
Have we gotten any headway yet on that whole "anyone can hijack BGP from a mom and pop ISP" thing ?
How many CAs are still trusted by browsers, again? How many of those run in countries run by dictators?
HTTPS doesn't secure the Internet. It's security theater for e-commerce.
This is just one anecdote, but I worked at a company small enough that I was the only developer/ops person. Time spent managing HTTPS infrastructure couldn't have been more than a handful of hours a year.
What is so painful to you about running your website(s) on HTTPS?
It may be easier to be more empathetic.
It's not that ominous. It's not even red!
I think it's pretty obvious to most users that "Insecure" doesn't matter as much on some random blog, but does matter a lot on something that looks like a bank or a store.
SSL has a history of being a pain in the ass. There are a lot of pain in the ass implementations out there. Everyone gets that.
At the same time, it's never been easier, and basic care for what you're serving your users demands taking that extra step. What Google is doing amounts to disclosing something that's an absolute fact. Plain HTTP is insecure (in the most objective and unarguable way possible), and it is unsuitable for most traffic given the hostile nature of the modern web.
Do you want your users to be intercepted, social-engineered, or served malware? If the answer is no, secure it. The equation is that simple. Any person or group of people who in 2018 declines to secure their traffic is answering that question in the affirmative and should be treated accordingly!
That's not "zealotry" friend, that's infosec 101.
So in a way, you're right. I'm not sure why that's a negative.
Your software does not work if it is not secure. Security is a correctness problem.
Yes, sometimes it's a pain to solve TLS-based errors, and I also miss the opportunity to debug each transmitted packet with tcpdump, but I also appreciate that the continuous focus on TLS improves the tooling and libraries, and each day it gets a little bit easier to set up a secure encrypted connection.
Do they keep their servers up to date? Why is it so much easier to do that than getting an SSL cert four times a year?
I hope they update their servers more often than that.
I.e., let's say your internal network DNS domain is 'my-company-lan.com' - all you have to do is ensure that 'my-company-lan.com' is also registered in public DNS (but not actually used for any public-facing services), and then you can secure ALL your internal services using a free LE wildcard cert that's automatically trusted by all platforms and browsers. For some companies that's going to be a BIG cost and resource saving.
It's at this point that I swear profusely at Microsoft yet again, for pushing the concept of '.local' domain suffixes a decade ago. As it's not a legal TLD, I can't get certs for any of my internal services without rolling my own internal CA, which only works automatically for Windows domain machines, and not for anything else.
"One cert to rule them all, and in the darkness 'bind' them."
For the wildcard certs, you just need to add a TXT record to the public DNS entry, no public web server required.
Even if you have no intention of using your internal DNS domain name on the internet, it's good practice to register it anyway.
The problem here is that there's no such thing as domain ownership, only domain renting. You forget to pay your bill (read: someone loses an email) and a core part of your infrastructure is up in smoke, or worse, taken over by a squatter.
I don't think there's a way around coming up with a reliable process for renewing your domain. You somehow manage to do it for lots of other things already.
See e.g. point 4:
Most clients that support DNS-01 can use nsupdate or APIs of public DNS providers to make this an automated process.
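E.g. with nsupdate and a TSIG key, publishing the DNS-01 challenge record is just (key path and token are placeholders):

  nsupdate -k /etc/acme/tsig.key <<'EOF'
  server ns1.example.com
  update add _acme-challenge.example.com 120 TXT "<challenge-token>"
  send
  EOF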
1. Use acme.sh: https://github.com/Neilpang/acme.sh
acme.sh --issue -d '*.example.com' --dns
I have been toying a little with wildcards using certbot on my Ubuntu OpenVPN appliance, but have been a bit unsuccessful so far.
Maybe I should just try and build a very tiny virtual server that does nothing but spit out a wildcard domain certificate to some predefined destinations, to have it used by anything that wants a certificate. Could be beneficial to a (large) infrastructure to have an always-ready certificate to use for free. Dunno if EV validation will hold up, though.
I think acme.sh is the easiest to use of all the clients.
I've moved my DNS to Cloudflare, and after that acme.sh was incredibly easy to implement thanks to their API implementations.
Also learned a valuable lesson: *.provider.com is not the same as provider.com :)
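For reference, the fully automated version with acme.sh's Cloudflare hook looks roughly like this (credentials are placeholders; note that both the apex and the wildcard are requested, per the lesson above):

  export CF_Email="you@example.com"
  export CF_Key="<cloudflare-global-api-key>"
  ./acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'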
./acme.sh --issue -d noty.im -d '*.noty.im' --dns
It then told me to add a TXT record, which I just do manually because I use Rackspace Cloud DNS, which has no built-in support.
I manually verify the DNS with dig, and when it's ready I just do:
./acme.sh --renew -d noty.im -d '*.noty.im'
Then the certs (private key and full chain) are stored in ~/.acme.sh/noty.im/.
The private key and full chain can be used directly with nginx without any modification.
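If you'd rather not point nginx into acme.sh's home directory, acme.sh also has an install mode that copies the files where you want and reloads nginx on each renewal, roughly:

  ./acme.sh --install-cert -d noty.im \
    --key-file       /etc/nginx/ssl/noty.im.key \
    --fullchain-file /etc/nginx/ssl/noty.im.pem \
    --reloadcmd      "systemctl reload nginx"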
Also the EFF:
I really hope so.
The cost to providers is exactly the same for a wildcard and a standard certificate, and yet wildcards cost hundreds of dollars. It's unbelievable it's lasted this long.
Yes, there are obviously business costs, and they have to employ people to do verification, etc. (which they often do a terrible job at), but I think you see what the parent is getting at.
This is great news!
Even in the case that it is compromised and you know it, your only option is certificate revocation. And you are in big trouble if you are relying on revocation because most clients do not keep very up to date with the CRL.
Not only for security, but the 90 days is to encourage automation. And most clients like certbot will check everyday, and if the cert is within 30 days of renewal, it attempts to renew. If letsencrypt is down, it will try again the next day. So you have an entire month before an outage would affect you.
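The whole renewal loop can be as small as one cron line (most distro packages ship an equivalent systemd timer for you):

  # twice a day; only certs within 30 days of expiry are actually renewed
  0 */12 * * * certbot renew --quiet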
No way. Every time I've worked with an organization with three years expiry it's guaranteed they have no idea, after three years how to even renew the cert. They are effectively longer in many cases than the hiring cycle and for larger organizations can be a complete nightmare. No one wants to invest in time in automation, training, tracking, etc., because it's so far down the road. The 90 day model makes much more sense because it requires automation. In terms of the ACME endpoints being down, I'm not going to say that won't happen but renewal starts 30 days before the cert expires and if Let's Encrypt's ACME endpoints are down for 30 days or longer there's a good chance we are all dealing with something far more dire than cert renewal at that point.
I have my own domain name servers, so it wasn't hard to wire up DNS-01 support.
Anyway, the client has been running daily out of a cron job, updating certs on remote servers as they need to be, with very little intervention from me, for well over a year now. It's just about a set-it-and-forget-it setup.
Let's Encrypt is intended to be fully automated and you shouldn't have to faff about with it every quarter, it should do its thing all by itself.
...most of the time.
1. Ubuntu VPS #1:
a. dovecot ssl
b. postfix ssl
c. apache multiple virtual domains ssl
d. pureftpd ssl
2. Ubuntu VPS #2:
a. apache multiple virtual domains ssl
3. Microsoft Server
a. IIS multiple virtual domains ssl
With Let's Encrypt, you don't need to minimize the number of certs just to save some money.
We've got a number of open PRs as well to add other resources, e.g., load balancing, rate limiting, zone settings, etc. HashiCorp is currently reviewing/merging.
Warning: I have made services inaccessible by deploying before making sure the git repo I was working from was the latest version. That's the downside of stateless deployments!
We've all been there!
* Those with 1 or 2 domains.
* Those with 10-40 domains.
I suspect lowering the price(s) on a volume-scale would allow me to find customers with 40+ domains, but at the same time I'm happy where I am and seem to have a reasonable niche.
I don't want my webserver to have the ability to change my entire zonefile just so it can authorise certificates!
You can generate them on a secure host (or container) which pushes the certs to the machines which needs them.
And Heroku already supports wildcard certs (that you need to provide yourself) if you use the SSL addon.
My biggest peeve with the whole "HTTPS Everywhere" push is not the general notion of using encryption, but that the encryption is annoyingly coupled with the CA system, which is terrible for many reasons.
CAs seem like a system that really doesn't work today, we've seen multiple times that many of these CAs aren't worth delegating trust to to begin with, and it causes an unnecessary cost and burden upon just... encrypting traffic.
So you’re sitting in a cafe, and you go to Facebook.com. Lo and behold, someone’s installed a MITM proxy on the router, that presents its own encryption key instead of Facebook’s, and your browser has no way to tell this because the CA system isn’t a thing. They now have your password, can steal your session to spam your friends, whatever else. How do you prevent that?
Given the exploitability, laziness, general failure to follow best practices, not to mention misaligned incentives that we're seeing from major CA vendors, having centralized CAs seems like an ever-worsening solution.
In other words, if someone claiming to be Facebook has told a significant number of people all over the world that Facebook's cert fingerprint is ABCD124, and that fingerprint matches what they're getting presented, it's probably legitimate. We can add additional points for the cert signer being the same one as the previous cert, lack of listing in a CRL, cert transparency logs, etc.
There's no reason this system couldn't bolt on top of the existing CA infrastructure to avoid a bootstrapping problem either.
It adds a probability value into the mix, in other words. That value has always existed, but now we expose it to the user in some way and stop pretending that it does not.
I can use HPKP to pin the cert I get from Lets Encrypt; a cert issued for my domain some other way won’t be trusted due to the hash of its public key being different from the one I pinned.
The Public Key Pinning Extension for HTML5 (HPKP) is a security feature that tells a web client to associate a specific cryptographic public key with a certain web server to decrease the risk of MITM attacks with forged certificates.
HPKP makes administration more complicated but if your threat model includes state-level actors, it prevents them from getting a CA to issue a valid certificate for your domain.
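For context, an HPKP policy is just a response header carrying hashes of the pinned keys, including a backup pin (values here are dummies):

  Public-Key-Pins: pin-sha256="<hash-of-current-key>"; pin-sha256="<hash-of-backup-key>"; max-age=5184000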
Certificate Authority Authorization (CAA) has been mandatory for CAs since September 2017; it uses DNS to specify which CAs are allowed to issue certificates for your domain: https://blog.qualys.com/ssllabs/2017/03/13/caa-mandated-by-c....
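A CAA policy is just a couple of zone records, e.g.:

  example.com.  IN  CAA 0 issue "letsencrypt.org"
  example.com.  IN  CAA 0 iodef "mailto:security@example.com"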
Are you sure that all "old school" CAs wouldn't issue a cert for that?
They were never supposed to fight phishing. Domain Validation certificates literally validate… domains, and nothing more.
It would make more sense to prevent googIe.com from existing at the .com registry level, before any TLS is involved.
single point of failure as in, getting hacked and misissuing certificates?
It's a concern whenever a large portion of decentralized infrastructure has a single centralized dependency. Even if that dependency is awesome and doing great work right now.
Ideally, there would be several free CAs that all used the ACME protocol. But somebody's got to pay for that and somebody's got to go through the effort of setting it up when Let's Encrypt already works really well.
It would be nice if they simply offered two choices:
1. I love automation! Give me a 90 day certificate.
2. I understand the security trade-offs. Give me a 3 year certificate.
For future reference - the BRs have a section with a timeline, it's great for finding upcoming or recent changes significant enough that the CAs needed a deadline.
They also pretend that compromising a 3-month certificate is "ok" (or at least less harmful than compromising a year-long certificate), when in practice there is no reason to assume so; 3 months is more than enough for any real-life eavesdropper.
Firstly, CA/B explicitly can't talk about pricing or product offerings, because a group of businesses that collaborate on setting prices or product offerings is called a Cartel and is illegal (the example you're probably thinking of, OPEC, exists because its members are sovereign entities, and thus enjoy total immunity from the law). When they meet in person the CA/B members always begin by reading out the rules that lay out what mustn't be discussed for this reason.
Secondly, the idea is not at all that compromising 3-month certs is "ok". Instead Ryan's focus is on the pace of change. During 2016 CAs agreed to use the Ten Blessed Methods for validation, in 2017 that agreement became a concrete rule (thanks to Mozilla) but a 39 month certificate issued under the prior validation status quo would still be trusted until mid-2020.
Historically what has happened is that there's a grace period, and then CAs are supposed to go back and revoke any certificates still outstanding that break the new rules. But this is error-prone: back in early 2017 you can see the list of violations I found while checking that certificates for now-prohibited "internal" names were revoked as required. Each CA had excuses for why they'd missed some, but the overall lesson is that things will be missed. So Ryan doesn't want to rely on grace periods; he wants a shorter window of validity for the certificates.
MD5 and SHA-1 is the go-to example for this stuff. We expect already that SHA-2 (e.g. SHA-256 used currently in certificates) will fall the same way as the others, because it's the same construction, so we're going to be doing this again in perhaps 5-10 years. But with 39 month certificates the _minimum_ time from changing the rules to getting rid of the problem is 39 months, if it takes a few months to agree what to do, the total may be closer to 4 years. That's a very long time in cryptographic research, too long to predict what's coming. 90 days would be much better from this perspective.
SSL requires one click with Netlify, and it's on by default with Now.
Key compromise for a single site is much less disruptive than losing control of a key that protects hundreds or thousands of sites. Generally you want to keep your scope smaller, it's safer. Rather than blanket-verify everything. Wildcards also make it more difficult for you to see which of your names is going through CT logs.
Caddy will support wildcard certificates, but most users will not need them, because already Caddy can obtain certificates "on demand" - dynamically, during the TLS handshake. Again, the main reason for using wildcards at this point would be to reduce pressure against LE rate limits.
btw, in that scenario, even if the sites all share an IP address, you can use a TCP-level proxy that supports doing the TLS SNI exchange to determine where to send the connection on, so the proxy doesn't need any of the keys and the encryption is end-to-end.
So I guess, make sure you trust your DNS provider if you're using wildcards. Or is there another exploit I'm missing?
Of course such access may be easier for a disgruntled internal actor so it is a risk worth considering (and mitigating via proper separation of concerns/access).
Any DNS-based validation is contingent on full DNS control, and that does mean FULL. CNAME records are absolute, if I CNAME foo to xyz then I'm trusting xyz 100%. I won't get an email round-trip or CAA ping for the certificate unless I'm looking for it, because CNAME implies that all things that apply to xyz apply to anything pointed at it. So the CAA record for xyz applies, not the CAA record for foo - it's not even valid to have any other record types for the same name as a CNAME record, and CAA resolution stops if it gets a valid response versus walking up to the domain root.
To be clear: CloudFlare issued a perfectly valid certificate for a perfectly valid use case, it just bothers me that I couldn't tell it was issued until after-the-fact by seeing it in CT logs, and couldn't have prevented it from being issued by the mechanisms that seem to be built for that.
LE is all about DV certs -- you just need to control the web server at secure-payments.yourbusiness.com, and with DNS control you can aim secure-payments.yourbusiness.com anywhere
Specifying and implementing ACMEv2 took a while, that was a lot of work. Adding wildcard support on top of that wasn't trivial but it wasn't nearly as much work.
Trying to figure out how to get Route53 to stop the wildcard at the top level or get a wildcard cert that will go down the path.
Why would you say such a thing?
Tens of millions of certificates have been provisioned using the ACME protocol without issue.
There’s a lot of cryptography behind the scenes that leads up to the TXT record being generated. You can read all about it at https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-...
If you don't verify who controls the IP space, then if you can control the DNS, you can generate certs. Certs that appear valid to unsuspecting users.
Putting that kind of trust in DNS is pretty crazy considering how insecure most DNS setups are. Not to mention general attacks on DNS. There's even a potential chicken and egg problem, if you need DNS to secure your HTTPS, but you use HTTPS to manage your DNS.
What's really crazy too is it seems like this can't even be avoided. Even if I'm not using Let's Encrypt, if someone owns my DNS, they can use Let's Encrypt to get valid certs for my domain. That's insane.
What am I missing here?
If I "own your DNS" wouldn't I just change them all to an IP space I control anyway? (If that was a requirement).
Unless I'm missing something, requiring "owning the IP space" seems to be an impossible requirement to fulfil. I'm on a virtual host in Azure/AWS/Linode I have no way of proving I own the IP (because I don't).
As noted above, DNS forgery attacks against the ACME server can result in the server making incorrect decisions about domain control and thus mis-issuing certificates. Servers SHOULD perform DNS queries over TCP, which provides better resistance to some forgery attacks than DNS over UDP.
An ACME-based CA will often need to make DNS queries, e.g., to validate control of DNS names. Because the security of such validations ultimately depends on the authenticity of DNS data, every possible precaution should be taken to secure DNS queries done by the CA. It is therefore RECOMMENDED that ACME-based CAs make all DNS queries via DNSSEC-validating stub or recursive resolvers. This provides additional protection to domains which choose to make use of DNSSEC.
An ACME-based CA must only use a resolver if it trusts the resolver and every component of the network route by which it is accessed. It is therefore RECOMMENDED that ACME-based CAs operate their own DNSSEC-validating resolvers within their trusted network and use these resolvers for both CAA record lookups and all record lookups in furtherance of a challenge scheme (A, AAAA, TXT, etc.).
If your situation really is that you have unsecured DNS that routinely gets hacked and you just sort of muddle along somehow, with users frequently getting phished, malware downloads, and so on, well, I guess Let's Encrypt doesn't magically solve the trouble you've stepped in.
At the very least, a public key in WHOIS should be required to generate certs. Why in the world isn't this being done? And is there some way Let's Encrypt can start checking for this (to not issue invalid certs for domains that do list a public key), so maybe the above insanity can be stemmed?
I would welcome an authenticated channel to domain registrars, and I would welcome making checking it mandatory for CAs. I think the lack of this is an unfortunate gap, although I don't think we've seen the epidemic of misissuance that you've worried about.
In order to make this happen, it would probably require some coordination between ICANN and the CA/Browser Forum. You can become an Interested Party at the CA/Browser Forum yourself in order to propose this kind of mechanism, or you can find an existing Member or Interested Party to bring it up.
I already participate as an Interested Party and I could bring it up eventually but I'm currently working on certificates for Tor onion services, and I'd rather get that finished before taking on something else.
There may have been previous discussions of this idea in some forum, but I don't know for sure where.
By the way, DNSSEC plus DNS CAA can already allow a domain registrant to use cryptographic means to forbid issuance by unauthorized CAs, and checking this is already mandatory for CAs.
This can achieve some of what you want on the negative side, but it's presumably not all the way there.
From Wikipedia: "As of February 2018, Qualys reports that 2.9% of the 150,000 most popular websites use CAA records."
So, about 97% of the most popular websites are currently vulnerable to having valid domain certs generated for their domains if their DNS is compromised, or if the CA doesn't strongly validate DNS responses.
And domain registrants and site operators are extremely heterogeneous in ways that could make cert issuance extremely difficult if we made applicants do something new and manual, especially in the offline world.
On the other hand, I've also written skeptical articles about PKI and worried about the fragility of Internet security. Your concerns aren't misplaced, in that a lot of the stuff we rely on is super-fragile.
But in many ways, it's been getting better over time as CAs' power has been getting more and more circumscribed by new rules and technical mechanisms. We have Baseline Requirements amendments that give CAs less discretion in their operations and require more transparency from them. We have CT, we have CAA, we have must-staple, we have databases that researchers can use to find problems. (For a while we also had HPKP.)
So I'd urge you to take your passion about this issue and work on some more security mechanisms to improve the infrastructure, because there's lots more that can be done.
Also, if you come up with good new deployable mechanisms, Ryan Sleevi will be glad to help you make them mandatory for CAs. :-)
I don't think anybody would want to implement the kinds of technology and solutions I would provide, because every time I bring them up (in forums like this one, and others), people either ignore them or argue against them, and I have no interest in pushing large boulders up hills.
But I would like to thank you for your work. I appreciate that you all are trying to make things better.
Yes, it's inconvenient, and people will still get hacked, but it's also getting easier to do, it _does_ help, and as Snowden showed, encryption really does help deter governments from spying.
I think it's currently unfair to https websites that non-https websites aren't considered insecure.
A cloud provider/datacenter controls the IP space in the majority of cases.
If someone owns your DNS, they can use almost any CA to get valid certs for your domain (apart from EV only ones). That's kind of how Domain Certs work. "A domain-validated certificate (DV) is an X.509 digital certificate typically used for Transport Layer Security (TLS) where the identity of the applicant has been validated by proving some control over a DNS domain"
If this is something that worries you, then you should only trust sites that have Extended Validation certificates.
Every user of the web in the world would have to do this in order to avoid this vulnerability.
But to be honest if your DNS service is compromised, you have bigger problems.
The whole point of TLS is to provide privacy, integrity, and authority. Having a secured connection is pointless if it's a connection to an attacker. Oh, great, nobody can spy on my connection to the NSA.
All DNS is supposed to do is to point you at the server to get your connection from. It isn't supposed to mediate the security of the connection.
This is back to the old chicken-and-egg of public key crypto: public key connections are secure, as long as you provide the initial host key in a secure, out-of-bound method. If an attacker can circumvent this and inject their own initial host key, they can MITM. Hence why PKI exists: to prevent an outside service from defeating the security of the connection. But apparently, DNS can compromise the connection, by allowing attackers to just generate certs willy-nilly if they can guess your GoDaddy account password.
This. A long thread mostly advocating HTTPS everywhere and only one mention of this. Would any of the knowledgeable HTTPS advocates here care to comment on it?