NSA Recommends How Enterprises Can Securely Adopt Encrypted DNS (nsa.gov)
dylan604 1188 days ago [-]
From under my tinfoil hat, I have to wonder why we would listen to any recommendations from the NSA? Why would we not believe they are going to make recommendations of methods they know how to exploit?

Taking off my tinfoil hat, I understand that one of the purposes of the NSA is to keep US information safe. Following their recommendations should make your data safer.

However, Snowden showed us that the NSA doesn't always follow the rules it is supposed to operate within. Does that mean they are always suspect? How do we decide when their recos are for the good of all?

xxpor 1188 days ago [-]
This is one of the arguments to separate USCYBERCOM and NSA.

IMO, the US should really strictly separate the offensive/SIGINT and defensive aspects of NSA into separate agencies, and move the defensive part out of the intelligence community. Make it part of NIST or something. It would restore at least a little credibility to documents like this.

ISL 1187 days ago [-]
The tricky bit is that offense and defense actually require the same knowledge. A 0-day is immediately applicable for offense and must be immediately patched for defense.
joe_the_user 1187 days ago [-]
A 0-day is immediately applicable for offense and must be immediately patched for defense.

This outlines why there's a problem with a single agency tasked with offense and defense. If you had an agency tasked only with the defense of US infrastructure, it would have a clear mandate to patch zero days. Any "offensive" agency would have to deal with it doing this (just as such offensive agencies deal with zero days being patched by others).

It seems entirely reasonable to prioritize protecting US infrastructure over the possibility of a spy agency gleaning another nation's secrets.

xxpor 1187 days ago [-]
Legal mechanisms for one way doors seem possible. The offensive agency can communicate info about 0 days to the defensive one, but not the other way around.
jgerrish 1185 days ago [-]
Except bureaucracies are going to do what they do. The offensive agency will hold back information, and when an attack occurs on the defensive agency, calls for bringing the defensive agency under the umbrella of the offensive agency will be made.

Brats fighting over resources while Mom just wants to sleep. It's not their fault, it's the nature of political organizations.

Sure, you could fix it with centralized oversight and stringent information sharing rules. But eventually you're either strangling them with rules or building one conceptual agency.

darksaints 1187 days ago [-]
Not necessarily. In fact, depending on how they're structured, they can be quite complementary. Red team strategies thrive off of this sort of competitive environment.
astrea 1187 days ago [-]
And for efficiency's sake they might even roll them into the same org, to remove red tape and logistical complications.
VWWHFSfQ 1188 days ago [-]
The NSA has a long history [0](PDF) of providing guidance for "best practices" for securing USA corporate networks. It's part of their job.

[0](PDF) https://apps.nsa.gov/iaarchive/customcf/openAttachment.cfm?F...

dylan604 1188 days ago [-]
Yes, I stipulated that in my post. However, we have seen how the NSA wants data from US corporations. If they make a "recommendation" for corps to use, then how do we know it is solely for the corporation's best interests rather than the NSA's own interest in more easily accessing the corp's data while providing a false sense of security?

We do know that the NSA pushed for a particular random number generator (Dual_EC_DRBG) to become a default in RSA's products because they already knew how to break it. Once something like that becomes known in the wild, true motives will be suspect at best from then on.

29083011397778 1188 days ago [-]
I feel it wouldn't be completely unjustified to trust NSA recommendations simply because America isn't the only country in the world with intelligence agencies. The NSA is commonly assumed to be offensive, but part of their job is to aid in defence as well. Ensuring (malicious) foreign actors have trouble gaining information is a National interest just as much as the NSA themselves gaining access.
Daho0n 1187 days ago [-]
Only if it doesn't make their own work any harder. The NSA prefers weak security for everyone if securing US assets might make others more secure too:

>NIST failed to exercise independent judgment but instead deferred extensively to NSA. After DUAL_EC was proposed, two major red flags emerged. Either one should have caused NIST to remove DUAL_EC from the standard, but in both cases NIST deferred to NSA requests to keep DUAL_EC.

https://www.nist.gov/system/files/documents/2017/05/09/VCAT-... [PDF warning]

AndrewKemendo 1188 days ago [-]
I mean this question sincerely, as I'm a US Government employee currently and former Intelligence Officer:

What is it going to take to restore, or maybe even just establish, trust in US Government institutions by people in the US?

dsr_ 1187 days ago [-]
Prosecution of government employees and officeholders. You don't gain trust without a reason to trust you. There is no magic reset button. After COINTELPRO and Iran/Contra and Snowden, the USA Intel establishment has a solid record of attacking their own citizens and behaving as though laws don't apply to them and there is no oversight. Valerie Plame?

And then there's the last five years, starting with the Clinton email server. The Intel community can't prevent that, and then the FBI goes political at the end of election season?

There is no trust of intelligence agencies because they have proven they deserve none.

uncletammy 1187 days ago [-]
I logged in just to upvote you. This! 100%
dylan604 1187 days ago [-]
Your successes are secret, your failures are known. That's your industry's catch phrase.

Here's a list of things I can think of:

1. If you're the CIA/NSA, stop spying on American citizens.

2. See #1

3. Stop NSLs. If you want data, get a valid warrant visible to the public. Allow companies to inform their users/customers that their entire platform is vulnerable.

4. Subject yourselves to non-government oversight.

5. Stop hoarding 0-days, and actively work with vendors to fix vulns.

Don't know how to fix it, but until you're no longer in the news for screwing up, this is where we are. Your internal documents show that all of that data slurping has not led to significant positive results. Why spend the money on it then? Why erode the trust that you want? You don't want to tip off the adversary, but your own citizens' rights are much more valuable than what little information you are getting.

dylan604 1187 days ago [-]
Want to add one more thing:

6. Whenever coming up with a new spying/data collection scheme, ask yourself "in the light of public outcry in the past, how would this new thing be perceived?" Be honest about it, and not hand wavy "the public will be okay as long as we catch bad guys".

SkyMarshal 1187 days ago [-]
In addition to what others have already said, actually preventing major breaches like the recent SolarWinds hack. E.g., besides the tinfoil hat stuff, just getting the basics right.

Personally I'd like to see the entire US govt IT infrastructure rebuilt around DARPA's HACMS project [1]. Get rid of Windows, base it all on seL4 or similar, rebuild the apps (everything in userspace), etc. That would significantly reduce the attack surface. Huge project obviously, but one can wish.

[1]:https://www.darpa.mil/program/high-assurance-cyber-military-...

ampdepolymerase 1187 days ago [-]
You probably don't need HACMS. Microsoft Research has produced quite a lot of software and a number of compilers for producing provably secure low-level code and drivers.
SkyMarshal 1187 days ago [-]
When are we getting that in Windows? It needs to start with a formally verified OS (and firmware), then build a provably-correct protocol/driver/app/etc stack on top.

MS was working on Singularity [1] a while back, but its webpage speaks of it in past tense, so it doesn't appear to be an ongoing project.

[1]:https://www.microsoft.com/en-us/research/project/singularity...

ampdepolymerase 1187 days ago [-]
Microsoft, and more importantly Microsoft Research, have many, many projects tackling this. Off the top of my head: the P programming language, F*, Dafny, and a bunch of others (I think the last two are part of something called Project Everest). Microsoft Research is one of the biggest players in this field. A lot of their efforts span multiple departments so you will have to search around.
SkyMarshal 1187 days ago [-]
Yup, I'm aware they do a lot of work on this, and hired up half the Haskell and PLT researchers in the world, and have a bunch of projects on it scattered across their web properties.

And maybe some of it is trickling down into parts of Windows or .NET, like F# and others, but I wish they'd develop an entire ecosystem around it and make a stronger push to commercialize it.

With all the aforementioned resources MS has at their disposal, by now they should have their own version of SEL4, comprehensive tooling for secure-by-design and correct-by-construction application development, and optionally secure sandboxed containers for backward compatibility with legacy apps. Though ideally MS funds a Manhattan project for rewriting the most popular apps using their SxD and CxC tooling.

Singularity was on the right track, but they let it languish and die for some reason, when instead they could be rebuilding an entire secure/correct ecosystem with Singularity as the focal point, and commercializing it with enterprise support, and selling it to governments and large enterprises with significant security concerns.

I wonder why this hasn't happened yet.

https://en.wikipedia.org/wiki/Secure_by_design

https://us-cert.cisa.gov/bsi/articles/knowledge/sdlc-process...

ampdepolymerase 1186 days ago [-]
It is hard to iterate and make changes when your codebase relies on human/machine assisted proofs. Great for airplanes and cars when regulatory approval means iteration is slow. Not so great for consumer software.
SkyMarshal 1186 days ago [-]
Good point, though I’d suggest most consumer software, at least office apps and similar needed by the government, are mature enough they don’t need rapid iteration. Most new features being added to them these days are optional at best.
dylan604 1187 days ago [-]
You missed the part about basing everything on seL4. That would mean dropping Windows. I'm all onboard for that.
darksaints 1187 days ago [-]
For odd family reasons I have a lot of acquaintances in SOCOM and intelligence orgs, and I've been around these sorts of people long enough to know that the vast majority of them are well meaning and sincerely eager to do the right thing. But see, that's the problem: they're all employees, and they all report up the chain to an unelected appointee, far removed from any voting-based accountability.

I don't think the trust will ever come back because I don't believe the government has the right incentives in place to restore trust. It only gets worse from here. And certainly nothing I've learned about them in the last 10 years would convince me otherwise.

Daho0n 1187 days ago [-]
Nuremberg established the principle that "just following orders" isn't an excuse. Employer or employee; both are guilty.
darksaints 1187 days ago [-]
Sure they set a precedent for war crimes. Spying on your own citizens? It's not even apparent that that is a crime, just abhorrent.
AshWolfy 1188 days ago [-]
Strict civilian oversight, transparency, protections for whistleblowers, and culling much of the current leadership
Goofdup 1188 days ago [-]
A free Snowden would be a good start
cjfd 1187 days ago [-]
Not just freeing. Give the guy some kind of lofty award and have the next US president give it to him together with a speech explaining how great the things are that he did.
gonzo41 1187 days ago [-]
Actually, I don't think Snowden did that much. He exposed crimes, sure. But they still happen in darkness now. So there's this drifting time element to his story. It's in the past. And the systems that are in place for accountability didn't result in any accountability. Rather things just went dark.

Maybe have a ponder about, if Snowden didn't happen, what would be different about today?

Upside from Snowden, the intel community got better controls around administrator access on their networks.

boomboomsubban 1187 days ago [-]
The point of Snowden's actions was to reveal what our government is doing in an attempt to force them to change it. Immediately branding him a traitor and foreign agent both allows the government to continue the illicit activity and discourages others from revealing damaging material.

By pardoning him and then rewarding him for his actions, it at least signals that government organizations shouldn't engage in similar activities and will lead to other people stepping forward if they do.

gonzo41 1187 days ago [-]
>it at least signals that government organizations shouldn't engage in similar activities.

This is the worst way for the executive to send signals to the organizations it controls.

It also assumes that the alphabet agencies were building these systems without government knowledge. But, everyone knew. They got the money for the project from the government. Sending a signal would be better achieved via legislation. Snowden should have leaked to members of congress.

I strongly doubt Snowden will ever get a pardon. It would be rewarding people for declassifying large government programs based on their feelings about those programs.

boomboomsubban 1187 days ago [-]
>But, everyone knew.

So why would Clapper lie about it to Wyden? Additionally, what good would leaking it to Congress do then?

As a whole, I agree that the pardon alone would be a poor signal. Ideally it would be accompanied by additional changes. You argued that Snowden didn't accomplish much though and part of the reason is the hostile reaction from the government.

>It would be rewarding people declassifying large government programs based on their feelings about those programs

If a program is classified, it shouldn't be in a moral gray area. The public should be aware if we're committing unethical acts.

gonzo41 1187 days ago [-]
I agree with you but I think Snowden could have gone a different route. Probably to the same effect, with less damage to his own life.

Most things that are classified are in the moral grey area.

Craighead 1188 days ago [-]
Snowden, the Russian agent who, under a self-proclaimed psychotic breakdown and after breaking into systems he didn't have authorization for, traded American and private-company secrets for safe harbor in the country with the least morals in terms of cyber security and hybrid warfare capabilities, because he thought he was about to be assassinated?
0xy 1187 days ago [-]
James Clapper is still a free man despite lying to Congress and the American public, so I don't see why anyone should trust a word anyone says from any US intelligence agency.
dylan604 1187 days ago [-]
Well, Clapper did come back for another visit with a statement that could best be summarized as "Oops, my bad". So, to Congress, that's apparently enough. Congress basically has battered spouse syndrome*. They desperately want to believe things will change, but it never does. Instead, time after time, intelligence communities run roughshod over whatever civilian oversight they subject themselves to.

*I absolutely am not trying to belittle abuse by using that phrase.

rapjr9 1187 days ago [-]
It is the nature of trust that it is easily lost and difficult to regain. Basically trust is based on experience over the long term. If there are good experiences for a long time trust grows. Any breach of trust resets the trust clock to zero (or even to a negative value).
Daho0n 1187 days ago [-]
Host all their secrets and vulnerabilities on their site and shut down everything else. Prosecute everyone involved. Follow the Nuremberg principles set by the US itself (i.e. "I followed orders" is not an excuse).
ByteJockey 1187 days ago [-]
A number of years of good behavior.

The number's going to be a bit different for everyone, but as the years go on, trust will be built with a larger and larger portion of the public.

DangitBobby 1187 days ago [-]
Decades of transparency and trustworthiness.
KirillPanov 1187 days ago [-]
Put Edward Snowden in charge of NSA.

I know that sounds ridiculous, but you asked.

Human civilization has a long history of people who were exiled for their principles returning as great leaders. It's a hard-to-fake signal.

nelson_muntz69 1187 days ago [-]
I’m 45 years old, and for essentially my entire life, the US government has been playing the role of villain in popular culture. How do you reverse that? It's deeply embedded in my mind at this point, and has been backed up by reality.

A good start would be a return to representative government. The parties do not, IMO, represent the US citizens well, if at all.

MrStonedOne 1188 days ago [-]
NSA recommendations for the securing of government infrastructure should be listened to by corps, with the normal grain of salt about when and where it is appropriate to apply such measures.

NSA recommendations for the securing of enterprises and corporations should be ignored or avoided. If it was actually secure, they would also recommend it for government use.

dylan604 1188 days ago [-]
I can see the logic in your theory. Further testing would be required for validation though. I would almost expect the public recommendations for corporations to be different from those for a company like a defense contractor.

At the end of the day, it's their own damn fault that their recommendations are viewed so skeptically.

darksaints 1187 days ago [-]
That's the frustrating thing. The NSA actually used to be good, and generally useful towards securing our digital borders. SELinux, for example, was a contribution to the Linux kernel that is generally highly regarded and well vetted, and it was freely given to us plebs that don't have nuclear arsenals to secure.

But now, everything is different. They've been telling us to use flawed crypto algorithms simply because they know how to break them and they can have access to whatever they want. With our current-day NSA, our least risky choice is to consider the NSA an adversary, just like North Korea, Russia, or China. Nothing they say should be taken at face value.

Pretty shitty that we have to treat our own government security agencies like that.

tobmlt 1187 days ago [-]
Let us not forget the elliptic curve backdoor they gave themselves a few years ago. https://www.wired.com/2013/09/nsa-backdoor/
CyanBird 1188 days ago [-]
> I have to wonder why we would listen to any recommendations from the NSA?

Because the NSA is a defensive as well as offensive institution, they defend the home front as well as they attack adversaries

Daho0n 1187 days ago [-]
> they defend the home front as well

No they clearly do not, not if it makes it any harder for themselves. They include weaknesses and backdoors in products and encryption used by US companies too. That is the direct opposite of "defending the home front".

astrea 1187 days ago [-]
Under what threat model would you be threatened by a domestic spy agency having a backdoor? Ignoring the red herring that "backdoors don't stay hidden".
nitrogen 1187 days ago [-]
That's far from a red herring. Lawful intercept features have been hacked before, and NSA tools have been leaked before.

Apart from that, do you really want your infra to be someone's pawn on a virtual battlefield? Nothing they do is going to be in your direct interest.

Apart from that, there is parallel construction, and rapidly shifting social/political winds can turn the innocuous one day into the verboten the next.

AshWolfy 1188 days ago [-]
My instinct would be to say: verify what they are saying. But it seems like we would just be taking advice from the very people we would be verifying it with in the first place.
AviationAtom 1188 days ago [-]
You do realize that SELinux is their baby, right?
nimbius 1188 days ago [-]
>NSA recommends that an enterprise network’s DNS traffic, encrypted or not, be sent only to the designated enterprise DNS resolver.

It's either a slow day at the NSA, or federal agencies have become so intellectually bankrupted by the cloud that they consider proclamations of the fundamentals of DNS and networking to be some sort of sage wisdom.

0xquad 1188 days ago [-]
They are responding to the very recent emergence of applications (like Firefox) that (optionally) use their own encrypted DNS, thus bypassing the enterprise's ability to apply security policy based on DNS. (Visibility on DNS is also useful to help detect some malware.) I'll allow it.
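
To make the bypass concrete: this is roughly all it takes for an application to resolve names on its own, without ever touching the resolver the enterprise configured. A hedged sketch in Python, assuming Cloudflare's public JSON endpoint is reachable (the binary RFC 8484 wire-format API works similarly):

  import json
  import urllib.request

  # One DoH lookup, done the way an application might: straight to a public
  # resolver over HTTPS, invisible to the OS resolver and to port-53 monitoring.
  url = "https://cloudflare-dns.com/dns-query?name=example.com&type=A"
  req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
  with urllib.request.urlopen(req, timeout=5) as resp:
      answers = json.loads(resp.read()).get("Answer", [])
  for answer in answers:
      print(answer["name"], answer["data"])
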
vaduz 1188 days ago [-]
It's clearly also spurred by the attempts to further obfuscate the use of DoH via Oblivious DoH [0] - though they don't go into much detail on it.

[0] https://news.ycombinator.com/item?id=25344358

Daho0n 1187 days ago [-]
>thus bypassing the enterprise's ability

I think you could change it to read "bypassing the NSA's ability" and find the real reason behind this.

0xquad 1187 days ago [-]
They aren't recommending you don't use DoH. Just that you don't allow individual apps to bypass your enterprise resolver. In fact I use the same strategy at home (with DoT) to enforce ad and tracker blocking. It's just common sense really.

From the document: >[...] NSA recommends that the enterprise DNS resolver supports encrypted DNS, such as DoH, and that only that resolver be used in order to have the best DNS protections and visibility.

belorn 1188 days ago [-]
I read that to mean: do not allow DoH to tunnel all your DNS traffic out to Cloudflare regardless of the promise of encryption. Send it only to the designated enterprise DNS resolver, i.e. the one under the control of the enterprise.

All other DNS resolvers should be disabled and blocked, i.e. all the public DNS resolvers.
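
One way to act on that: most public encrypted resolvers sit behind a short list of well-known names and anycast addresses, so an admin can enumerate and block them at the perimeter. A hedged Python sketch (the hostname list is illustrative, not exhaustive; real deployments also block the static addresses like 1.1.1.1 and 8.8.8.8 directly):

  import socket

  # Resolve a few well-known public DoH/DoT providers and print the addresses
  # an admin might block at the edge. Illustrative only; not a complete list.
  PUBLIC_RESOLVER_HOSTS = ["cloudflare-dns.com", "dns.google", "dns.quad9.net"]

  for host in PUBLIC_RESOLVER_HOSTS:
      try:
          addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
      except socket.gaierror:
          continue
      for addr in addrs:
          print(f"block tcp/443 and tcp/853 to {addr}  # {host}")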

salawat 1188 days ago [-]
In other words "MitM everything".

It gets so tiresome. I used to think the people who called out tyranny everywhere were just nuts, but it never ceases to amaze that everything nowadays keeps going "centralize and control".

VWWHFSfQ 1188 days ago [-]
No you're misunderstanding what the recommendation is. If a company runs their own DoH or even regular DNS or AD resolvers, then the company's client computers (the laptops their employees take home) should not be querying any old resolver hard-coded in their web browser (Firefox, CloudFlare) for internal company domain addresses. That's literally all this is saying. It's good corporate IT policy anyway and it's only being reiterated with DoH.
salawat 1188 days ago [-]
Oh, to be able to buy that anymore. I don't see anyone implementing it that way though. I see it being implemented in a way whereby all DNS activity must go through the corp resolver first, essentially giving a full tracking history of 99% of users.

Yes, it already happens with corporate proxies, and yes, I'm sour about that too...

To be clear, I'm not blaming any NetOps here. It's just... Why is it so tempting? The things you can do with that type of data, it almost seems like you have to be superhuman to keep people away from it. Maybe I'm just too damn enchanted by just delivering packets to enable people... But when you get pushes from agencies suggesting "Hey, add a by-definition-awesome logging and tapping point", it really ruins my day.

And yes, I run a network too. No. I don't give a darn what my users do with it as long as the servers are up and fine, and the global riffraff stay out. I don't know. Just overly grumpy I guess.

VWWHFSfQ 1188 days ago [-]
To be honest, I have no idea what point you're trying to make.

Here it is:

* Companies have internal (intranet) network services

* Companies operate their own DNS (DoH) resolvers

* They also have global (internet) employees

* The devices those employees use have hard-coded DNS (DoH) resolvers (Google, CloudFlare)

* Don't let them use the hard-coded DNS (DoH) resolvers

* Make sure their machine uses the company DNS (DoH) resolver.

I know people think that DNS-over-HTTPS makes everything private and secure, but it doesn't. Google and CloudFlare still see every single DNS query from everyone.
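
For what it's worth, Mozilla does give networks a lever here: Firefox checks a "canary domain", use-application-dns.net, through the OS resolver, and if the local resolver returns NXDOMAIN for it, DoH-by-default stays off (assuming that documented behavior hasn't changed; it doesn't stop a user who explicitly enables DoH). A quick Python check of what your resolver does with it:

  import socket

  CANARY = "use-application-dns.net"  # Mozilla's DoH canary domain

  try:
      ips = {info[4][0] for info in socket.getaddrinfo(CANARY, None)}
      print(f"{CANARY} resolves to {ips}: Firefox may enable DoH by default here")
  except socket.gaierror:
      print(f"{CANARY} does not resolve: this network signals 'keep DoH off by default'")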

deathgrips 1187 days ago [-]
>And yes, I run a network too. No. I don't give a darn what my users do with it as long as the servers are up and fine, and the global riffraff stay out.

You don't care if your users get hacked? Would you mind telling me what company you work for?

wmf 1188 days ago [-]
Sending your DNS queries to a resolver that you control is hardly MITM. In this case "you" is a company.
vaduz 1188 days ago [-]
> Sending your DNS queries to a resolver that you control is hardly MITM.

That's if your users are well-behaved and follow the rules. To stop the users from being badly behaved, NSA recommends blocking connectivity to well-known IP addresses of the public DoH resolvers (e.g. Cloudflare) and TLS inspection to stop connections that try to go to less well known ones, including via ODoH, which means your TLS inspection device must understand the protocol.

To do TLS inspection at that level you need to MitM all HTTPS traffic going everywhere, as you need to read all HTTPS traffic to any possible host, as any of them may be a DoH resolver or relay. Q.E.D.

nobody9999 1187 days ago [-]
>To do TLS inspection at that level you need to MitM all HTTPS traffic going everywhere, as you need to read all HTTPS traffic to any possible host, as any of them may be a DoH resolver or relay. Q.E.D.

Yep. And corporate/enterprise environments should (not nearly enough do) do exactly that with devices owned and managed by the enterprise.

Any personal devices or those owned by contractors, clients and other external actors should not be allowed access to internal corporate networks. This is neither a particularly new nor a controversial idea. Most large (or even medium-sized) organizations have separate "guest" networks for external resources which aren't secured or monitored.

However, internal networks are (and should be) a very different story.

vaduz 1187 days ago [-]
> Yep. And corporate/enterprise environments should (not nearly enough do) do exactly that with devices owned and managed by the enterprise.

Sure! But the fact that they should (and many have been) already be doing just that does not change the fact that the technique interposes a third party listening in on a supposedly two-party encrypted exchange. It's allowed, but it is still MitM.

> Any personal devices or those owned by contractors, clients and other external actors should not be allowed access to internal corporate networks. This is neither a particularly new nor a controversial idea.

Whilst I agree, the recent trend to push for more BYOD, where the device is owned by the contractor, employee or an external actor but still allowed access and controlled by the enterprise, does tend to blur the lines quite a bit, especially as most tooling has been lacking decent isolation between "enterprise" and "private" on the same device. MDM tooling tends to want to administer the whole device and apply the stricter "enterprise" policies, with a pinky promise that private life is going to be respected.

nobody9999 1187 days ago [-]
>Whilst I agree, the recent trend to push for more BYOD, where the device is owned by the contractor, employee or an external actor but still allowed access and controlled by the enterprise, does tend to blur the lines quite a bit, especially as most tooling has been lacking decent isolation between "enterprise" and "private" on the same device. MDM tooling tends to want to administer the whole device and apply the stricter "enterprise" policies, with a pinky promise that private life is going to be respected.

Which is why employees should either be given employer-owned devices or use device subsidies (as many companies provide) to pay for a device that's only used for work purposes.

That companies attempt to hijack (and I mean that in both the metaphorical and literal senses) personal devices for corporate purposes is, aside from the obvious issues, also terrible security policy.

That businesses do this is exploitative, unethical and insecure. I suspect such businesses don't really care about the first two, but should care about the third.

As an infosec/infrastructure guy, I'd raise hell over such a policy -- because leaving aside the scumbaggery (I think I just coined a new word. Good for me!), having personal devices connected to internal corporate resources (even with corporate MDM configurations) is literally begging to be compromised, for (hopefully) obvious reasons.

nobody9999 1187 days ago [-]
>Sure! But the fact that they should (and many have been) already be doing just that does not change the fact that the technique interposes a third party listening in on a supposedly two-party encrypted exchange. It's allowed, but it is still MitM.

Replying again, as I should have addressed this as well.

I disagree. When using corporate resources, the organization is not only well within its rights to monitor (or at least log) all communications; given the potential for malware, data exfiltration and (to a much lesser extent) employee misconduct, it would be remiss not to do so.

Which is why it's extra important not to allow, or (as I addressed in my other comment) require, those working onsite to use personal resources on internal networks.

vaduz 1187 days ago [-]
You are missing the point here. There is no dispute here that the corporation should be able to monitor communications going in or out of their systems (though some limits on how and for what purpose that monitoring can be done do exist, especially in the EU - it's not unlimited), but that is not what calling the technique what it is is about.

Use of MitM by the corporation as part of Data Loss Prevention interferes with any hardening you or your vendors might be doing against a MitM attack attempted by anyone else - it breaks if, for instance, the application vendor your enterprise has decided to use (let us call them "Example plc") has pinned their own CA certificate within the application as the only one that is supposed to sign certificates on the Example domains - say, for "content.example.com" - following the example that e.g. Google set. Or, worse yet for this example, a specific certificate to be used instead of the specific trust anchor. I've seen both examples in the wild, so it is not an idle discussion.

Not only do you need to override that pinning with your own CA in the application for the content to be inspected; to retain the same level of hardening you'd also need to implement the same checks the application did in your DLP system, so that it verifies the far end is legit - and that costs money and time and remains fragile over time, so many enterprises simply do not bother, falling back to the well-known list of public CAs instead (that includes my $CurrentCorpo, much to my annoyance). It weakens the whole system, which is already fragile enough thanks to actors like Symantec, WoSign and StartCom - and possibly others.
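
For illustration, the pin check an application performs is roughly this: hash the SubjectPublicKeyInfo of the certificate it actually received and compare it to a value baked into the app, so a proxy re-signing traffic with the enterprise CA presents a different key and fails the check. A hedged Python sketch, assuming the pyca/cryptography package; the hostname comes from the hypothetical above and the pin value is a placeholder:

  import base64
  import hashlib
  import ssl

  from cryptography import x509
  from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

  EXPECTED_PIN = "base64-sha256-of-the-real-spki"  # hypothetical pinned value

  # Fetch the leaf certificate actually presented on the wire and hash its SPKI.
  pem = ssl.get_server_certificate(("content.example.com", 443))
  cert = x509.load_pem_x509_certificate(pem.encode())
  spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
  pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

  if pin != EXPECTED_PIN:
      raise SystemExit("pin mismatch: possible interception (or just a rotated key)")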

deathgrips 1187 days ago [-]
You seem to be implying that a company should not know what kind of web requests are coming from its computers.
vaduz 1187 days ago [-]
> You seem to be implying that a company should not know what kind of web requests are coming from its computers.

Please point out where I made that claim.

All I am saying is that the way "knowing what kind of web requests" - and DNS requests in this case - is achieved is by becoming a third party in a supposedly two-party encrypted communication. The company certainly has the authority to do so (check your local laws, though, as there are some exceptions) - but it is MitM in function and practice, if not in name. "TLS inspection" and "data loss prevention" are simply common euphemisms for the technique.

It's also not new, MitM proxies and for that matter endpoint introspection (e.g. keyloggers at the user machine) have been in use for decades in the enterprise, and have been making their way into BYOD private machines as well via various MDM tooling.

Using your company DNS server as the grandparent has mentioned is not MitM. Inspecting all traffic by all devices in your company to try to enforce the use of said DNS server requires MitM, though.

deathgrips 1187 days ago [-]
You keep saying MITM, which is an attack type, as if any kind of traffic inspection is bad. There is no third party in this sort of proxy: the company is communicating with the internet. The fact that a company proxy is inspecting traffic from a company computer does not make the company proxy a third party because both resources belong to the company and should be used for legitimate purposes. Is a reverse proxy doing a MITM attack on a web server if it offloads encryption and authentication for it? No, because both resources are owned by the same party.

TLS inspection and DLP are not euphemisms, they're valid names for a security practice. They're not even the same thing--you couldn't replace both mentions with "MITM" and expect someone else to know what you're talking about.

userbinator 1188 days ago [-]
...and in the case of people using DNS-based adblocking and such, "you" is... yourself.
StreamBright 1188 days ago [-]
What he means is that the NSA and related agencies have the ability to MITM HTTPS while other actors do not, probably.
grayfaced 1188 days ago [-]
In light of the SolarWinds hack, I think it's fair to say that DoH is a real threat. Putting that volume of machines under the control of a single "trusted" network provider is a very bad idea.
nobody9999 1187 days ago [-]
> In other words "MitM everything".

>It gets so tiresome. I used to think the people who called out tyranny everywhere were just nuts, but it never ceases to amaze that everything nowadays keeps going "centralize and control".

The recommendations are for enterprise networks, though they're also reasonable (if not really accessible to the non-technical) for individuals who care about their privacy.

An enterprise network isn't (or shouldn't be) some sort of individual free-for-all. In fact, good security practice recommends (although this isn't universally implemented) that all perimeter network traffic, regardless of type, be proxied (or MitM'd, as you put it) to protect from both intrusions and exfiltration of data.

Are you claiming that Enterprise networks should allow external resolvers to be used on internal resources willy-nilly?

In fact, good security practice demands that devices that aren't authenticated (e.g., with 802.1x) shouldn't be granted access to internal resources at all. On the flip side, internal devices shouldn't rely on external infrastructure resources either.

This isn't censorship or some sort of fascistic control mechanism. Rather, it's an appropriate organizational response to extant and potential threats to their IT infrastructure and data.

freeone3000 1188 days ago [-]
Somebody has to recommend the basic stuff, otherwise it's just rumor. This is a topical addition to a list of DNS recommendations.
TedDoesntTalk 1188 days ago [-]
Now you can point to that and say “We use the best DNS practices as recommended by the NSA.”
rasengan 1188 days ago [-]
Specifically, they are saying that in a home/personal environment it makes sense to use DoH with a public resolver like Cloudflare, but in an enterprise you will not be able to maintain tight control over internal use as browsers roll out DoH by default, unless you block those public resolvers and enforce a policy to use the enterprise resolver. It doesn't matter unless you care about these tight controls, though, even in an enterprise.
StreamBright 1188 days ago [-]
It is going to be super hard to block DoH since it is indistinguishable from normal HTTPS traffic. If you MITM the HTTPS connection the browser can detect that and refuse to use the connection; many companies are in this situation where MITMing HTTPS is not working very well. I think Google enforces HSTS[1]. Not sure how that works with DoH. I also think that we need browsers that do not have any other means of DNS resolution than the good old operating-system-wide /etc/resolv.conf (or similar). I am not going to fight with Google over whether I have the right to run my own DNS server or not. They are taking the open internet inch by inch. This is the last straw.

1. https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

XorNot 1188 days ago [-]
At an enterprise level the browser configuration is controlled by the IT department. Your MITM CA certificate is going to be forced into the trusted list everywhere.
kortilla 1188 days ago [-]
Will that work with websites that pin cert trust anchors?
vaduz 1188 days ago [-]
HPKP is dead for all intents and purposes as far as browsers go. What pinning? The CA certificate store that the browser is using is something any enterprise that is interested in control is already extending by adding their own CA cert - and it has been that way for a very long time.

This approach does break some applications that pin a specific certificate instead of relying on the "any valid CA" model (e.g. Signal desktop), but that is seen as a feature, not a bug, when it comes to the enterprise.

phkahler 1188 days ago [-]
>> Your MITM CA certificate is going to be forced into the trusted list everywhere.

Not on my phone.

pstrateman 1188 days ago [-]
Then your phone doesn't go on the lan.
phkahler 1188 days ago [-]
OK.
rasengan 1188 days ago [-]
That's a good point -- I wasn't really thinking about the practicability since it's not something I had much interest in. That said, I guess either blackholing the DNS so it can't initially be resolved, or even figuring out any and all IP addresses it resolves to and blackholing those IPs would be a start.

I agree, however, that it should be possible to run your own _______. That's what the internet was meant to be.

toast0 1188 days ago [-]
Well known public DoH resolvers are going to have well known IPs and will be easy to block.

eSNI and the encrypted TLS handshake proposal that was floated recently rely on fetching keys via DNS, so that's not applicable for DoH, and the handshake for a DoH client will probably be easy to distinguish from an HTTPS client.

It will be easy to block, if you want to.

StreamBright 1188 days ago [-]
What if well-known DoH resolvers are going to be in the same range as the web traffic? If you could distinguish between DoH and HTTPS easily with iptables or pf, sure. I thought it was not that easy.
toast0 1188 days ago [-]
That really depends on how much Cloudflare and Google are willing to risk.

If enough people block the published DoH resolver IPs, they'll see reduced availability on any hostname that is hosted on the same IPs; but if Cloudflare and Google put important content on the same IPs, it makes it harder to block.

pstrateman 1188 days ago [-]
Enterprise networks should generally be forcing all HTTPS traffic through proxies.

This requires the proxy's CA certificate to be installed locally on clients.

sneak 1188 days ago [-]
This works great when you have all of your computers in an office, like in 1999.
pstrateman 1188 days ago [-]
This still works if you use a VPN correctly.
jorblumesea 1188 days ago [-]
Security recommendations aren't supposed to be brilliant insights. They're just best practices.
737min 1188 days ago [-]
Exactly right. Many security problems are due to forgoing the simple things in favor of the exotic.
blondin 1188 days ago [-]
Unless I am reading this wrong, they are not saying don't send your requests to Cloudflare, Apple, etc. I am not entirely privy to all this, but aren't those enterprise-grade DNS resolvers?
Spooky23 1188 days ago [-]
They are high quality services, not controlled by the company.

It’s pretty obvious that allowing unloggable DNS traffic in an enterprise is a bad idea. It makes exfiltration trivial.

salawat 1188 days ago [-]
Exfiltration is already trivial, DNS or not. Just admit that you want to be able to eavesdrop on all activity whatsoever.

What, you think that anyone looking to get something out undetected isn't using raw IPs?

Spooky23 1188 days ago [-]
On my network? Absolutely, ability to inspect packets is absolutely essential. On a public network? Different story.

I’ve personally been engaged in incident response, and in many scenarios DNS is a control mechanism for malware, or malware uses it for various purposes. It’s often a key piece of evidence for reconstruction of an incident.

Raw IPs can be used as well, but that doesn’t negate my point.

0xquad 1188 days ago [-]
>Raw IPs can be used as well, but that doesn’t negate my point.

And in fact if you have enterprise-wide visibility on DNS requests, you have the opportunity to detect the use of an IP that was not returned in a request. Making it immediately suspect.
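
A toy version of that correlation, as a hedged Python sketch with in-memory stand-ins for the resolver logs and flow logs a real deployment would use:

  # Flag outbound connections to addresses that never appeared in any DNS answer.
  dns_answers = {
      "intranet.corp.example": {"10.1.2.3"},
      "www.example.com": {"93.184.216.34"},
  }
  observed_connections = [
      ("10.1.2.3", 443),
      ("93.184.216.34", 443),
      ("198.51.100.77", 443),  # never returned by the resolver
  ]

  resolved_ips = set().union(*dns_answers.values())
  for ip, port in observed_connections:
      if ip not in resolved_ips:
          print(f"suspect: connection to {ip}:{port} with no matching DNS answer")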

aftbit 1188 days ago [-]
I wonder when smart TV manufacturers will begin using DNS-over-HTTPS to make it harder for PiHole et al to block their ads.
morpheuskafka 1188 days ago [-]
I wonder why they haven't done this a long time ago. They don't even need a full DoH endpoint; just an auto-updating hosts file downloaded from a non-blockable domain (one used for other TV features) would do it.
collsni 1188 days ago [-]
They already hard-code DNS to do this; you need to do an address translation to fix it.
tptacek 1188 days ago [-]
There's no reason any of these products need to use on-prem DNS of any sort, except maybe for the DNS lookup to the central server that they require to operate at all. I know a lot of people base DoH concerns on the idea that it allows their set-top box to evade their local DNS policy, but that's not a coherent argument; these boxes can tunnel all their traffic out, if they want to (you can block that, but it's all-or-none, which is the thing the DNS boffins claim they can work around).
jccooper 1188 days ago [-]
I kinda doubt they're concerned about the tiny percentage of users who would do such a thing.
edgan 1188 days ago [-]
I already use an Nvidia Shield instead of my smart TV, even though both are Android TV. It is such a better experience. If any device started taking over its DNS in a way I couldn't override, and I had reason to care, I would stop using it. PiHole is already a meh solution.

My two primary apps on my Shield are SmartTubeTV and Kodi. I won't pay for YouTube when they force bundle it with other services I don't want. The alternative of ads has gotten to ridiculous levels, and then the ads in the video from them on top. SponsorBlock is another game changer. Sadly it isn't in an AndroidTV app yet.

On my phone it is Vanced all the way for YouTube, and it does have SponsorBlock.

novok 1188 days ago [-]
And that's when you start adopting reverse DNS + MAC address filtering.
Randor 1188 days ago [-]
I don't understand why people can't see the dangers of moving everything to DoH. For example, if you have a 3000-user network and 2900 of them are using a local resolver, you have almost no chance of finding those 100 nodes doing DoH without MITMing everything over 443.

Someone will probably respond with something like: "Just block the IP address ranges of public DoH resolvers" and that would work for the resolvers we know about.

userbinator 1188 days ago [-]
> I don't understand why people can't see the dangers of moving everything to DoH

Because "more security" is hard to argue against. The huge corporations who ultimately want to take control of the population have realised that, and are using that excuse to get in bit by bit.

nobody9999 1187 days ago [-]
>Because "more security" is hard to argue against. The huge corporations who ultimately want to take control of the population have realised that, and are using that excuse to get in bit by bit.

But in reality, DoH doesn't really provide "more security."

All it does is obfuscate DNS queries. If you're concerned about ISP tracking, DoH doesn't really help with that at all since the ISP can see where you're going just by looking at packet headers anyway.

And the Googles and Facebooks of the world love DoH because it bypasses PiHole style ad/tracking blockers.

The appropriate solution is to use PiHole (or PiHole style blocklists) in concert with a local recursive resolver (or an external resolver that supports DNS-Crypt), not to obfuscate your DNS requests, allowing all the ads/tracking/spying connections to proliferate.

It's not a perfect solution, but it's a much better solution than needing to implement one or more ad/tracker blocking solutions on every single device on your network.
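
As a rough sketch of the blocklist half of that setup (the recursive-resolver half is what Pi-hole, dnsmasq or Unbound actually do): pull one public hosts-format list and check names against it before forwarding. Python; the list URL is just a commonly used example, not an endorsement, and assumes outbound HTTPS is available:

  import urllib.request

  BLOCKLIST_URL = "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts"

  with urllib.request.urlopen(BLOCKLIST_URL, timeout=10) as resp:
      lines = resp.read().decode("utf-8", "replace").splitlines()

  # hosts-format lines look like "0.0.0.0 ads.example.net"; keep the domain part.
  blocked = set()
  for line in lines:
      parts = line.split()
      if len(parts) >= 2 and parts[0] in ("0.0.0.0", "127.0.0.1"):
          blocked.add(parts[1].lower())

  for name in ("example.com", "doubleclick.net"):
      verdict = "blocked" if name in blocked else "forward to local recursive resolver"
      print(f"{name}: {verdict}")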

creata 1188 days ago [-]
How will DoH help "huge corporations... take control of the population"?
userbinator 1188 days ago [-]
First they take the DNS queries. Then they start routing the rest of the traffic through their servers, while advertising how it's all "for your privacy and security", of course.

There's another article on the front page about how these companies already wield immense power: https://news.ycombinator.com/item?id=25802366

To be clear, I'm not against the principles behind DoH, and think traffic going from the local network into the Internet benefits from encryption; I'm against how it's being implemented at the application-level and its subversive nature.

creata 1187 days ago [-]
That's fair enough, but in the short term, Cloudflare is more trustworthy (and tolerant of free speech!) than my ISP and government. Is there an initiative in which I have to trust none of these parties?
heavyset_go 1187 days ago [-]
DNS-based ad blocking worked pretty well, and DoH breaks that.
MacsHeadroom 1187 days ago [-]
You can reroute DoH to your own resolver. If you have a trusted wildcard certificate on the device you want to reroute DoH for, this will work 100% of the time. If you don't have a trusted wildcard cert on the device in question, it usually will either not care or will fall back to unencrypted DNS.
seanieb 1188 days ago [-]
If you’ve got 3000 nodes on your network without inventory, logging and configuration control on each of those devices you’ve already lost. You don’t have a secure network, at best you’ve got a guest network at a cafe.
Randor 1188 days ago [-]
An estimated 18,000 companies were affected by the SolarWinds incident. Many of those companies had excellent inventory, logging and configuration control. You simply cannot detect DNS over HTTPS in the network without performing MITM.
seanieb 1187 days ago [-]
Really, you can’t detect that if you’re on the machine? ... And if you’re not on the machine, or it's acting differently from what the logs show, isolate it until you can investigate.
Randor 1187 days ago [-]
Actually, no you could not detect that even from the machine performing the DoH. You could probably detect it if you attached a debugger and set a breakpoint on the resolve functions being used. May I ask what you do for a living?

Why even comment on things that you don't fully understand?

seanieb 1184 days ago [-]
I'm a security engineer. It's pretty much my thing. I'm talking about logging it on the machine that's not owned... and yes, you too can do it... I'm doing it right now. Even on my Raspberry Pi.

The detection part I think you're misunderstanding: you need to compare what the machine is logging with what it's actually doing, by looking at network traffic, etc. Looking for parallax, differences between the two.

Randor 1184 days ago [-]
I am not misunderstanding anything. Let's terminate this conversation, I can see that it will not get anywhere.

It's amusing that you actually believe that you can 'check the logs' to detect all DoH being performed on the machine. Would you be willing to disclose your employer? "I can check the logs" sounds like something a naive systems administrator would say.

I'm glad that 'security' is your thing. The best thing about the internet is that you never know who you are talking to... Even when you meet people that wrote the parts of the operating system you're currently using.

seanieb 1183 days ago [-]
I never said you could log all DoH. You’re not following what I’ve said. If you’re relying on DNS for your security posture in any way right now, you’re in a really bad place. Having those non-malicious DNS requests in the clear is a safety blanket at best. Check the default DoH resolver and the system's DoH logs. Then look at network traffic and then for gaps. Programs that use their own resolver and just mix it with their own TLS traffic can be observed; even without knowing the DNS record, the IP is enough.

Also feel free to Google me, creepy as it is, I’ve no idea why my specific employer would help this discussion in anyway.

PS: the victims of SolarWinds had DNS and it didn’t help them. Expecting the attacker to use a known IOC or contact an obvious C&C domain is where the industry is at. My opinion is DoH will actually force blue teams to build systems that are effective. My chosen model is parallax: known behavior, known states that can be checked.

collsni 1188 days ago [-]
They are just saying that within enterprises it is in the enterprise's best interest to control all aspects of DNS so all traffic can be monitored. Which you seriously need to do if you aren't already.

DNS needs to be monitored holistically; it is a great place to catch IOCs.

deathgrips 1187 days ago [-]
I think I found the one Hacker News user in this thread that actually works in security ops.
ENOTTY 1187 days ago [-]
There's good money to be made selling a solution that detects and blocks rogue DoH requests.

Anyways, it's possible: https://dl.acm.org/doi/abs/10.1145/3407023.3409192

egberts1 1187 days ago [-]
^^^THIS^^^

Especially given that malicious JavaScript can now make DoH requests toward their distributed command-and-control centers.

nuker 1187 days ago [-]
Why did they advise DoH and not DoT? DoT is simpler, with no HTTP cookie ambiguity. The "easier to block" counterargument does not really apply to businesses...
vaduz 1187 days ago [-]
For non-malicious apps, generally speaking DoT is something you need to specifically enable, whereas certain major applications are working towards using DoH by default (or they already do [0]). DoH is also mixed with regular HTTPS traffic, so it is much harder to detect and act upon - so businesses need to spend more effort to counter it. As a bonus, most of the mitigations in use here also apply to DoT, so you are also getting it in the bargain...

[0] https://blog.mozilla.org/blog/2020/02/25/firefox-continues-p...
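
To make the contrast concrete: DoT rides on its own dedicated port (853), which is exactly what makes it easy to filter, while DoH hides inside ordinary port-443 traffic. A hedged, hand-rolled single DoT query in Python against Cloudflare's public resolver, assuming outbound 853 is open:

  import socket
  import ssl
  import struct

  def build_query(name, qtype=1):  # qtype 1 = A record
      header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, RD flag, one question
      qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
      return header + qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN

  msg = build_query("example.com")
  ctx = ssl.create_default_context()
  with socket.create_connection(("1.1.1.1", 853), timeout=5) as raw:
      with ctx.wrap_socket(raw, server_hostname="one.one.one.one") as tls:
          tls.sendall(struct.pack("!H", len(msg)) + msg)  # TCP framing: 2-byte length prefix
          length = struct.unpack("!H", tls.recv(2))[0]
          print(f"received a {length}-byte DNS response over port 853")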

1vuio0pswjnm7 1187 days ago [-]
There has certainly been evidence of censorship amongst the thousands of third-party open resolvers. Are there any examples of known "malicious" third-party DoH or DoT resolvers? Has anyone been studying this?
jcpham2 1188 days ago [-]
Nice try NSA
permille42 1188 days ago [-]
Could someone create a replacement for DNS entirely please?

DNS does WAY more than what the typical user needs it for, and services that provide it are consequently much more complex than what is needed for the 99% use case.

The 99% use case: resolve x.y.z to some IP address.

What I think should happen:

1. At each level, a public/private keypair is used to authenticate valid records for the name. E.g.: .com has public/private keypair(s) to represent who can sign x.com records. The .com owner only needs to publish the public keys. Reliable sources (ISPs etc.) can then share these.

2. The x.com records themselves would be: a mapping from x.com to IP address(es) / public key.

3. The x.com owners could then publish out their x.y.com records freely and they could be mirrored by everyone.

Unlike the current methodology, there would be far less need to trust where you get the records from. The public/private keypairs should change WAY less frequently.

Admittedly, in such a widely distributed system you wouldn't have nice TTLs, but that is for the better. DNS records should not be changing that frequently.

Such a new system also should be done in a fully distributed way and NOT controlled by a bunch of money grubbing bastards who make way too much money from records.

It should NOT cost $20/yr to own a record pointing x.y to a number. It's absurd and really needs to stop.
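
A minimal sketch of the keypair-per-level idea above, not an existing protocol: the ".com" key signs a record binding "x.com" to an address and to x.com's own public key, so anyone holding the well-known ".com" public key can verify a copy fetched from any mirror. Python, assuming the pyca/cryptography package:

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

  com_key = Ed25519PrivateKey.generate()    # held by the ".com" operator
  x_com_key = Ed25519PrivateKey.generate()  # held by the owner of x.com

  x_pub = x_com_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
  record = b"x.com|203.0.113.7|" + x_pub    # name, address, delegated public key
  signature = com_key.sign(record)

  # A resolver that fetched the record and signature from any mirror verifies them
  # against the published ".com" public key; any tampering raises InvalidSignature.
  try:
      com_key.public_key().verify(signature, record)
      print("record for x.com verified against the .com key")
  except InvalidSignature:
      print("record rejected")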

userbinator 1187 days ago [-]
Did you just reinvent DNSSEC...?
permille42 1187 days ago [-]
No. My point isn't to use security on top of existing DNS records. My point is to make a brand new distributed system entirely that is free for all instead of run by a bunch of greedy internet thugs.
permille42 1187 days ago [-]
My bad for having an idea on how to modernize. Assholes.
asdfthrowawayyy 1188 days ago [-]
Bunch of garbage, NSA and FBI hate ESNI.

Firefox silently pulled all production ESNI code as of v83 without a word of warning to anyone. As in, the Firefox development team simply killed encrypted SNI and told nobody who may have been using ESNI in despotic regimes, in exchange for future ECH support, which is not implemented anywhere yet.

Nor will ECH be endpoint supported any time soon.
