Why Confidential Computing Is a Game Changer (darkreading.com)
theamk 1345 days ago [-]
A somewhat more technical explanation: https://cloud.google.com/blog/products/identity-security/int...

This uses a feature in AMD processors which protects one VM from another by encrypting memory with a VM-specific key. I think the idea is that even if the hypervisor is compromised, there is no way to access the running data of the machine.

From a practical standpoint, you still have to trust Google's infrastructure. Here is a key quote:

> all GCP workloads you run in VMs today can run as a Confidential VM. One checkbox—it’s that simple.

The video confirms it -- you click the checkbox while creating the VM, it starts as usual, you ssh into it as usual, and the only sign that you are protected is a line in the dmesg output.
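(For reference, a quick way to look for that line from inside a guest; the exact kernel message wording varies between kernel versions, so this just greps loosely for memory-encryption related lines:)

    import subprocess

    # Scan the kernel log for SEV / memory-encryption related lines.
    out = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    hits = [line for line in out.splitlines()
            if "SEV" in line or "Memory Encryption" in line]
    print("\n".join(hits) if hits else "no memory-encryption lines found")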

So there is nothing "game changing" about this -- your threat model is mostly the same, but your attack surface is slightly reduced. The biggest threats (misconfigurations and network attacks on vulnerable software) are still there, and are not changed in any way at all. And you have to keep absolutely trusting your cloud provider, too.

It looks like the main point of this whole project is to satisfy government regulators, big bosses, security consultants and to check boxes on security evaluation worksheets.

lxgr 1345 days ago [-]
> So there is nothing "game changing" about this -- your threat model is mostly the same, but your attack surface is slightly reduced.

If done correctly (using attestation, as mentioned here already), this can reduce the attack surface significantly.

Right now, you need to trust your cloud provider both to not introduce backdoors for themselves or some government _and_ to keep doing so until the end of your business relationship with them.

Ideally, with trusted/confidential computing, you only need to trust the vendor to initially do as they say and not outright lie to you (e.g. by making the checkbox a no-op). In many ways, this would protect a cloud provider from themselves.

Of course, given current implementation non-successes like Intel's SGX, one could argue that this is merely kicking the can of trust down the road to the hardware vendor, but as far as I understand it, this is not an inherent flaw of the idea of trusted computing but rather of a specific implementation.

Reelin 1345 days ago [-]
> you only need to trust the vendor to initially do as they say and not outright lie to you (e.g. by making the checkbox a no-op)

The cloud provider can't lie to you (assuming you know how to check, anyway). There are instructions available to your code to have the CPU perform cryptographic attestation regarding its current state. These instructions can't be emulated because they involve producing a cryptographic signature using a private key embedded in the hardware (which chains back to a root of trust for the hardware vendor).

Basically, you can ask the CPU running your code "are you in confidential mode?" and it will respond in the affirmative with a cryptographic verification that chains back to the hardware vendor. You do this before loading the encryption keys for your super sekrit data store over the network.
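A minimal sketch of that ordering from inside the VM -- obtain a signed report first, only then fetch the key. get_signed_report and the key-server URL are hypothetical placeholders; the real guest interface is platform-specific and not shown here:

    import hashlib
    import os
    import urllib.request

    def get_signed_report(nonce: bytes) -> bytes:
        """Hypothetical: ask the platform for an attestation report that covers
        the launch measurement and echoes back our nonce."""
        raise NotImplementedError("platform-specific guest interface")

    def fetch_data_key(key_server_url: str, report: bytes) -> bytes:
        """POST the report to our own key server; it verifies the report against
        the hardware vendor's certificate chain and only then returns the key."""
        req = urllib.request.Request(key_server_url, data=report,
                                     headers={"Content-Type": "application/octet-stream"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    nonce = os.urandom(32)                    # freshness: prevents replaying an old report
    report = get_signed_report(nonce)         # signed by a key fused into the CPU
    key = fetch_data_key("https://keys.example.internal/release", report)
    print("got data key, digest:", hashlib.sha256(key).hexdigest())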

lxgr 1345 days ago [-]
Ah, I wasn't aware that AMD's implementation already supported attestation in addition to just memory encryption.
Reelin 1345 days ago [-]
I feel the need to disclaim that I have no actual experience using SEV. Also it looks like the attestation protocol may have been broken by attacking the PSP firmware? I have no idea what the current state of affairs is, particularly regarding the claimed firmware downgrade vulnerability.

https://arxiv.org/abs/1908.11680

https://berlin-crypto.github.io/event/amdsev.html

theamk 1345 days ago [-]
If one is using attestation and independent verification, then I agree.

But this is not what the article says. It specifically mentions "Confidential VMs" and talks about "giving customers a simple, easy-to-use option". Looking up the actual product, they strongly recommend using Google-provided images.

So this particular announcement does not change much. You still need to trust the cloud provider to not introduce backdoors.

And I bet that all the pre-made images will have "https://packages.cloud.google.com/apt" in the sources.list -- so if Google wants to snoop on you, all they need to do is ship a backdoored package. And if that does not work, they may send you a regular email saying "the physical host is failing, please reboot to migrate" -- and when you reboot, it will not be in the protected mode.

So yes, the general idea of "Confidential Computing" is sound, but reading the post carefully shows the current system is just for making non-technical people excited.

m3kw9 1345 days ago [-]
It’s “game changing” to attract clicks.
simonebrunozzi 1345 days ago [-]
I wonder how this might affect startups that are trying to sell security in that specific area - e.g. Hysolate [0] (no affiliation).

[0]: https://www.hysolate.com/

anonymousDan 1345 days ago [-]
No you don't. Look up remote attestation.
motohagiography 1345 days ago [-]
This is important because tech is pervasive enough that it's just not reasonable to trust other people to manage cleartext data on our behalf anymore. You wouldn't believe the fights I've been in over encrypting databases, where the resistance was because it would cut out the unofficial privileged access and impunity the DBAs and their managers had to the business data. The ability to look up someone's personal information in a data lake of millions of people is socially elevating, and there are platform companies where snooping isn't a bug, it's a perk of the job.

Part of the reality of living in an increasingly lower-trust society is that we need new tech to limit the power of strangers who manage our data. While the game-changing aspect of this isn't instantaneous, if you aren't using a confidentiality system in 5 years you will likely have to justify why not, and ideally within 10 there will be penalties for exploiting it.

Metus 1345 days ago [-]
Could you use something like confidential computing to attest over http(s) what software is actually running on the server? This would offer a very interesting trust model together with reproducible builds, where you could have the CPU attest over http(s) that it is indeed the code base published on Github/Gitlab that is actually running on the server and receiving your data.
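One common construction for this (hedged sketch): the code inside the VM requests an attestation report whose user-data field contains a hash of the TLS certificate it serves, so a client that verifies the report has also pinned the https channel to the attested, reproducibly built binary. The report format, the field names, and verify_report below are illustrative assumptions, not a real API:

    import hashlib
    import socket
    import ssl

    def verify_report(report: bytes) -> dict:
        """Hypothetical: verify the vendor signature chain and return the report
        fields, including 'measurement' and 'report_data' (caller-chosen bytes)."""
        raise NotImplementedError

    def check_server(host: str, report: bytes, expected_measurement: bytes) -> bool:
        fields = verify_report(report)
        if fields["measurement"] != expected_measurement:    # digest of the reproducible build
            return False
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert(binary_form=True)      # DER-encoded server certificate
        # The attested code committed to this certificate when it requested the
        # report, so a match ties the TLS channel to the measured code.
        return fields["report_data"] == hashlib.sha256(cert).digest()
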
theamk 1345 days ago [-]
You'd have to attest a lot of things to get this to work. Who else has access to the database? Are there backups -- and how are they protected? Are you sure the server's private SSL key is not shared with other, non-attested servers? Are there any unsafe CDNs in use?

You can kinda make it work with things like Protonmail which have heavy client-side encryption -- but this approach severely limits available features (for example, in Protonmail, you cannot search in message text).

dastbe 1345 days ago [-]
sadly, confidential computing isn’t the answer to your desire.

what they are iterating on here is the trust boundaries between your company and your infrastructure provider as well as your peers sharing the hardware.

gcommer 1345 days ago [-]
tl;dr of confidential computing:

In normal cloud computing you are effectively trusting the cloud provider not to look at or modify your code and data. Confidential computing uses built in CPU features to prevent anyone from seeing what is going on in (a few cores of) the CPU (and in EPYC's case, encrypt all RAM accesses). Very roughly: These CPU mechanisms include the ability to provide a digital signature of the current state of the CPU and memory, signed by private keys baked into the CPU by the manufacturer. The CPU only emits this signature when in the special "secure mode", so if you receive the signature and validate it you know the exact state of the machine being run by the CPU in secure mode. You can, for example: start a minimal bootloader, remotely validate it is running securely, and only then send it a key over the network to decrypt your proprietary code.
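The key-release decision on the validating side might look roughly like this. The report format and the vendor certificate chain are platform-specific, so parse_report and verify_vendor_signature are hypothetical placeholders; only the decision logic is the point:

    import hmac

    # Placeholder: the launch digest of the bootloader image you built yourself.
    EXPECTED_MEASUREMENT = bytes.fromhex("00" * 48)

    def parse_report(blob: bytes):
        """Hypothetical: decode the vendor-defined report into
        (measurement, nonce, signature, signed_payload)."""
        raise NotImplementedError

    def verify_vendor_signature(signed_payload: bytes, signature: bytes) -> bool:
        """Hypothetical: check the signature against the CPU vendor's published
        certificate chain (the root of trust baked into the hardware)."""
        raise NotImplementedError

    def maybe_release_key(report_blob: bytes, expected_nonce: bytes, data_key: bytes):
        measurement, nonce, signature, payload = parse_report(report_blob)
        if not verify_vendor_signature(payload, signature):
            return None                                   # not a genuine report
        if not hmac.compare_digest(nonce, expected_nonce):
            return None                                   # stale or replayed report
        if not hmac.compare_digest(measurement, EXPECTED_MEASUREMENT):
            return None                                   # not the bootloader we built
        return data_key                                   # only now send the key to the VM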

Effectively, it increases your trust in the cloud from P(cloud provider is screwing me over) to P((cloud provider AND CPU manufacturer are both working together to screw me over) ∪ (cloud provider has found and is exploiting a vulnerability in the CPU)).

Disclaimer: I work for Google but nowhere remotely related to this (I know only publicly available information about this product); I happened to do very similar research work 6 years ago in grad school.

theamk 1345 days ago [-]
.. except in this product, the software to "remotely validate it is running securely" is provided by the same party that is running the cloud.

So it increases your trust in the cloud from P(cloud provider is screwing me over) to P((cloud provider is screwing me over) ∪ (only the "cloud ops" department in my cloud provider wants to screw me over, and they cannot get help from anyone else in the cloud provider))

Not a very big change if you ask me.

stefan_ 1345 days ago [-]
That's a nice theory. In reality, VMs have no innate source of randomness and call into their hypervisor for that sweet sweet entropy - just as they ask hypervisors to map hardware into their address space, which drivers then proceed to innately trust.

This improves the situation by an infinitesimally small amount.

kevincox 1345 days ago [-]
Is this true? I thought modern server CPUs had access to true randomness that didn't need hypervisor mediation. Or does the hypervisor have the ability to trap RDRAND?
rightbyte 1345 days ago [-]
Sounds cheaper to run job batches on a local server than to run them encrypted on a remote mainframe, if you want that level of security.

I mean what is the switching overhead of signing the VM memory.

Reelin 1345 days ago [-]
> I mean what is the switching overhead of signing the VM memory.

Not much. The crypto is implemented in hardware. It's transparently added and removed as data is stored to and fetched from memory. The CPU contains unencrypted data but the rest of the components never see the plaintext.

rini17 1345 days ago [-]
Does this solve a real problem? Has the hardware owner leaking data from VMs actually been an issue?
lxgr 1345 days ago [-]
The hypothetical possibility is enough to be a very real problem if decision makers perceive it to be.

And unlike facetious data locality laws that equate physical location with logical control, confidential/trusted computing might actually be able to address their (in my opinion not unfounded) concerns.

Reelin 1345 days ago [-]
You can't prove they don't spy on you for their own gain (financial or otherwise), and without it a single rogue employee with physical access is all that's necessary. There are also plenty of small cut-rate cloud providers out there without much in the way of reputation.
nurettin 1345 days ago [-]
Hardware-based confidential computing is one thing. What we are kind of missing is encryption at the database layer, where indexes are computed in a way that gives you fine-grained control over who can access which row. I am imagining a certificate-chain-based database index where you can't select data that your certificates (roles) don't allow, and where that check happens quickly at the database layer, so not even the admin can gain access.
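The indexing part is the hard bit, but the access-control half of that idea can be sketched with plain envelope encryption today: each row gets its own key, wrapped once per role allowed to read it, so an admin holding the raw table sees only ciphertext. A minimal sketch using the cryptography package; the table layout and role names are made up for illustration, and searchable indexes over the encrypted columns are not addressed:

    from cryptography.fernet import Fernet

    # One long-term key per role; in practice these would live in an HSM/KMS.
    role_keys = {"billing": Fernet.generate_key(), "support": Fernet.generate_key()}

    def store_row(plaintext: bytes, allowed_roles: list) -> dict:
        row_key = Fernet.generate_key()
        return {
            "ciphertext": Fernet(row_key).encrypt(plaintext),
            # one wrapped copy of the row key per role that may read this row
            "wrapped_keys": {r: Fernet(role_keys[r]).encrypt(row_key) for r in allowed_roles},
        }

    def read_row(row: dict, role: str, role_key: bytes) -> bytes:
        wrapped = row["wrapped_keys"][role]          # KeyError if the role isn't allowed
        row_key = Fernet(role_key).decrypt(wrapped)
        return Fernet(row_key).decrypt(row["ciphertext"])

    row = store_row(b"alice,555-0100", allowed_roles=["support"])
    print(read_row(row, "support", role_keys["support"]))    # b'alice,555-0100'
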
FridgeSeal 1345 days ago [-]
I was thinking about just this recently, although more from a search perspective: how do I build a full-text search index where different users can see different subsets of the documents, ideally without storing multiple copies of the documents? I'm convinced there are some clever data structures that might allow this, but I haven't found them yet.
hansvm 1345 days ago [-]
Without any additional constraints (e.g. that users/documents are clustered, that each user only has access to a small or large subset of documents, etc...) there aren't any great solutions. No matter the data structure, with n documents and m users you need on average at least nm bits in addition to the space to store the documents. Document insertion is at least an O(m) operation, and user insertion is at least an O(n) operation (for any fixed data structure on average across all possible user-document mappings of that size).
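(As I read it, the bound is a counting argument: the structure must be able to distinguish every possible user-document visibility matrix.)

    % Counting argument behind the nm-bit average lower bound
    \[
      \#\{\text{user--document visibility matrices}\} = 2^{nm}
      \;\Longrightarrow\;
      \log_2\!\left(2^{nm}\right) = nm \text{ bits needed on average, beyond the documents themselves.}
    \]
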
decisionSniper 1345 days ago [-]
The NSA's been working on something similar for years, cell-level security. Gotta have some way to compartmentalize the data I guess.

https://en.wikipedia.org/wiki/Apache_Accumulo

RcouF1uZ4gsC 1345 days ago [-]
> Under the hood, Confidential Computing environments keep data encrypted in memory, and elsewhere outside the CPU. Data is decrypted within the CPU boundary by memory controllers using embedded hardware keys that a cloud provider does not have access to.

I don’t know how much that buys you. If the threat model is that the cloud provider cannot be trusted, can’t the cloud provider just run your software on a machine that does not encrypt memory? After all, they control the machines and schedule what code runs on them. How could you even detect an attack like that?

EDIT:

Unless you are using some theoretically secure system such as fully homomorphic encryption, if the organization that physically controls the machine your code runs on wants to compromise you, they can.

anonymousDan 1345 days ago [-]
Look up remote attestation. Essentially it allows you to verify you are talking to your code inside an enclave/encrypted VM.
darkwater 1345 days ago [-]
I'm reading about it here https://en.wikipedia.org/wiki/Trusted_Computing#Remote_attes... but maybe it's not the best link. From what I understand from Wikipedia, remote attestation works in the scenario in which you are the producer of the TCP enclave, or trust it. And you can know that the software running in there is that specific copy, not a tampered one.

But in this case I think OP was claiming that the google checkbox/dmesg message could be just fake/placeholders and you would not know (unless you can really inspect the internals). Am I getting something wrong?

gcommer 1345 days ago [-]
If you trust the CPU vendor to not be colluding with your cloud provider, and that the cloud provider hasn't found and exploited a hardware or software vulnerability in the enclave, then a successful remote attestation is a cryptographic proof that you are executing your code unmodified without the cloud provider being able to see either your code or (with careful delivery) your data.

There are additional side channel concerns such as RAM bus sniffing; it looks like the EPYC processors handle that by encrypting all memory accesses. Additional concerns include memory access patterns and power usage monitoring; I don't see these mentioned in any of AMD's SEV whitepapers but they can (with great care) be mitigated in your software.

Disclaimer: I work for Google but nowhere remotely related to this (I know only publicly available information about this product); I happened to do very similar research work 6 years ago in grad school.

aspenmayer 1345 days ago [-]
theamk 1345 days ago [-]
> Firmware that is signed and verified by Google's Certificate Authority establishes the root of trust for Secure Boot, which verifies your VM's identity and checks that it is part of your specified project and region.

What was the threat model again, could you remind me? /s

1344 days ago [-]
reportgunner 1345 days ago [-]
Eh, why wouldn't it be a game changer?
russfink 1345 days ago [-]
Homomorphic computing is not mentioned...?
topspin 1345 days ago [-]
It's a product announcement by a Google employee and they're not selling homomorphic computing. They're selling a different product.
ackbar03 1345 days ago [-]
I'm not a cryptography expert, but from what I've learnt about homomorphic encryption in my courses it's nowhere close to being usable at reasonable speeds.
Joker_vD 1345 days ago [-]
Yeah, turning 1 bit of plaintext into 20 MB of ciphertext kills any hope for "reasonable speed". I have a toy functional language implementation that has no built-in data structures whatsoever, and even it in the end represents a bit only as an 8-byte closure.
ssmiler 1345 days ago [-]
Maybe in batched HE schemes. In https://github.com/tfhe/tfhe the ratio is much smaller, 2.5kB per plaintext bit.
anonymousDan 1345 days ago [-]
This is actually practical for general workloads. Homomorphic encryption, not so much (although there have been advances for machine learning inference recently).
gumby 1345 days ago [-]
Several orders of magnitude slower. Run it in a Secure Enclave (like SGX) and you run at full speed.
llarsson 1345 days ago [-]
Better speed, yes, but SGX has been broken in different ways, and just a few days ago Apple's secure enclave for phones was broken too.

These are extra obstacles along the way, but not insurmountable walls.

ilaksh 1345 days ago [-]
Why would people downvote the most insightful comment? Surely it is on topic to mention the next level of protection?

I believe that we need a new paradigm for voting in threads because people are consistently using it poorly.

mitchtbaum 1345 days ago [-]
Google is The Source to Trust. (abcMT)aking all human-computing has with a passion you need to see, I will show you this. Believe.

God Is Great. God Is Good. Let Us Thank Her for This Tool. By Her Hand We All Are Fed.

One Cup

brunoTbear 1345 days ago [-]
Nelly!