(I am not part of any process with respect to this, embargoed or otherwise.)
Edit: the upstream commit is 58122bf1d856a4ea9581d62a07c557d997d46a19, called “x86/fpu: Default eagerfpu=on on all CPUs”, and it landed in early 2016. Greg K-H just submitted backports to all older supported kernels.
The point is: although relatively unlikely, it is still _possible_ that you need some mitigation even if you have newer hardware (Sandy Bridge or newer is where XSAVEOPT first showed up, I believe).
Disclaimer: I work on Linux at Intel.
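If you want to check whether your CPU even has XSAVEOPT, here's a quick sketch (mine, not anything from the kernel patch) that asks CPUID; per the SDM it is enumerated in leaf 0xD, sub-leaf 1, EAX bit 0:

    /* Sketch: report XSAVEOPT support. GCC/Clang on x86 only. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* __get_cpuid_count checks the max supported leaf for us. */
        if (!__get_cpuid_count(0xd, 1, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 0xD not supported");
            return 1;
        }
        printf("XSAVEOPT: %s\n", (eax & 1) ? "yes" : "no");
        return 0;
    }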
"3) post-Spectre rumors suggest that the %cr0 TS flag might not block speculation, permitting leaking of information about FPU state (AES keys?) across protection boundaries."
AES-NI is part of the vector/FP units and uses those registers as well.
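To make that concrete, here's a sketch of one AES-128 block encryption via the AES-NI intrinsics (round keys assumed already expanded; key schedule omitted). The key material and the data block live in XMM registers the whole time, i.e. exactly the state that lazy FPU switching leaves lying around:

    #include <wmmintrin.h>  /* AES-NI intrinsics; compile with -maes */

    /* Encrypt one 16-byte block with AES-128. rk[0..10] are the
     * expanded round keys; all of this sits in the XMM register file. */
    static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
    {
        int i;

        block = _mm_xor_si128(block, rk[0]);        /* initial whitening */
        for (i = 1; i < 10; i++)
            block = _mm_aesenc_si128(block, rk[i]); /* rounds 1..9 */
        return _mm_aesenclast_si128(block, rk[10]); /* final round */
    }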
Yes, that does create some new attack vectors, but these "bugs" make me think that the whole architecture is a rooted, burning trash fire.
It's long been the case that side-channel attacks can extract key material from conventional CPUs. Power analysis alone has been a science for decades now and isn't going away any time soon, made all the more exciting by the prevalence of RF and advances in antennas. Spectre and the like are just another wake-up call for those not paying attention, e.g. in cloud services. Consider yourself one of the enlightened when it comes to crypto material handling.
Makes me wonder whether there is any incentive to do crypto properly, or whether security theater will always prevail?
It's security through obscurity.
Be careful, though: there are real TPMs (actual chips built by companies that, in theory, know how to build secure-ish hardware) and there are firmware-emulated TPMs. I think that Intel and AMD both offer the latter. Intel calls its offering “platform trust technology”. I take that to mean “please trust our platform even though there is sometimes a web server running on the same chip as the so-called TPM”.
I have no clue about Intel, but for AMD there is basically a separate ARM core within the CPU, so it has TrustZone built in. Is it just a web server too? I'm truly curious.
These guys are the leaders in small USB crypto keys. Yes, their devices offload various crypto routines, and they are cheap.
This is a misconception. These crypto keys are only designed to protect RSA and ECC private keys, and to encrypt (wrap) a symmetric key rather than the actual data, for good reasons. The actual symmetric encryption is still performed on the host computer, so the actual AES key can still be stolen via a CPU side channel.
Are these tokens any good? Yes, they guard your private key. Is it enough to protect you from Spectre? No.
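To illustrate the split, here's a sketch of the host-side half against OpenSSL's EVP API (my own sketch, nothing YubiKey-specific): the token would only ever wrap/unwrap the 32-byte key, but the bulk AES-GCM below runs on the host CPU, which is where a side channel would read the key from:

    #include <openssl/evp.h>

    /* Bulk-encryption half of a hybrid scheme. In a real deployment the
     * token RSA/ECC-wraps `key`; everything below still executes on the
     * host CPU, so the AES key lives in host memory and registers. */
    int encrypt_on_host(const unsigned char *pt, int ptlen,
                        unsigned char *ct, unsigned char tag[16],
                        const unsigned char key[32],
                        const unsigned char iv[12])
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len, total;

        if (ctx == NULL)
            return -1;
        EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, ct, &len, pt, ptlen);
        total = len;
        EVP_EncryptFinal_ex(ctx, ct + total, &len);
        total += len;
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);
        EVP_CIPHER_CTX_free(ctx);
        return total;
    }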
Also, source on them being "much more trivial to exploit than the article's issue"? The only issue I've heard with Yubikey's certificate operations was https://www.yubico.com/keycheck/ where they also provided anyone affected with a replacement key at no charge.
You posted on a thread about one process stealing memory from another using CPU delays.
A YubiKey exposes a device (or types as a USB keyboard), which every single user process has access to.
So? Are there any actual exploits you'd like to share that take advantage of either of these? Or are you just speaking in hypotheticals? Because in that case, basically everything you do on any computer that isn't airgapped (and even that can be exploited) is going to theoretically be exploitable.
They're not high performance, they aren't "security focused" hardware, nor are they a perfect fit for the task, but they're reasonably well understood and broadly available. The Pi Zero does have closed "GPU" firmware, but there is an effort to run open firmware on it.
However, if you expect the host to attack the device, USB OTG (i.e. Linux USB gadget drivers) may not be a good choice; you may want to access it via the network instead, which opens up more choices (though most will not be as small).
The other alternatives are basically going to be microcontrollers, for example the FST-01/Gnuk, and FPGAs, which are still more of a black box than CPUs at the moment.
The next architecture should/will have dedicated on-chip space for things like encryption. That we are mixing essential and trivial data in the same space, and expecting neither to leak into the other, is the root of the problem. I wouldn't be surprised if in ten years we are talking about L1 through L3 cache, with a separate "LX" cache for the important security stuff.
On a server, with a far greater proportion of security-related tasks, we may need a greater allocation to security. A split between security-specialized CPUs with lots of separate protected chips, and general-use CPUs with one bigger chip, seems likely.
Why not relegate the trusted stuff to a sub-chip then? (i.e. TPMs in desktop PCs, or iOS' "secure enclave")
(I say "chip" but I think it is more likely to be a separate 'core' on a chip, one with it's own cache and ram. It wouldn't need much of either.)
"Makes me wonder whether there is any incentive to do crypto properly, or whether security theater will always prevail?"
Yes and no... It's really important that this be viewed in the context of the discussion opened by Theo in the video from the previous HN post (provided in this thread by codewriter23).
Here's my TL;DW from the irritatingly poor-quality video:
Yes, they are pissed that they are being excluded (rumour is Amazon and Google have been implementing fixes).
However, they are not necessarily "not respecting" the embargo, according to the methodology Theo outlines in the video: (speculatively) shut down _any_ potential source of speculative-execution vulnerabilities to ensure they are safe, without giving weight to any one rumour, and then gradually prune back the precautions as the issues are publicly disclosed.
Apparently they used a similar strategy previously to provide patches for sshd before they were allowed to publicly disclose the vulnerability: prevent the bug from being reachable, without revealing in the commits exactly what is broken, by never touching the offending code. In this case the idea is to be non-specific and disable a whole class of things even though it might not be necessary (because they really don't know where the problem is exactly).
Disclaimer: The above is not my opinion; it was my interpretation of the relevant context from the video. I do not know if it matches their actions.
It seems possible the commenter on the oss-security mailing list is not aware of this strategy and is giving more weight to OpenBSD's patch than it deserves (and perhaps wrongly implying OpenBSD has disrespected the embargo as a side effect).
However these patches are way beyond me so I cannot tell.
The part you're referencing is Theo speculating about the next bug. He suspects fixing it requires flushing a cache line, but he doesn't know which one (because he doesn't know where the bug is), so he proposes flushing all of them until the bug is published, and then removing the flushes that aren't necessary.
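Concretely, "flush them all" boils down to something like this (my sketch, assuming 64-byte lines; the real thing would live in the kernel's context-switch path):

    #include <emmintrin.h>  /* _mm_clflush, _mm_mfence (SSE2) */

    #define CACHE_LINE 64   /* assumed line size */

    /* Evict every cache line covering a buffer. If you don't know WHICH
     * line leaks, flush them all, then prune once the bug is public. */
    static void flush_range(const void *buf, unsigned long len)
    {
        const char *p = (const char *)buf;
        const char *end = p + len;

        for (; p < end; p += CACHE_LINE)
            _mm_clflush(p);
        _mm_mfence();  /* order the flushes against later accesses */
    }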
He then mentions the last serious OpenSSH bug. Instead of publishing a fix for the bug (and thus disclosing the bug) they decided to publish a patch that moved a bunch of code around and just happened to also make the buggy code unreachable. Then they told everybody to upgrade and once that happened they could safely disclose the bug and publish a fix for it. No embargo necessary and everybody got the fix at the same time. (I assume that's why he brought it up.)
Can someone with deep enough knowledge of the patch tell if it implicitly demonstrates the flaw (and therefore effectively breaks public disclosure), or is it purely speculative? (Oh god, the puns are killing me.)
This sort of thing happens regularly. I remember an incident relatively recently when someone inaccurately "pointed out" that Arch Linux had "broken" an embargo by packaging an upstream release.
The language being used makes it sound like OpenBSD is somehow breaking agreements. Considering this appears to be the result of the false perception that OpenBSD breaks embargos it is a party to, it's important to fight this loose usage of words.
This case (or these cases), however, might be an exception.
I fully agree that one should be careful of propagating such false perceptions about OpenBSD (or any other entity).
However, these circumstances can also be a matter of safety. For instance, an easily exploitable SSH vulnerability can cause serious damage to lots of institutions.
Further, the embargo isn't/shouldn't be about protecting Intel; it's about protecting everyone who uses Intel CPUs (sometimes those goals are aligned, sometimes not). How you go about that is one thing, but if you intentionally disrespect the embargo (whether you were in on it or not), the assumptions and motivations behind it are invalidated and the consequences could be huge.
Now, you don't necessarily have to agree with the embargo, but if you don't know the consequences (in this case it looks like they were likely known), you take it upon yourself to judge, with most likely very limited information, the consequences of such a disclosure.
It's the same problem as doing an irresponsible disclosure of a major vulnerability. Most consider that a dick move.
Video. Contains profanity.
A better-quality video should surface eventually.
I wasn't impressed by whoever dropped the f-bomb.
An embargo is state-imposed, not corporation-imposed.
It was done in February for Spectre ("variant 2"). Strange that DragonFly is only now starting to clear registers.
If any mis-speculation happens, it has to happen after this point, at which point any user-supplied data left over in the register file isn't accessible to speculated code, because even speculated code can only use architectural names to address registers.
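A minimal sketch of what that clearing could look like (assuming AVX; the actual kernel code surely differs):

    #include <immintrin.h>  /* compile with -mavx */

    /* Zero the architectural vector registers on the way into the
     * kernel. After this, even code running under mis-speculation can
     * only name registers that no longer hold the previous user's data.
     * VZEROALL clears ymm0..ymm15 (and thus xmm0..xmm15). */
    static inline void scrub_vector_regs(void)
    {
        _mm256_zeroall();
    }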
The cost is more expensive context switches for now, since we'll have to fully unload and reload all SIMD/FP state. I'm sure Intel will fix this one in a couple of generations.
See the information about XSAVEOPT and the "Init and Modified Optimizations" in the SDM: intel.com/sdm.
As @luto said above, recent versions of Linux ripped out the lazy handling entirely.
It's basically another aspect of the branch-speculation bugs, not fixed by the original fixes.
(Since you can only learn a small part of the state each time, you need the other process's state to remain in the FPU while you repeat the process to learn the entire AES key or whatever.)
Yet I have always been mesmerized by how hard it is to understand security stuff. Maybe it is because I don't find it interesting, as I am more interested in creative stuff like gaming.
Honestly, it has been maybe 10 years since I abandoned the idea of caring about security. I just do the minimum: passwords, avoiding sketchy websites, not keeping sensitive files, using trustworthy software, etc.
Security is just too hard now. Maybe manufacturers like Intel are to blame, and obviously there MUST be some political will to make sure that most electronics are insecure, to give an advantage to intelligence agencies.
Ultimately, when I first heard about the Sony rootkit, and lately about the HD firmware worm, I felt really powerless and outdated. Even for a guy like me, who can write software, not being able to protect myself efficiently against those attacks, and having to tell non-programmers that "no, I cannot hack people's computers", is starting to make me feel like an idiot.
As the years go by, electronics seem more and more vulnerable, and I still feel completely unable to defend myself. Even politically, I'm sure that designing a completely secure computer would be a taboo subject, because people would argue that it could help the bad guys, and one could not pull off such a design politically.
The whole "I don't care since I have nothing to hide" is really a fair excuse to show that I'm not capable of defending myself, and I will let others go at their cyber wars without me caring at all. For now the security of individuals seems to be lost, and I fear that one day it won't only be state actors that use security for policing, it will be petty criminals. If cyber chaos ensues nobody will use computers anymore, and they might even become banned from possession.