Today Rendition Infosec is releasing a blog post that we started writing more than a month ago. Why now? The dust has settled, that’s why. Prior to the dust settling on Meltdown and Spectre, we think this very important conversation would have been lost in the noise. In light of these vulnerabilities, we think it is important to talk about how their disclosure was handled. What did we get right, what did we get wrong, and how should we in the security community posture for the disclosure of the next round of CPU vulnerabilities (there will be more)?
As most know, Intel found out about the CPU vulnerabilities as early as June of 2017. The mainstream public did not find out about them until January 2018, and then only because AMD engineers made careless comments in open forums that allowed independent researchers to reverse engineer the vulnerabilities. Normally, Google releases vulnerabilities to the public within 90 days of notifying the vendor, with exceptions given only in rare circumstances. In this case, they waited because of the seriousness of the vulnerabilities and the amount of work needed to patch them.
Did we get this disclosure right?
But did Google (and the others who waited to disclose) get this one right? Before examining this, let’s consider Google’s logic in their 90-day release policy. If Google released vulnerabilities to the public immediately upon finding them, attackers would undoubtedly create exploits for many of these vulnerabilities before vendors could release patches (and definitely before those patches could be applied by end users). Waiting on the release gives vendors time to release patches and gives most potential victims time to patch before being exploited. But Google understands that it can’t wait indefinitely – we’ve seen first-hand at Rendition Infosec that many vendors will abuse this grace period and take six months or more to release a patch. The time limit forces action (rather than apathy) on the vendor’s part.
In the case of Meltdown and Spectre, it was agreed that processor manufacturers and major operating system vendors needed more time to patch given the complexity of the vulnerabilities. But this time also played into attackers’ hands (particularly nation-states). We know that the research for Meltdown and Spectre was built from a presentation on kernel ASLR at Black Hat Europe in 2016. This talk (and discussions after it) became the eventual genesis for most of the independent discoveries of Meltdown and Spectre.
We must assume that some nation-state attackers also viewed this research and did their own follow-on research. To assume anything else is to assume that nation-state attackers are completely incompetent. Further, many nation-state organizations who engage in hacking are also engaged in defense. They have people on the closed Linux kernel mailing list who would have been aware of the vulnerabilities. Many nation-states also monitor the email of their populace (and developers are valuable targets), so one must assume they could have discovered news of the embargoed vulnerabilities there as well.
If nation-state hacking groups knew of the vulnerabilities, they almost certainly weaponized them. Independent researchers quickly wrote working proof-of-concept exploits for Meltdown using only the information carelessly posted by AMD engineers. A nation-state group that had full information about the vulnerabilities would have no trouble writing the same.
Asking the tough questions
When we consider the question of whether Google should have moved to release the vulnerabilities earlier, we think the answer is an overwhelming YES. But we need to explain why. There are several questions that we think play an important part in the decision calculus.
- Would nation-state actors likely independently discover and use the vulnerability to target others?
- Can vulnerable parties enact mitigations to limit damage from exploitation of the vulnerabilities?
- Will waiting to disclose the vulnerabilities likely increase the risk that vulnerable parties will be exploited by nation-states?
- Will logging later reveal the fact that exploitation occurred (e.g. the Discovery course of action)?
- Will waiting to disclose give vendors time they actually need to build a working patch or just time to stall?
We have already answered the first question above, so we won’t cover it further here.
The answer to #2 is a definite yes. Vulnerable parties could have changed architecture configurations for hypervisors to prevent running workloads of different data sensitivity on the same hardware. While this is already best practice, it is not followed by many organizations today. Vulnerable parties could also examine (and re-architect) the multiuser systems they have in place today.
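One practical first step for a proactive organization is simply inventorying which hosts are still exposed. Linux kernels 4.15 and later (released alongside the public disclosure) report CPU vulnerability status under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch of such a check follows – the function name and the "unknown" fallback are our own conventions, not part of any standard tooling:

```python
# Sketch: read the Linux kernel's CPU vulnerability status files.
# The sysfs path and file names are the real kernel interface (4.15+);
# older kernels and non-Linux hosts simply lack the files, which this
# helper reports as "unknown" rather than raising an error.
from pathlib import Path

SYSFS_VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

def cpu_vuln_status(base=SYSFS_VULNS,
                    names=("meltdown", "spectre_v1", "spectre_v2")):
    """Return {vulnerability_name: kernel-reported status or 'unknown'}."""
    status = {}
    for name in names:
        try:
            # Each file holds one line, e.g. "Mitigation: PTI" or "Vulnerable"
            status[name] = (Path(base) / name).read_text().strip()
        except OSError:
            status[name] = "unknown"  # pre-4.15 kernel or non-Linux host
    return status

if __name__ == "__main__":
    for vuln, state in cpu_vuln_status().items():
        print(f"{vuln}: {state}")
```

Run across a fleet, output like "meltdown: Vulnerable" versus "meltdown: Mitigation: PTI" makes it immediately clear where patching or re-architecting effort should go first.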
Though the answer to #3 is debatable, we think it is also a yes. Our logic is that while sophisticated nation-states might have built working exploits very early, it may have taken others far more time. Increasing the time to disclosure works in favor of less sophisticated nation-states (who are often less discriminate in their targeting).
The answer to #4 is a clear no. Even knowing about the vulnerabilities, detecting exploitation is difficult and requires specific instrumentation to be enabled. Pre-disclosure, almost nobody had that instrumentation enabled (and those that did weren’t regularly storing those logs). This question matters because if an organization is exploited during the embargo period, it is important that they are able to discover the exploitation after the fact. In this case, there was virtually no opportunity to do so.
The last question is probably the hardest to answer. In hindsight, we can see that vendors were clearly not ready to release patches. Intel struggled with their patches and so did AV vendors (in light of Microsoft changing kernel memory access patterns). But how much more time was needed to do it right (and are we even there yet)? Linus Torvalds (creator of Linux) called the Intel patches “complete and utter garbage” and implied Intel isn’t serious about security. Would more time have really eased this situation? If so, how much more time? And what does this extra time do for attackers?
We understand Google’s decision and don’t want to be “Monday morning quarterbacks,” but we think there are some lessons to be learned here. Knowing what we know now, it is reasonable to conclude that disclosing the vulnerabilities to the public on the regular 90-day timeline would have been a net positive for some while being a net negative for others.
Those who react quickly to vulnerabilities and take steps to mitigate them even before patches are released clearly would have benefited. Those who only apply patches (sometimes) within a 90-day window of release clearly would have been worse off. That leads us to ask who we’re trying to protect. Given these two groups, we believe those who take proactive steps to increase their security at every turn should be given the most consideration in these decisions. For this reason, we feel that Google should have released information about the vulnerabilities much earlier.
Whether you agree or disagree with our conclusion is the least important part of this post. While reading this (and considering your own stance on disclosure), please take the above points into consideration. Disclosure decisions are rarely black and white, and examining a decision in hindsight is always easier than making one in real time. But in this case, in light of all the facts, we think that Google got it wrong this one time. We aren’t sharpshooting either (that’s not our thing at Rendition) – we just want to lay out all the facts so you can critically analyze the situation for yourself.
Thank you Google Project Zero
Lest we end this post on a negative note, Rendition Infosec wants to thank Google and the members of Project Zero for everything you do. You’re making the world a safer place and we appreciate it. From the bottom of our hearts, thank you.