Should antivirus (AV) software be part of your threat model? Strictly speaking, yes, it probably should. AV is potentially dangerous to an organization and should be tested thoroughly before being deployed. As argued in the recent WSJ article about Kaspersky (note that the article is behind a paywall), AV software can threaten the confidentiality of a protected system.
But as any infosec professional can tell you, information security is about more than just confidentiality. The security triad is referred to by the acronym CIA, which most readers of this post will know stands for Confidentiality, Integrity, and Availability. In every security program, one of these three takes precedence over the other two.
In the case of the NSA contractor who placed classified material on their home computer, confidentiality was clearly the most important of the three. However, there are few organizations for which a breach of confidentiality is truly the most damaging outcome. For the vast majority of organizations, compromises of integrity and availability would have a far greater impact on organizational health.
The WSJ (and other outlets) have already done a sufficient job addressing how AV could compromise confidentiality, so we will cover it only lightly here. AV software, while scanning files, could upload them to the vendor's cloud scanning service. Once uploaded, the files could be handed directly to third parties (like Russian intelligence, as alleged in the WSJ article). Alternatively, the AV vendor could simply alert an interested third party to the existence of the files, enabling more precise targeting.
Because AV software operates at the kernel level, it can write to any file on the local system. This is true even for files that are opened in "exclusive mode" on Windows. Beyond the local system, AV software can impersonate any user logged into the machine and access any files on network shares that the user can write to.
Compromising the integrity of data can have devastating impacts, and those impacts are particularly difficult to detect when the changes are small. Stuxnet damaged Iranian centrifuges by fundamentally altering the integrity of process data (centrifuge RPM). While Stuxnet also hid the changes from operators, it was the integrity compromise that caused the damage.
In another scenario, picture a manufacturing firm bidding on a large contract. The firm would use its Enterprise Resource Planning (ERP) system to gather data for the bid. Subtle modifications to the ERP data could cause the company to submit a suboptimal bid based on false cost information.
In another example, consider the inventory tracking system at a military depot. Combatant commanders rely on supplies to enable troop movements. The inventory system could be modified to make commanders believe they had more ammunition, fuel, and other provisions than they actually did. The result could easily allow a lesser force to prevail in battle.
Finally, consider what might happen if the medical records of military members were tampered with. Service members must undergo rigorous pre-deployment physicals. If an attacker modified the results of some physicals, they could effectively render a percentage of soldiers non-deployable. Marking too many soldiers non-deployable would raise eyebrows, but even a small number would waste resources on chasing non-existent issues. Additionally, in a military medical system already plagued by errors and inefficiency, the hit to troop morale would be significant.
When considering attacks on integrity, it is unlikely that AV software would be blamed for the damage. In fact, most attacks on integrity would not be attributed to a cyberattack at all. Occam's Razor demands that we consider more benign explanations, such as software bugs and human error, first. In a sufficiently advanced attack, tying the integrity issues to AV software would require extensive instrumentation on the machine. That level of instrumentation is beyond the resources of most organizations and, in any case, could easily be detected by the antivirus software itself.
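That said, even a lightweight form of instrumentation, such as a cryptographic hash baseline of critical files, can at least reveal that an integrity compromise occurred, even if it cannot attribute the change to AV. Here is a minimal sketch using only Python's standard library; the function names are ours, purely for illustration:

```python
import hashlib

def build_baseline(paths):
    """Record a SHA-256 digest for each file so later changes stand out."""
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def find_modified(baseline):
    """Re-hash each baselined file; return the paths whose contents changed."""
    modified = []
    for path, recorded in baseline.items():
        with open(path, "rb") as f:
            current = hashlib.sha256(f.read()).hexdigest()
        if current != recorded:
            modified.append(path)
    return modified
```

Note that for this check to prove anything, the baseline itself must be stored somewhere the monitored machine (and the AV software running on it) cannot alter.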
There are numerous ways that AV software can impact the availability of an information system. Consider that most AV software operates a network firewall and can terminate processes. Either capability alone could create serious impacts, but combining them would be devastating.
Imagine the impact of AV selectively terminating connections to a database server. Alternatively, AV could terminate the database client process. Of course, the server process could be killed if AV software is running on the database server, but that might draw too much attention. Any time AV needs a machine to become temporarily unavailable, all it needs to do is terminate the lsass.exe process from kernel space. This guarantees a system shutdown. As an added bonus, open files can't be saved, which may impact integrity as well. The logs for these events would be unlikely to point to the antivirus software.
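A defender can hedge against silent process termination by running an independent watchdog that records when a monitored process exits, separate from the system logs. The sketch below is a minimal illustration (the watch_once helper is our own naming, not part of any AV product or Windows API), and a production watchdog would monitor processes it did not spawn itself:

```python
import datetime
import subprocess
import sys

def watch_once(procs):
    """Poll each monitored subprocess once; report any that have exited.

    procs maps a human-readable label to a subprocess.Popen object.
    Returns a list of (label, returncode, utc_timestamp) tuples for
    processes that are no longer running.
    """
    exited = []
    for label, proc in procs.items():
        rc = proc.poll()  # None while running, return code after exit
        if rc is not None:
            stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            exited.append((label, rc, stamp))
    return exited

# Example: watch a long-running child process
worker = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
watched = {"worker": worker}
print(watch_once(watched))  # [] while the worker is still alive
worker.terminate()
worker.wait()
print(watch_once(watched))  # now reports the worker's exit
```

A log of unexplained terminations will not name the culprit, but it does establish when processes died, which is the starting point for any attribution effort.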
Next, consider that every year AV vendors deploy some signatures that cause false positive detections. An antivirus vendor wishing to impact availability can simply make a "mistake" in its signatures. Of course, this can't be targeted as precisely as the other options laid out above, but it might be the most viable option when plausible deniability is needed.
Finally, the AV vendor could simply render a machine unbootable by overwriting portions of the hard drive. While this is technically an attack on the integrity of the system files needed for boot, its goal is to impact availability.
Vendors have immense remote access to the systems where their antivirus software is deployed, and abusing that software can have potentially devastating impacts, as discussed above. Nearly every AV vendor possesses most of these capabilities. By installing AV software, you place a tremendous amount of trust in the vendor to do the right thing. Beyond malicious action by the AV vendor itself, the recent CCleaner backdoor showed that your vendor's security posture matters substantially too.
When constructing your threat model, ensure that you consider how you might react to an issue caused by your security software itself. If you need help building a threat model or just want to learn from a team that’s been there, done that, and has more tee shirts than they’ll ever wear, contact Rendition Infosec to set up a consultation.