
InfoSec Bullshido – Zero-day



“Zero-day” (https://en.wikipedia.org/wiki/Zero-day_%28computing%29) or “0-day” (also pronounced “oh-day”), is another term that we hear quite a lot in news about information security – usually in the context of security incidents. This is another one where it’s a bit of a tough call whether to post it under InfoSec Basics or InfoSec Bullshido.

The term is useful in the context of understanding how software vulnerabilities are identified, communicated, and addressed, but it is also frequently used as an excuse when something goes wrong, or as a “buzzword” (http://en.wikipedia.org/wiki/Buzzword) when trying to sell a product or service.

Historically (https://web.archive.org/web/20180131070511/http://markmaunder.com/2014/06/16/where-zero-day-comes-from/ - retrieved from archive.org), the term zero-day appears to have originated with software pirates, and referred to the amount of time since a piece of software was commercially released. So, a “zero-day” referred to software which had not yet been released.

Nowadays, the term appears to be ambiguous when used as a noun, so I think it’s probably easier to break the discussion down into zero-day vulnerabilities, and zero-day exploits. It should be noted that different writers often have slightly different definitions of these terms (and other terms, such as “zero day attack” and “zero day malware”), usually due to the perspective from which they are defining them, or in an attempt to draw more precise distinctions.

In general, a zero-day vulnerability refers to a vulnerability for which a patch is not available, while a zero-day exploit refers to an exploit based on a zero-day vulnerability. So, day zero is the day on which the clock starts for the software developer, once they become aware of the issue, and we would certainly want the gap between that moment and the deployment of a released patch to be as small as possible.
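To make that distinction concrete, here is a minimal sketch in Python (purely illustrative – the data structure, field names, and dates are my own invention, not any real vendor’s process or a real CVE):

```python
from datetime import date

def classify(vuln):
    # A vulnerability is a "zero-day" only for as long as no patch is available.
    return "zero-day" if vuln.get("patch_released") is None else "known, patch available"

def exposure_window_days(vuln):
    # Days from "day zero" (the developer becomes aware) to the patch release --
    # the gap we want to keep as small as possible.
    end = vuln.get("patch_released") or date.today()
    return (end - vuln["vendor_aware"]).days

# Hypothetical example, not a real CVE:
vuln = {"vendor_aware": date(2023, 1, 10), "patch_released": date(2023, 2, 14)}
print(classify(vuln))              # known, patch available
print(exposure_window_days(vuln))  # 35
```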

Zero-day vulnerabilities represent a major risk, but that risk is theoretical until or unless an exploit is developed. Once a zero-day exploit exists, the risk level then depends on the nature of the exploit, and how quickly the vendor/developer learns about and addresses it. In practice, developers will become aware of an issue through discovering it themselves (often due to bugs reported by users), being notified of it by security researchers (often through bug-bounty programs), or through discovering (or being notified of) the existence of an exploit.

From a risk management perspective, it is important to realize that the life-cycles of vulnerabilities and exploits have accelerated dramatically over the years, and that there are variations on all of these themes. Here are a few scenarios, in no particular order:

1) Developer discovers vulnerability

In this case, the developer can identify the vulnerability, develop a patch, and release it before the broader community is aware of the issue. Sometimes, the developer will not advise anyone that a vulnerability existed in the first place.

2) Responsible security researcher notifies developer of vulnerability

In this case, the developer can investigate the vulnerability, develop a patch, and release it before the broader community is aware of the issue. In this case, there is usually an announcement made after the patch is available.

A number of security researchers follow some sort of “disclosure deadline” policy, under which they advise the developer that they will announce the vulnerability to the broader community after a certain period. Policies like this are meant to ensure that the developer has time to address the issue, but also that the broader community is informed of the vulnerability in a timely fashion. As an example, Google’s Project Zero follows a standard 90-day policy. (https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-faq.html)
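As a rough illustration of how such a deadline works in practice (the 90-day window matches Project Zero’s published standard, but the dates and the “disclose earlier if a patch ships” rule are just assumptions for this sketch):

```python
from datetime import date, timedelta

DEADLINE_DAYS = 90  # e.g. Project Zero's standard window

def public_disclosure_date(reported, patch_released=None):
    # The issue goes public at the deadline, or earlier if a patch ships first
    # (a common policy detail, but not universal -- an assumption here).
    deadline = reported + timedelta(days=DEADLINE_DAYS)
    return min(patch_released, deadline) if patch_released else deadline

# Hypothetical dates, for illustration only:
print(public_disclosure_date(date(2024, 3, 1)))                     # 2024-05-30
print(public_disclosure_date(date(2024, 3, 1), date(2024, 4, 15)))  # 2024-04-15
```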

3) Vulnerability is publicly announced, or “dropped”, without notifying the developer.

In this case, the developer finds out when (or shortly after) the broader community does, and has to rush to develop and deploy a patch. Dropping a vulnerability without prior notice is not generally considered ethical behaviour among security researchers, although this scenario also covers cases where a researcher discovers information about a vulnerability on a malware forum and then notifies the developer.

4) Vulnerability is “announced” through the developer or a researcher identifying an exploit which uses it.

This is about as bad as it gets – the vulnerability is already being exploited “in the wild” when the developer learns about it.

The worst case, of course, is where exploits for a vulnerability are actively being used, and no one is aware of them. It is generally thought that many intelligence services hoard zero-days so that they can be used to maximum effect. Stuxnet is one of the better-known examples of this – see Jack Rhysider’s Darknet Diaries episode for an excellent summary (https://darknetdiaries.com/episode/29/).

Another very interesting scenario is illustrated by the so-called “Drupalgeddon” and “Drupalgeddon 2.0” vulnerabilities (https://www.securityweek.com/drupalgeddon-critical-flaw-exposes-million-drupal-websites-attacks), which affected the Drupal open-source web content management framework, used by a wide variety of sites. In this case, the developer made an announcement BEFORE releasing the patch (https://www.drupal.org/psa-2018-001).

Huh? Why would they do that?!

Well, since Drupal is open-source, the developers knew that malware authors (and security researchers) would immediately review any new version to identify all updates which could potentially represent security vulnerabilities, which could then be exploited on unpatched systems. In this case, the Drupal developers suspected (and events proved them correct) that the vulnerability would be identified and exploits developed very quickly, which is why they wanted to emphasize the critical nature of this update in particular, so that as many systems as possible could be patched as quickly as possible.

To summarize, the term “zero-day” generally refers to vulnerabilities or exploits for which no patch has been released.

But where does the Bullshido come in?

One example of Bullshido is where the term “zero-day” is used as a scary term, meant to generate FUD (https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt), through making things sound worse than they might otherwise seem. If a “vulnerability” is bad, a “zero-day vulnerability” must be worse, right? If anything, the distinction should be between “patched” and “unpatched”. The vast majority of security incidents involving software vulnerabilities take advantage of known vulnerabilities for which patches exist, making it clear that patch management is one of the major challenges in InfoSec today.

At the highest level, the challenges affecting patch management include budget, availability of resources with the required expertise, complexity of systems / environments, education of resources at all levels, prioritization, and risk management discipline. Patching properly is hard – for small companies, it’s usually budget/resources, while for larger companies, it’s usually budget/resources AND the size/complexity of the technology footprint.

In any case, unless an organization is “fully” patched, the risk associated with “known” vulnerabilities is usually more than enough to worry about, thank you very much. (This is actually a vital lesson around risk management and prioritization. If you’re not up to date with your patching, focus on that first, and THEN start thinking about zero-day vulnerabilities.)
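Here is a deliberately simple sketch of that “patch first, then worry about zero-days” ordering (the backlog entries and the ordering rule are invented for illustration – a real vulnerability-management process would obviously be more nuanced):

```python
# Invented backlog entries, for illustration only.
backlog = [
    {"issue": "unpatched CVE on internet-facing VPN appliance", "severity": 9.8, "patch_available": True},
    {"issue": "rumoured zero-day in vendor X's product",        "severity": 8.0, "patch_available": False},
    {"issue": "unpatched CVE on internal file server",          "severity": 7.5, "patch_available": True},
]

# Known issues with an available patch come first (highest severity first);
# zero-days drop to the end, since there is no patch to deploy for them yet anyway.
ordered = sorted(backlog, key=lambda v: (not v["patch_available"], -v["severity"]))

for item in ordered:
    action = "apply patch" if item["patch_available"] else "mitigate / monitor"
    print(f'{item["issue"]}: {action}')
```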

Another example of “zero-day” Bullshido is where an organization blames an incident on a “zero-day” exploit, whether or not it’s true. Similarly, “advanced actors” seem to be responsible for just about every incident, even when the exploited vulnerability is trivial. By calling something a “zero-day”, some companies try to minimize their bad press, leading people to think that no one could reasonably have expected them to prevent the incident.

To be fair, it’s sometimes true. Far more often, however, the vulnerability in question may be relatively recent, but is not a zero-day.

A fascinating case-study in this whole question is around the so-called EternalBlue exploit (https://en.wikipedia.org/wiki/EternalBlue). This was a true zero-day, in that it was exploited prior to the vendor being aware of it, apparently by state actors (https://www.wired.com/story/nsa-zero-day-symantec-buckeye-china/).

A lot of drama here, actually, including back-and-forth between a number of apparent “state actors” (i.e., groups associated with, or part of, state intelligence services), the Edward Snowden leaks (https://en.wikipedia.org/wiki/Edward_Snowden), and even the fact that the “kill-switch” was identified (almost by accident) by a researcher named Marcus Hutchins (https://en.wikipedia.org/wiki/Marcus_Hutchins), who was later arrested for earlier work as a malware author.

In any case, the vendor was notified of the vulnerability (probably in early 2017), and they released a patch in March 2017. All of this was BEFORE the vulnerability was leaked to the broader community in April 2017, and BEFORE the global WannaCry ransomware attack (https://en.wikipedia.org/wiki/WannaCry_ransomware_attack) in May 2017. So, strictly speaking, WannaCry was not a zero-day, even though it was devastating, with estimates of losses ranging from hundreds of millions of USD to as high as four billion USD.

In this particular case, it might be unnecessarily harsh to attack an organization for describing WannaCry as a zero-day, but most incidents involve exploits for which patches have existed for months, or years. Still, if you’re going to use the term “zero-day”, use it correctly, and don’t spread BS.

Cheers!
