A young woman – let’s call her Alice – had recently moved into her own apartment, and invited her parents over for Thanksgiving dinner (in October, of course – this being Canada). Her mother – let’s call her Eve – was talking to Alice in the kitchen, and noticed that she had the turkey thawing in the sink with a dish-rack covering it.
Alice’s father – let’s call him Bob – walked in just as Eve was asking Alice why she had the dish-rack covering the thawing turkey. Alice responded: “That’s the way Dad always does it. Right, Dad?”
Bob responded, “Yes, Dear. But you don’t have a cat.”
I’ve heard several variations of this story, and think it illustrates a vital point about establishing processes. Processes are essential to any robust system, and developing and establishing them is critical to any successful organization, be it a PMO (Project Management Office), SOC (Security Operations Centre), or anything else.
HOWEVER, processes can be horrible, time-wasting, stress-inducing disasters if they are badly-designed or rigid. Or if they get out of date – the fact that we DID something in a certain way in the past is NOT sufficient reason to continue doing it in that way.
Over more years than I care to discuss, I’ve worked with, designed, implemented, and fought processes for data loading, software change management, project management, user support, and many other things. Far too often, processes are designed, implemented, and then left to rot, leading to long-term frustration and wasted time. And, ironically enough, sometimes processes are bad because they are incomplete – i.e., there isn’t enough process.
Vulnerability management is where I most recently encountered a frustrating one... A “medium” vulnerability was reported for a widely-used application framework, and forwarded for remediation. (Note that word – not for review or analysis, but for “remediation”.)
The issue was perfectly legitimate, but it assumed that the application framework was internet-facing – and this one was not, which reduces the risk.
It also assumed that the data in question was sensitive or confidential in some way. The only data potentially exposed was the language preference used in the browser – and the small group of users with access to the tool were located in the same office, which reduces the risk.
And, to top it all off, the application framework was used inside a third-party application. The vendor confirmed that they had reviewed the issue, decided it was not relevant to their use-case, and decided that they would not be addressing it. This reduces the risk and dramatically increases the cost of any remediation or mitigation we might want to do anyhow.
An even moderately well-designed and supported process would have a way to record this information, a way to review it, and a way to flag it as an “acceptable risk” to the company. It doesn’t have to be complicated, or expensive, it just needs to record the details and support a minimal workflow.
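For illustration, here is a minimal sketch of what such a record might look like – a Python dataclass with invented field names, not the schema of any particular tool. The only point is that the context (exposure, data sensitivity, vendor position) and the decision get written down in one place.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class Disposition(Enum):
    OPEN = "open"
    REMEDIATE = "remediate"
    ACCEPTED_RISK = "accepted risk"


@dataclass
class VulnerabilityRecord:
    """One reported issue, plus the context needed to judge it."""
    title: str
    severity: str                      # as reported, e.g. "medium"
    internet_facing: bool              # exposure changes the actual risk
    data_sensitivity: str              # what could actually leak
    vendor_position: str = ""          # e.g. "reviewed; will not address"
    disposition: Disposition = Disposition.OPEN
    reviewed_by: str = ""
    review_date: Optional[date] = None
    rationale: str = ""                # why this disposition was chosen


# The framework issue from the story, recorded rather than just forwarded:
issue = VulnerabilityRecord(
    title="Medium finding in application framework",
    severity="medium",
    internet_facing=False,
    data_sensitivity="browser language preference only",
    vendor_position="vendor reviewed the issue and will not address it",
)
issue.disposition = Disposition.ACCEPTED_RISK
issue.reviewed_by = "security review"
issue.review_date = date.today()
issue.rationale = "Not internet-facing, no sensitive data, vendor will not patch."
```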
All too often, however, processes grow organically out of other processes, or are invented “on the fly” - when, for example, a vulnerability scan is run and the results are simply disseminated through an organization without planning. These weak “processes” can survive in any organization, but a well-run and well-led organization will eventually identify and address them – often because of the contrast between the weak process and other, better-designed processes.
The biggest problem with processes, though, is the Process Zombies.
Organizations like the Centers for Disease Control and Prevention (https://www.cdc.gov/) in the United States coordinate responses across a wide variety of organizations – including government (municipal, state, and federal), academia, corporations, hospitals, and the public. And they have a long-standing reputation for doing these things well, so you can be reasonably confident that they have some good internal processes.
At the CDC’s Center for Preparedness and Response, they provide information on a wide variety of public health threats – including zombies (https://www.cdc.gov/cpr/zombie/index.htm) – where they state:
Wonder why zombies, zombie apocalypse, and zombie preparedness continue to live or walk dead on a CDC web site? As it turns out what first began as a tongue-in-cheek campaign to engage new audiences with preparedness messages has proven to be a very effective platform. We continue to reach and engage a wide variety of audiences on all hazards preparedness via “zombie preparedness”.
I’ve encountered a fair number of zombie films and books over the years (including the graphic novel provided by the CDC – https://www.cdc.gov/cpr/zombie/novel.htm), and believe that good zombie stories are usually about people and how we react to novel situations. Often, they are about how thin the veneer of civilization can be, and how quickly many people can descend into behaviours we would condemn in our current society.
For my current purpose, the zombie is a metaphor for blindly following processes without thought. We often do things in a certain way because we’ve done them that way in the past – even if the “past” is very recent. And if we don’t understand (or even think about) WHY we are doing something, even when it is no longer useful or relevant, we run the risk of becoming Process Zombies.
But what’s the harm? It may be frustrating, and waste time, and cost money, and increase risk, and... erm. Never mind.
To assess the risk of Process Zombies, consider the impact of password complexity rules (https://www.til-technology.com/post/infosec-bullshido-password-complexity-rules), or look at the InfoSec example above.
InfoSec is all about risk. We identify, triage, prioritize, and manage risk every day, and there are more risks than any organization can possibly address. So, how do we deal with all of it?
Process.
That’s it. Process.
Let’s look at a simple example. We start with a list of issues, trouble-tickets, vendor notifications, intelligence feeds, news stories, scan results, and endless other things.
Start by adding them all to a single list – this could be anything from a spreadsheet to an enterprise software package, but the key is to consolidate.
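As a sketch of what that consolidation might look like (in Python, with made-up feeds and fields – a spreadsheet would do just as well):

```python
# A minimal sketch of the "single list" idea: every feed becomes rows in the
# same structure, whatever its origin. Sources and fields are illustrative.
scan_results = [{"title": "Outdated TLS on internal host", "source": "scanner"}]
vendor_notices = [{"title": "Patch available for framework X", "source": "vendor"}]
helpdesk_tickets = [{"title": "User reports phishing email", "source": "helpdesk"}]

consolidated = []
for feed in (scan_results, vendor_notices, helpdesk_tickets):
    consolidated.extend(feed)

# One place to look, regardless of where an item came from.
for item in consolidated:
    print(f"[{item['source']}] {item['title']}")
```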
Then, we need some way to triage (https://en.wikipedia.org/wiki/Triage). The term comes from the French word “trier”, meaning “separate”, “sort”, “sift”, or “select”, and TIL the word isn’t actually related to the number three, even though triage generally sorts into three groups.
In any case, let’s categorize our initial list into three groups, using a simple high/medium/low. In general, everything in the “high” category should be more important than anything in the “medium” category. (I realize it’s more complex than this, and urgency can change, but this is – at worst – a useful starting point.)
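A rough sketch of that triage pass, again in Python with a deliberately naive, invented scoring rule – the real rule would reflect your own risk criteria:

```python
# Every issue lands in exactly one of three buckets, so anything "high"
# outranks anything "medium". The scoring rule here is purely illustrative.
issues = [
    {"title": "Exposed admin panel", "internet_facing": True, "sensitive_data": True},
    {"title": "Framework finding", "internet_facing": False, "sensitive_data": False},
    {"title": "Weak cipher on intranet app", "internet_facing": False, "sensitive_data": True},
]

def triage(issue):
    if issue["internet_facing"] and issue["sensitive_data"]:
        return "high"
    if issue["internet_facing"] or issue["sensitive_data"]:
        return "medium"
    return "low"

buckets = {"high": [], "medium": [], "low": []}
for issue in issues:
    buckets[triage(issue)].append(issue)

for level in ("high", "medium", "low"):
    print(level, [i["title"] for i in buckets[level]])
```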
Next, we prioritize the issues in each list, and start working through the list. Depending on resources, skill levels, and other factors, we could have different teams working on different parts of the list – for example, some issues are urgent but don’t require a specialized skill-set.
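Continuing the sketch, prioritizing within one bucket and splitting the work by skill-set could look like this – “effort” and “needs_specialist” are invented fields, purely for illustration:

```python
# Order the work within a bucket and hand it to whichever team fits.
high_bucket = [
    {"title": "Exposed admin panel", "effort": 3, "needs_specialist": True},
    {"title": "Unpatched mail gateway", "effort": 5, "needs_specialist": False},
    {"title": "Credential leak report", "effort": 1, "needs_specialist": False},
]

# Simple ordering rule: quick wins first within the same bucket.
queue = sorted(high_bucket, key=lambda issue: issue["effort"])

general_team = [i for i in queue if not i["needs_specialist"]]
specialist_team = [i for i in queue if i["needs_specialist"]]

print("General team:", [i["title"] for i in general_team])
print("Specialists: ", [i["title"] for i in specialist_team])
```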
Process design is a large field, but a basic principle is simplicity. To quote John Gall (https://en.wikipedia.org/wiki/John_Gall_(author)), whom I had not previously heard of, though I think I read a paraphrase or parallel of the idea in a Robert Heinlein novel...
“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”
So, start with a simple process that works. Keep it simple, keep it flexible, and don’t be a Process Zombie.
Cheers!