CMDB: The Oft-Missing Security Link
Many maturity models related to information security have been devised over the years. Many of these can become fairly nuanced, with some being specific to a particular market sector, such as finance or health services, and others trying to provide a more generalized approach. While some models have been developed from the ground up, those developed by the US federal government tend to be used as a starting point. Because of this, at least for the sake of this article, I’m going to leverage the model published by the Department of Energy, known and referred to as the C2M2. This model specifies 10 domains as follows:
- Risk Management
- Asset, Change, and Configuration Management
- Identity and Access Management
- Threat and Vulnerability Management
- Situational Awareness
- Information Sharing and Communications
- Event and Incident Response, Continuity of Operations
- Supply Chain and External Dependencies Management
- Workforce Management
- Cybersecurity Program Management
Within each domain, the C2M2 provides for up to four sub-levels, referred to as Maturity Indicator Levels, or MILs, as follows:
- MIL0 – No practices performed
- MIL1 – Initial practices performed, but usually ad-hoc
- MIL2 – Practices are fully documented, with stakeholders, standards, and supporting resources, in addition to being more developed than MIL1
- MIL3 – Practices are fully governed, managed, monitored, and enforced, often with automation or tools to facilitate
As a professional consultant, I’ve helped a lot of organizations assess or remediate various elements of their infrastructure security over the years. More often than not though, by the time customers have engaged my services, it’s in response to an incident or event of some sort, and they’ve decided that their problem is AD related, or that they need another monitoring tool, or an SSO solution, or any of a dozen other things. Don’t get me wrong, anything that progresses the maturity level of any aspect of an organization’s security framework has merit, but I personally feel it’s the wrong place to start since, as any good engineer or architect will assuredly tell you, in order to have a strong and lasting result, you have to start with a solid base, then build up from there. What organizations seem to be consistently missing is that base…they just want to keep piling on more layers.
While Risk Management might seem like the logical place to start, this blog is a bit more focused on infrastructure and, as you may have surmised from the title, I’m starting with Asset, Change, and Configuration Management instead. It’s sad to say, but the truth is that many environments I have assessed in the past would not have fared well had my focus been on asset and configuration management. Oh sure, some organizations possessed at least some degree of inventory. Others might even have had security assessments on each platform or product in the environment. Fewer still had some degree of ongoing maintenance and management of their asset inventory, or maybe even some of their configuration. Over the course of nearly two and a half decades of working in IT, however, I have only ever run across two organizations that had even a halfway viable approach to configuration management, and even they would not have been considered fully mature in this area.
Right now, you might be thinking to yourself, ‘Okay Chris, sure, inventory and configuration management is important, but it can’t be that big a deal.’ I would argue that quite the opposite is true: organizations can’t afford not to spend time on it…in fact, I think it’s the critical base that most organizations are missing. The question is, how can it be done the right way to have the best and most lasting effect? So, let’s try and break down this specific domain, and see if we can’t find some answers.
Asset Inventory
At its most basic maturity level (MIL1), this should translate to the organization having a complete inventory of all the hardware and software in use within the organization. At minimum, this inventory should map each item to the business function(s) it supports. To reach the second level of maturity (MIL2), inventory data must also include attributes that support the cybersecurity strategies of the organization: things such as technical and business owners, data classifications and flows, external dependencies, and business continuity information like criticality tier, SLAs, escalation paths, and maintenance schedules. Reaching the third, and final, level of maturity (MIL3) requires that organizations have detailed governance related to assets and inventory, and that inventory data be actively maintained, instead of only being updated annually or in response to events. This means that not only automation, but monitoring checks and balances, must be in place to ensure that expedience is not allowed to derail updates and that the data remains trustworthy.
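To make that concrete, here’s a minimal sketch of what a normalized inventory record might look like, expressed as a Python dataclass. The field names and tiers are my own illustrative assumptions, not anything prescribed by C2M2, but they map roughly to the MIL1 through MIL3 attributes described above:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssetRecord:
    """A single normalized inventory entry (illustrative fields only)."""
    asset_id: str                    # unique key other systems can reference
    name: str
    asset_type: str                  # e.g. 'server', 'workstation', 'application'
    business_functions: list[str]    # MIL1: map each asset to what it supports
    technical_owner: str             # MIL2 attributes begin here
    business_owner: str
    data_classification: str         # e.g. 'public', 'internal', 'restricted'
    criticality_tier: int            # 1 = most critical
    sla: str
    external_dependencies: list[str] = field(default_factory=list)
    last_validated: date | None = None  # MIL3: prove the data is actively maintained

# Example record tying the asset back to the business function it serves
payroll_db = AssetRecord(
    asset_id="SRV-0042",
    name="payroll-db-01",
    asset_type="server",
    business_functions=["payroll processing"],
    technical_owner="dba-team",
    business_owner="finance",
    data_classification="restricted",
    criticality_tier=1,
    sla="99.9% / 4h restore",
    last_validated=date(2024, 1, 15),
)
```

The exact schema matters far less than the fact that it exists, is consistent, and can be queried by other tools.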
While some organizations I’ve encountered might have a basic inventory for their hardware, and maybe even their software, it’s rarely something that can be effectively leveraged for this purpose. The records may be part of a purchasing system if related to hardware or purchased software products, but they’re rarely recorded in a normalized data set, and generally can’t be effectively leveraged by other tools. Even were it available, the organization would still need something else to track virtual and cloud-based systems and in-house developed software. Even if the organization wants to perform some sort of chargeback, it still doesn’t make sense to add such systems and software to a purchasing data source, though such a system could perhaps be used as a feeder to another solution.
Asset Configuration
The second sub-domain is related to managing asset configuration, with the base level of maturity (MIL1) requiring that organizations have baseline configurations, also referred to as ‘as built’, defined for all inventoried assets. These baselines must be sufficiently maintained to allow redeployment of the related assets, or to deploy new instances or expand assets for a given function. Achieving the second level (MIL2), requires that all configurations take security objectives into account. Ideally, these security objectives should be specific to the function, data classification, and types of access, though this isn’t strictly required by the model. To reach the third level (MIL3), assets should be monitored for deviations from the defined configurations, and the configurations should be reviewed and updated on a regular basis.
In this context, ‘configuration’ refers to more than just the installation steps and the values supplied; it refers to every element of the configuration. This means things like IPs, firewall rules, service accounts, access control groups, software versions (both the main product and any locally installed dependencies), and essentially anything else required to enable the product to work. In many of my prior engagements, most organizations kept this type of information within documents, if at all, though most chalked this up to ‘tech debt’ that they’d get to ‘eventually’…maybe…if/when they have time. I have to believe that, at least on some level, they know this is never going to happen but, even if it did, the challenge is that data locked in a document is nearly impossible to use. You can’t use a document to monitor the environment for deviations from the baseline, and even attempting to assess the environment takes more time because you must first ingest the contents of the document before you can use it.
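To illustrate why structured data matters here, consider how simple drift detection becomes once the ‘as built’ state lives in a machine-readable form instead of a document. This is only a sketch with hypothetical keys, but the principle scales:

```python
# Minimal drift check: compare a captured live configuration against the
# recorded baseline. Both are plain dicts here; in practice they would be
# pulled from the CMDB and from the asset itself.

def find_drift(baseline: dict, live: dict) -> dict:
    """Return {key: (expected, actual)} for every deviation from baseline."""
    drift = {}
    for key, expected in baseline.items():
        actual = live.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    # Flag settings present on the asset but never recorded in the baseline
    for key in live.keys() - baseline.keys():
        drift[key] = (None, live[key])
    return drift

baseline = {
    "ip_address": "10.0.5.20",
    "service_account": "svc-payroll",
    "firewall_rules": ("allow 443 from 10.0.0.0/8",),
    "app_version": "2.4.1",
}
live = {
    "ip_address": "10.0.5.20",
    "service_account": "svc-payroll",
    "firewall_rules": ("allow 443 from 10.0.0.0/8", "allow 22 from any"),
    "app_version": "2.4.3",
}

for key, (expected, actual) in find_drift(baseline, live).items():
    print(f"DRIFT {key}: expected {expected!r}, found {actual!r}")
```

Try doing that against a Word document.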
At best, most organizations I’ve encountered just review the configurations as part of an annual risk review process. More often than not, these review processes are just the admins making some sort of minor update so the record shows a change and ticks the box, if even that much. Some organizations may also implement some sort of change monitoring tool, though the effectiveness of such tools without a structured reference point is questionable. Most such tools are either too narrowly focused or far too general in nature, or are too costly to deploy broadly. Even when such tools are in place, virtualization and cloud computing often result in information overload, with so many systems spinning up that it’s impossible to effectively track the environment.
Change Management
The third sub-domain is related to managing how change is introduced to the environment, with the base level (MIL1) requiring that all changes to inventoried assets are evaluated and logged. Achieving the second level of maturity (MIL2) requires that all changes undergo testing prior to deployment, and that the asset is managed throughout its lifecycle. Achieving the third, and final, level of maturity (MIL3) requires that all changes not only be tested, but that they are specifically evaluated against defined security initiatives and practices. Any changes that might affect the availability, data classification, or other security-related elements must be thoroughly documented as well.
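As a rough sketch of what an evaluation gate like this might look like in code, consider the following; the field names and rules are assumptions for illustration, not taken from any particular change management product:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    change_id: str
    asset_id: str
    description: str
    tested: bool                       # MIL2: testing prior to deployment
    alters_availability: bool = False  # MIL3: security-relevant flags
    alters_data_classification: bool = False
    security_review_notes: str = ""

def evaluate(change: ChangeRequest) -> list[str]:
    """Return a list of blocking findings; empty means the change may proceed."""
    findings = []
    if not change.tested:
        findings.append("change has not been tested prior to deployment")
    security_relevant = (change.alters_availability
                         or change.alters_data_classification)
    if security_relevant and not change.security_review_notes:
        findings.append("security-relevant change lacks documented review")
    return findings

cr = ChangeRequest("CHG-1031", "SRV-0042", "upgrade app to 2.4.3",
                   tested=True, alters_availability=True)
for finding in evaluate(cr):
    print(f"BLOCKED: {finding}")
```

The point isn’t the specific rules, it’s that the rules run the same way on every change, every time.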
As with the other entries in this post, I rarely see full adoption of this tenet from the organizations I have worked with. At best, organizations might have a change board that generally reviews changes prior to implementation. More often than not though, the appropriate business stakeholders are not actively involved in reviewing changes, and the impacts of changes are rarely fully understood by all of the approvers. It’s also hit or miss as to whether Risk Management or Information Security is even involved in the change management process. As for testing, if it’s done at all, it’s usually done on a system within the production domain, rather than in a proper test environment.
So What To Do?
So, what’s the answer here? What’s the right core enabling technology that will allow an organization to progress? What’s the right starting point? To answer that, let’s take a page from the example of identity management for a moment, even though that’s a different domain, and likely the subject of another post.
An increasing number of organizations have embraced the idea of identity being fully driven from an authoritative source, such as an HR database solution. Automation is deployed that can consume this information and, at a minimum, can be leveraged to provision and deprovision accounts for employees in a timely manner. Some organizations have even gone so far as to extend this to include non-employees, such as contractors or consultants. The most mature organizations can, at least to some degree, also use this data to assign access. Nearly every organization I’ve spoken with generally acknowledges that this is a good idea, and even a critical step in their organizational growth. All identity should indeed come from an authoritative source, and should be managed in an automated manner. If we extrapolate from this, why isn’t the same mentality applied to all provisioning and deprovisioning activities…systems, service accounts, and even services?
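To illustrate the pattern, here’s a minimal reconciliation loop of the sort identity automation relies on; the HR feed and directory here are just stand-in sets, but the same converge-toward-the-source logic applies equally to systems, service accounts, and services driven from a CMDB:

```python
# Reconcile a managed system against an authoritative source: anything in
# the source but not the managed system gets provisioned; anything in the
# managed system but not the source gets deprovisioned.

def reconcile(authoritative: set[str], managed: set[str]) -> tuple[set[str], set[str]]:
    to_provision = authoritative - managed
    to_deprovision = managed - authoritative
    return to_provision, to_deprovision

hr_feed = {"alice", "bob", "carol"}       # stand-in for the HR database
directory = {"bob", "carol", "mallory"}   # stand-in for AD / IdP accounts

provision, deprovision = reconcile(hr_feed, directory)
print("provision:", sorted(provision))       # ['alice']
print("deprovision:", sorted(deprovision))   # ['mallory']
```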
Obviously your HR database isn’t the right solution to act as an authoritative source to drive provisioning and deprovisioning for these other item types, but you should absolutely have such an authoritative source. Sure, you may have a process for requesting accounts, and maybe you keep those in your ticketing solution, or maybe as part of the risk record for a given item. You may also have inventory solutions that offer some degree of tracking as well. The problem with most of these data sources, however, is that they typically can’t be leveraged as an effective automation driver, even if they happen to be normalized and sufficiently available. Generally speaking, you can’t even create relationships between records, particularly across disparate systems.
As most professionals will probably tell you, the first step in assessing the risk of a system, a change, an access request, or even the potential impacts of a given project, is definitively knowing what’s in your environment. This is the reason why nearly all of my engagements start with discovery, but no random collection of data will suffice. To be effectively used as a source to drive automation, it is critical that such data be normalized, regularly validated, and then made broadly available in a secure manner to the rest of the organization. It must become the base from which all other systems and services are provisioned, configured, and changed. This is precisely the purpose of a configuration management database, or CMDB.
Imagine, for a moment, a world where you know exactly what would happen if you changed a service account password, or exactly what type of exposure you’d face if a particular system were compromised. What if you could know exactly where to place the heaviest monitoring to achieve the best effect? What if you could actually quantify the risk of a platform or change to the environment using verifiable data, instead of nebulous supposition? This is what a well-managed CMDB and asset lifecycle process can provide.
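That ‘what would happen if’ question is just graph traversal once relationships actually live in the CMDB. Here’s a minimal sketch with a made-up dependency map:

```python
from collections import deque

# Hypothetical CMDB relationships: each key depends on the listed items.
depends_on = {
    "payroll-app": ["payroll-db-01", "svc-payroll"],
    "payroll-db-01": ["svc-payroll"],
    "hr-portal": ["payroll-app"],
}

def impacted_by(item: str) -> set[str]:
    """Breadth-first walk of everything that (transitively) depends on item."""
    # Build the reverse edges: item -> things that depend on it
    dependents: dict[str, list[str]] = {}
    for node, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(node)
    impacted, queue = set(), deque([item])
    while queue:
        for node in dependents.get(queue.popleft(), []):
            if node not in impacted:
                impacted.add(node)
                queue.append(node)
    return impacted

# Everything exposed if the svc-payroll account is changed or compromised
print(impacted_by("svc-payroll"))  # {'payroll-db-01', 'payroll-app', 'hr-portal'}
```

Without the relationship data, answering that same question means interviews, tribal knowledge, and guesswork.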
Now I know what you’re probably thinking…it’s the same sorts of excuses I hear all the time. ‘We tried that once and it didn’t work’, or ‘That will never work in our environment because of our [insert culture/organization/structure/etc item here]’, or my personal favorite, ‘We agree it’s important, but we just don’t have time to do that’. Meanwhile, nearly every other major transformation project has required gathering some, if not all, of this same data not once, but likely multiple times over. The simple fact is that every item that can be introduced to your environment without oversight, procedure, or controls adds exposure and risk, so how can you afford not to do it?
The only way to effectively use a CMDB is to start by doing it right the first time. Establish the required governance, stakeholders, and processes. Decide which systems will leverage your CMDB, as well as who will own the individual records of each type. Build out your organizational lifecycle policies and begin socializing the planned shift in direction. By the time you’ve done all that, you should have a fairly good idea of what the requirements are for your CMDB solution…find the right one that meets the needs of your organization. Believe it or not, this is the hardest part; if you do it right, the setup and upkeep are not.
So, what does ‘doing it right’ look like, outside of everything I’ve already listed? First, you make it a hard cutover for each system…when you cut over system provisioning, for example, you must take away the ability to directly add systems to the domain from all admins by default, instead requiring that the CMDB be the authoritative source, and you regularly audit the processes used to deploy systems. If the only way to get a new workstation, server, kiosk, etc. onto the domain is to run it through a lifecycle process culminating in a CMDB entry, people will update the CMDB. If you also implement IaC for configuring and changing the systems and services within your environment, and take away the direct ability to make changes, you again ensure that all changes are fully vetted, documented, and securely implemented in a consistent manner. Continue this process until everything is tied into it…helpdesk ticketing, change management, support cycles…everything.
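As a hedged sketch of that hard cutover idea, here’s what gating a domain join behind the CMDB might look like; cmdb_lookup() is a placeholder for whatever query your actual CMDB exposes, the point being that provisioning fails closed when no authoritative record exists:

```python
# Gate domain-join behind a CMDB entry. No record, no join.

APPROVED_STATES = {"approved", "in-service"}

def cmdb_lookup(asset_id: str) -> dict | None:
    """Stand-in for a real CMDB query; returns the record or None."""
    records = {"SRV-0042": {"lifecycle_state": "approved", "owner": "dba-team"}}
    return records.get(asset_id)

def join_domain(asset_id: str) -> None:
    record = cmdb_lookup(asset_id)
    if record is None:
        raise PermissionError(f"{asset_id}: no CMDB record; provisioning denied")
    if record["lifecycle_state"] not in APPROVED_STATES:
        raise PermissionError(f"{asset_id}: lifecycle state "
                              f"{record['lifecycle_state']!r} not approved")
    print(f"{asset_id}: joining domain on behalf of {record['owner']}")

for asset in ("SRV-0042", "SRV-9999"):
    try:
        join_domain(asset)
    except PermissionError as err:
        print("DENIED:", err)
```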
Don’t mistake me…I offer you no illusions that this will be an easy process. Just making the shift to an IaC-based approach will take time to adjust to…creating configurations, deploying the supporting infrastructure, and learning the required skills won’t happen overnight. What truly adopting a CMDB will do, however, is establish an extremely solid base from which to build everything else.