Inside-out: How organizations typically defend their digital ecosystem.
Outside-in: How attackers actually operate.
In other words, while organizations work to secure priority assets, attackers are more focused on whatever fell off the radar. Unknown or un-prioritized assets become ticking time bombs when they fail to receive routine maintenance and vulnerability patching, creating opportunity for ill-intentioned hackers to strike. And these aren't passive risks: Gartner predicted that around 30% of successful attacks this year would be against shadow IT. To solve this challenge, we've seen a marked uptick in scanning solutions designed to automatically identify vulnerable or otherwise forgotten assets. But is this enough for organizations to combat dynamic, motivated attackers looking to do the same?
In the first entry of our latest series on reducing unknown attack surface, we’ll explore some of the challenges automated discovery tools face in helping organizations truly reduce risk.
The story you probably haven’t heard
In 2017, Equifax announced a data breach that exposed the personal details of 147 million Americans. In the years since, security vendors have referenced this iconic incident as a premier case study on the importance of knowing the whereabouts, value, and accessibility of your data. We all know the story. Or do we?
What you may not have known is that Equifax actually did know about the Apache Struts vulnerability before the now-infamous breach. The excerpt below is taken from the now-public hearing, Federal Trade Commission, Plaintiff, v. Equifax Inc., Defendant. It mentions Equifax's heavy reliance on automated vulnerability scanners, and specifically notes two gaps: 1) Equifax failed to maintain a registry of the public-facing technology it owned that matched the technologies known to contain the Apache Struts vulnerability, and 2) Equifax failed to configure its scanners to search potentially vulnerable public-facing assets.
Without either mechanism in place, Equifax failed to find or patch the vulnerability in an unseen asset before it was maliciously exploited, as reinforced below:
It’s clear the problem here wasn’t just awareness of external risks, it was awareness of at-risk assets. And while Equifax was doing much in the way of vulnerability scanning within its known footprint, who/what was looking outside of that?
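Gap (1) above is, at its core, a cross-referencing problem: mapping an inventory of owned, public-facing technology against products known to be vulnerable. A minimal sketch of that check follows; the hostnames are invented, and the version range is a simplified stand-in for a real advisory (a production check would use CPE identifiers and full CVE feeds):

```python
# Sketch: flag inventory entries whose product/version falls inside a
# known-vulnerable range (loosely modeled on the Apache Struts 2 issue).
# Asset names and the exact range below are illustrative, not real data.

def parse_version(v):
    """Turn '2.3.31' into a comparable tuple (2, 3, 31)."""
    return tuple(int(part) for part in v.split("."))

# Known-vulnerable ranges: product -> (min_inclusive, max_inclusive)
VULNERABLE = {
    "apache-struts": (parse_version("2.3.5"), parse_version("2.3.31")),
}

def flag_vulnerable(inventory):
    """Return assets running a product/version in a vulnerable range."""
    flagged = []
    for asset, product, version in inventory:
        rng = VULNERABLE.get(product)
        if rng and rng[0] <= parse_version(version) <= rng[1]:
            flagged.append(asset)
    return flagged

inventory = [
    ("dispute-portal.example.com", "apache-struts", "2.3.30"),  # in range
    ("www.example.com", "apache-struts", "2.5.13"),             # outside range
    ("api.example.com", "nginx", "1.12.0"),                     # not tracked
]

print(flag_vulnerable(inventory))  # -> ['dispute-portal.example.com']
```

The check itself is trivial; the hard part, as the Equifax case shows, is having an accurate inventory to run it against in the first place.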
Not all scanners are created equal
Scanners do help cover more ground than humans can manually. But what does "cover" mean? Some are designed to scan for vulnerabilities strictly within the asset-set you define, or to manage and monitor those assets as they evolve; Equifax was using such technology. Others are designed to perform reconnaissance for external-facing connected IT you may have forgotten. Whether rules-based, ML-driven, or AI-driven, all promise to reduce the time and resource drain otherwise required to perform these functions by hand.
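The distinction between the two scanner types comes down to the universe of assets each considers. A toy illustration of why that scope matters (all hostnames invented, with reconnaissance results hard-coded as a stand-in for DNS or certificate-transparency lookups):

```python
# Sketch: a scope-bound vulnerability scanner only ever inspects the
# assets you hand it; a discovery-oriented tool compares what is
# actually reachable externally against that managed list.

managed_assets = {
    "www.example.com",
    "mail.example.com",
}

# What external reconnaissance actually turned up (in practice: DNS
# enumeration, certificate transparency logs, IP range sweeps, etc.).
externally_visible = {
    "www.example.com",
    "mail.example.com",
    "staging-old.example.com",   # forgotten staging box
    "acquired-app.example.net",  # came in via an acquisition
}

# A scope-bound scanner never looks beyond managed_assets, so the
# shadow IT a discovery tool is meant to surface is simply:
unknown_assets = externally_visible - managed_assets

print(sorted(unknown_assets))
# -> ['acquired-app.example.net', 'staging-old.example.com']
```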
Organizations should, however, weigh the inherent limitations of automated solutions to determine whether the resulting tradeoff between impact and effort will still meet security objectives, or whether a hybrid approach, such as one that strategically incorporates human expertise, might be a better fit for their needs. The below summarizes some such considerations:
1. Serious lag-time
Let's start with something that might sound counterintuitive. The fundamental value proposition of most scanners is continual insight into your attack surface, saving time and resources in the process. You will almost certainly save hours of manual effort, but rapid time-to-value is less certain. Unless the scanner in question utilizes continually updated, pre-indexed data, you may be forced to wait up to a month for an initial scan to complete. That delay renders most scanners useless for many time-critical use cases, including M&A.
2. Reliance on known attack patterns

Scanners are designed to apply encoded logic or learning frameworks at scale: to cover more ground, with less overhead. For most solutions (other than some relying on machine learning), any activity identified as malicious or questionable is derived from what we call "known-knowns," or patterns that have already been identified as warranting further analysis. And while some of the better tools in-market were originally developed by very skilled members of the hacking community, staying abreast of the most recent attack methods is still a challenge. In fact, it often takes years before the latest techniques are validated, tested, and incorporated. "Getting a jump on attackers" is almost assuredly not in the cards for organizations relying on such solutions.
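The "known-knowns" limitation is easy to make concrete: a signature-based check only fires on patterns someone has already encoded, so a genuinely novel technique passes silently. A simplified sketch (these patterns are invented stand-ins, far cruder than production detection rules):

```python
import re

# Sketch: signature matching over request paths. Anything not already
# encoded as a pattern -- i.e. a novel technique -- is never flagged.
KNOWN_BAD = [
    re.compile(r"\.\./\.\."),           # path traversal probe
    re.compile(r"(?i)union\s+select"),  # SQL injection probe
]

def is_suspicious(request_path):
    """Return True only if the path matches an already-known pattern."""
    return any(p.search(request_path) for p in KNOWN_BAD)

print(is_suspicious("/files?name=../../etc/passwd"))    # -> True
print(is_suspicious("/api?q=1 UNION SELECT password"))  # -> True
# A brand-new technique with no signature yet goes unflagged:
print(is_suspicious("/api?payload=some-novel-exploit"))  # -> False
```

Machine-learning approaches loosen this constraint somewhat, but they too are trained on historical examples, so the lag between a new technique and reliable detection remains.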
3. Lack of business context and inability to make logical pivots
The technology landscape for large organizations is often structured in exceedingly complex ways, and no two businesses look the same. Automated scanners are highly susceptible to getting lost in this maze of interconnectivity, unable to make sense of logical business structure and priorities. While training is an important requirement for any such technology, it's also time-consuming, and never a one-off: organizations are constantly evolving, abandoning behavioral trends as quickly as they establish them. This often limits a scanner's scope and attention to particular areas, or conversely, makes it difficult to develop a trusted baseline.
4. Inability to safely verify, and prioritize
While many scanners are tuned to identify assets that may be vulnerable, it is next to impossible for them to verify the accuracy of those initial assessments without serious risk. Scanners often have no concept of scope, nor of the implications of various tests across multiple scenarios, where proof of exploitation could cause significant business or security risk to production environments. As a result, scanners are also highly limited in their ability to truly prioritize discovered assets. A rollup of 3,000 newly discovered assets, pulled from the shadows, is only as useful as your ability to action them. Scanners can provide preliminary estimates of risk, but the false positive rate is typically much higher than is useful.
5. Most attackers have built the same, or better
The good news about automated scanners is that they're designed to rapidly uncover potentially connected assets faster than humans alone can achieve. The bad news: you're not the only one using them. Attackers use, and frequently develop themselves, tools that rival and often surpass the power of any commercially available scanner in-market today. In fact, while you might have the resources to deploy one or two, the hackers mapping your attack surface (by the thousands) often use 5-10 or more different scanning technologies, creating a serious disadvantage for defenders everywhere.
Filling the gaps
A recent study by Bugcrowd & ESG found that the average number of unprotected or unmonitored assets per organization was more than 400, often due to some combination of resource scarcity and the inability to correctly account for shadow, legacy, SaaS, or acquisition-related assets.
However, organizations looking for a way to quickly alleviate the burden of covering and managing this growing set of liabilities should think twice about the methods they trust to do so.
Timeliness, variable business logic, accuracy, and the latest methods of attack are all sensitive topics for today's automated scanning solutions. And as businesses increasingly undertake initiatives that further complicate their IT landscape, like business transformation and M&A, gaps in scanner efficacy widen.
While these solutions were broadly intended to replace human effort, it seems their greatest strength may actually lie in supporting it. Crowdsourced security solutions like Bugcrowd Attack Surface Management combine the scale of scanners with the human ingenuity needed to find, validate, and prioritize assets before malicious attackers do. For more on why scanners aren't suited to in-depth security review, check out this webinar hosted by hacker turned Bugcrowd Head of Security Operations Michael Skelton, stay tuned for next week's blog, or contact us today!