Note: This is part 4 of a 5-part series in which we examine a smarter approach to attack surface management. Catch up on last week’s post first.
Attack surface is evolving faster than ever. If you think of a business like a human being (living, breathing, thriving, changing), then it might be easier to conceptualize the rate of change for the modern attack surface. Consider the software and systems required to manage your business. Now add a layer of complexity for the level of customization you have applied to each. Now think quick: who has access? Has that ever changed? If so, what does that change management process look like? Accountabilities are easy to track initially, until turnover, growth, mergers, digital transformation, and other perfectly normal business operations muddle it all up.
If you’re reading this blog expecting me to reject software-based solutions outright, you’ve come to the wrong place. Automation in discovering and reducing attack surface is crucial for tackling this problem at scale. I’ve built several such tools myself and use another dozen or so open source varieties. But automation without human intuition is an incomplete solution, and in the race against ever-evolving attack methodologies, you want to be firing on all cylinders. That’s patently obvious to me and other reconnaissance researchers, but I realize it may not be equally clear to others outside this profession. So in this blog I’m going to share five examples that illustrate how the human element wins over pure software-based solutions.
1. Scope – Telling a software-based solution what to test is easy. Telling it what not to test is easy. Having it identify things that might belong to you, and then making a judgment call about whether to perform an exploit… is a liiiiittle stickier. It’s a gray area that many scanning solutions refuse to enter for fear of getting it wrong. By using scanners to identify the low-hanging fruit, humans are left free to focus on the edge cases: identified assets that can only be positively attributed to the organization through manual investigation.
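To make that concrete, below is a minimal sketch (Python, standard library only) of one attribution signal a researcher might script during that manual investigation: checking whether a candidate host’s TLS certificate names the organization in question. The host name is a placeholder, and a matching organization name is a hint rather than proof of ownership.

```python
# One human-style attribution signal: does the TLS certificate's subject
# organization match the company under investigation? A match is a hint,
# not proof -- CDNs, shared certs, and DV certs (no org field) muddy the water.
import socket
import ssl

def cert_org(host: str, port: int = 443) -> str | None:
    """Return the organizationName from a host's TLS certificate, if present."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

if __name__ == "__main__":
    # Hypothetical candidate asset; compare the result against the target org.
    print(cert_org("www.example.com"))
```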
2. Tailoring tactics by organization – Many of the activities that cause drift in your internet-facing assets happen seasonally. Think conferences, sales kickoffs, marketing campaigns, etc. For temporary activities like these, new assets must be spun up quickly and decommissioned just as fast. You know when these things need to happen, but so do your attackers.
Your digital movements have a heartbeat, and it’s not that tough to track (and eventually predict) the ebb and flow of activity through openly available indicators such as LinkedIn updates, advertising spend, and email campaigns. Attackers are just waiting for you to miss something. Enter: subdomain takeovers. As EdOverflow explains in the can-i-take-over-xyz repository:
“Subdomain takeover vulnerabilities occur when a subdomain (subdomain.example.com) is pointing to a service (e.g. GitHub pages, Heroku, etc.) that has been removed or deleted. This allows an attacker to set up a page on the service that was being used and point their page to that subdomain. For example, if subdomain.example.com was pointing to a GitHub page and the user decided to delete their GitHub page, an attacker can now create a GitHub page, add a CNAME file containing subdomain.example.com, and claim subdomain.example.com.”
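The first pass at spotting these dangling records is very scriptable. Here’s a minimal sketch, assuming the dnspython and requests libraries; the two fingerprints are illustrative placeholders only, and the community-maintained signatures in can-i-take-over-xyz are the real reference.

```python
# Minimal dangling-CNAME check: does a subdomain's CNAME point at a service
# that answers with an "unclaimed" banner? Fingerprints here are illustrative;
# see the can-i-take-over-xyz repository for maintained signatures.
import dns.exception
import dns.resolver  # pip install dnspython
import requests

FINGERPRINTS = {
    "github.io": "There isn't a GitHub Pages site here",
    "herokuapp.com": "No such app",
}

def takeover_candidate(subdomain: str) -> bool:
    try:
        answers = dns.resolver.resolve(subdomain, "CNAME")
    except dns.exception.DNSException:
        return False
    for rdata in answers:
        target = str(rdata.target).rstrip(".")
        for service, fingerprint in FINGERPRINTS.items():
            if target.endswith(service):
                resp = requests.get(f"http://{subdomain}", timeout=10)
                if fingerprint in resp.text:
                    return True
    return False

if __name__ == "__main__":
    print(takeover_candidate("subdomain.example.com"))  # placeholder host
```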
In my own hunting experience I’ve come across a top-tier, publicly traded company whose subdomain had been taken over by an attacker and turned into a cannabis storefront (likely for blackhat SEO purposes). The site had been set up for a team offsite and then decommissioned, but the DNS record remained active, allowing the attacker to claim that subdomain. Why didn’t scanners pick it up? Ultimately, because once a compromise has already occurred, most scanners will no longer flag it as a vulnerability. Scanners aren’t built to suss out brand inconsistencies, after all. But the potential public relations fallout could have caused reputational damage equal to any breach.
3. Contextualizing for less risk – Sometimes it might feel like your scanner wasn’t made for your environment; the noise and false positives can be overwhelming. That’s because your scanner wasn’t made for your environment. It was made for, and by, everyone else’s. It’s shaped by the status quo, driven by known knowns, and errs on the side of caution when it comes to making assumptions or logical pivots.
Business logic errors are great examples of issues that fall outside those parameters. Think of a checkout flow that honors a discount code twice, or a password reset that can be replayed against another account: each request looks perfectly valid in isolation, so a scanner has no reason to flag it. Given their varied nature, these typically can’t be automated. At least not yet.
4. Agility in pivoting tools and techniques – Human researchers can pivot their tools and approaches to take action sooner. When a CVE is posted it can take software vendors weeks or months to ship a check for it, but researchers begin reporting issues within hours because they can quickly refocus their tooling on the problem. Scanner vendors must route each new check through development, QA, and careful production testing to ensure their software does no harm. In many cases, very widely known issues never make it into a scanner at all.
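For a sense of scale, the throwaway probe a researcher might write within hours of an advisory landing can be a dozen lines. Everything below is hypothetical: the path, the response marker, and the host list all stand in for details a real advisory would supply.

```python
# Quick-and-dirty mass probe, the sort of thing assembled hours after a CVE
# drops. CHECK_PATH and INDICATOR are placeholders for advisory details.
import concurrent.futures
import requests

CHECK_PATH = "/vulnerable-endpoint"      # hypothetical, per the advisory
INDICATOR = "distinctive-error-banner"   # hypothetical response marker

def probe(host: str) -> tuple[str, bool]:
    try:
        resp = requests.get(f"https://{host}{CHECK_PATH}", timeout=5)
        return host, INDICATOR in resp.text
    except requests.RequestException:
        return host, False

if __name__ == "__main__":
    hosts = ["app.example.com", "staging.example.com"]  # your asset inventory
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        for host, hit in pool.map(probe, hosts):
            if hit:
                print(f"[!] {host} may be affected")
```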
Notably, a scanner can only track what you feed it, and it’s going to miss anything you don’t – especially acquisitions and domains not already tightly coupled to the scope you’ve provided it. A competent researcher understands this, and will work to tailor the scope as an engagement grows, fueling future discovery with the seeds of previously identified assets to ensure a greater depth of coverage.
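One common way researchers seed that loop is certificate transparency. Here’s a rough sketch against crt.sh’s public JSON endpoint; the seed domain is a placeholder, and in practice every new name would be manually attributed before being fed back in as a fresh seed.

```python
# Seed-driven scope expansion via certificate transparency logs. Candidates
# are printed for manual attribution; only vetted names should be re-seeded.
import requests

def expand_seed(seed: str) -> set[str]:
    """Query crt.sh for certificate names related to a seed domain."""
    resp = requests.get(
        "https://crt.sh/", params={"q": f"%.{seed}", "output": "json"}, timeout=30
    )
    names = set()
    for entry in resp.json():
        for name in entry.get("name_value", "").splitlines():
            names.add(name.lstrip("*.").lower())
    return names

if __name__ == "__main__":
    for name in sorted(expand_seed("example.com")):  # placeholder seed
        print(name)  # candidate for manual attribution before re-seeding
```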
Ultimately, the Crowd isn’t better than tooling – it’s fueled by tooling. Where a need is found, somebody, somewhere has built, or is building, a solution to meet it.
5. Digesting an expansive view of information – How do you feel about Natural Language Processing (NLP)? You interact with some form of it at least once a day, I’m sure. If your first thought was, “It’s… fine, I guess?” then that’s good, but not good enough for this use case. Scanners just aren’t designed to excel at this problem: digesting news, social media, and other contextual information wrapped around current events pertaining to your organization. Those bits of information are breadcrumbs that lead attackers toward wherever you’re not looking.
A few examples include:
- Deep nesting of events like M&A. Connecting the dots across three or four levels of activity is an art for many attackers, but the payoff can be well worth the effort as thousands of potentially vulnerable assets hang in limbo between owners.
- Review of public repos for leaked credentials or accidentally exposed source code. Pull on either of these threads and it may lead to a large repository of vulnerable assets. Identification is only one part of the issue; safely testing these findings and assessing their impact requires a manual hand. (A minimal detection sketch follows this list.)
- Joining datasets like LinkedIn, GitHub, and Stack Overflow. The challenge here lies in attributing all of these personas to one another. That’s easy with a unique name, but with a common name you may have to resort to site favicons, GitHub handles, or even cross-referencing the organization’s Twitter (see the favicon example below). In this case, automation might be possible, but it isn’t very practical: attribution changes on a case-by-case basis, so automating it inflates noise.
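On the repo-review point, here’s a toy version of the regex triage a researcher might run across a cloned repository. Both patterns and the checkout path are illustrative stand-ins; purpose-built scanners such as gitleaks and truffleHog maintain far richer rule sets, and anything flagged would still need careful manual verification of impact.

```python
# Toy secret triage over a local checkout. Patterns are illustrative only;
# tools like gitleaks and truffleHog ship far more complete rule sets.
import pathlib
import re

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]\w{20,}['\"]"),
}

def scan_repo(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Flag for manual review; never auto-test leaked credentials.
                print(f"{path}: possible {label}: {match.group()[:12]}...")

if __name__ == "__main__":
    scan_repo("./cloned-repo")  # hypothetical local checkout
```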
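And for the favicon pivot mentioned above: the usual trick is MurmurHash3 over the base64-encoded favicon, the same convention Shodan indexes as http.favicon.hash. A minimal sketch, assuming the mmh3 and requests packages and placeholder hosts:

```python
# Favicon fingerprinting: two hosts serving a favicon with the same hash are
# often operated by the same organization (Shodan indexes http.favicon.hash).
import codecs

import mmh3      # pip install mmh3
import requests

def favicon_hash(base_url: str) -> int:
    """MurmurHash3 of the base64-encoded favicon, matching Shodan's convention."""
    data = requests.get(f"{base_url}/favicon.ico", timeout=10).content
    return mmh3.hash(codecs.encode(data, "base64"))

if __name__ == "__main__":
    known = favicon_hash("https://www.example.com")      # confirmed asset
    candidate = favicon_hash("https://app.example.net")  # unattributed host
    if known == candidate:
        print("Matching favicon hash: likely the same organization")
```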
I’ll draw a hard line and assert that most businesses have no idea what their attack surface looks like. Truly. It might sound a bit unusual, but the problem is often so overwhelming that many organizations have brought on full-time reconnaissance experts to find and piece together all the missing bits. Unfortunately, this isn’t practical for many organizations. So what, then, are their options?
Bugcrowd’s Attack Surface Management portfolio helps distribute the collective creativity of the Crowd to organizations needing reconnaissance skills unique to their environment and use cases. Our Asset Risk solution provides access to the world’s best recon researchers, complete with their own kit of trusted tools and methodologies. By choosing a solution that pairs human ingenuity with software scalability, organizations have reduced unknown attack surface by up to 60% over known footprint, and 98% over seed data provided.
If you found this useful or have any questions, let’s keep the dialogue going! Tweet me at twitter.com/codingo_.
For more on how human ingenuity plays a crucial role in staying ahead of malicious attackers, stay tuned for next week’s blog, check out our website, or contact us today!