
Implementing Recon Over Time

If you’re trying to find bugs on bug bounty programs consistently, then automation and especially asset monitoring is something you may want to look into.

In this blog post, we will discuss various tools and methodologies used by many of the top hackers you often see on the leaderboards: with enough effort, you can become one of them too!



As briefly mentioned in my previous blog post on CIDR scanning, “Recon over time” basically means monitoring bug bounty assets continuously to spot differences and new hosts when they come online.

By doing so, it’s easy to gain a significant edge over other hackers who simply run scans periodically without keeping track of changes (point-in-time recon).

While this topic has received more attention lately (thanks to Codingo and his Fundamentals of Bug Bounty video for highlighting how to start a pipeline), it remains elusive for many hunters.


Sending millions of DNS queries and HTTP requests from home isn’t usually a great idea. As a result, most researchers rely on cloud providers.

It’s essential to find a cloud provider that is research-friendly, as you may encounter some abuse reports when scanning. These are easy to resolve if you’re acting in good faith.

Some of the most popular ones include DigitalOcean, Linode, Vultr, Contabo, and BuyVM.

There are three types of infrastructure that are broadly used: 

  • Single VPS boxes: barebone machines, usually running a Linux distro such as Ubuntu. Good for simple tasks and quick prototyping.
  • VPS fleets: A set of single VPS boxes working together to complete a task. Common tools for the job are Axiom and Fleex. While not exactly the same, ShadowClone is also worth mentioning as it uses AWS Lambda to distribute tasks using serverless functions (bonus: you get fresh IP addresses every time).
  • Kubernetes clusters: Likely the more advanced setup, and the hardest one to get up and running. To this date, there are no easy scripts specifically made for bug bounty hunting and you’ll likely have to set up everything on your own. For affordable, research-friendly k8s hosting, Scaleway is commonly used.

In the sections below, we will mostly discuss topics that apply to single boxes and fleets.


Building the Recon Pipeline

One thing I learnt the hard way is that simple things often work better than more advanced ones, and this is especially true in bug bounties.

If you’re getting started with recon over time, I recommend spending more effort on what your pipeline does instead of how well it is built. This also means that you should likely use existing public tools whenever possible instead of reinventing the wheel just because you think you could code something better.

For these reasons, chaining public tools with bash wrappers is usually enough to have something good to start with, which will also help you to figure out how to move forward.
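As an illustration, a minimal bash wrapper chaining a few popular public tools might look like the sketch below. It assumes subfinder, dnsx, and httpx (ProjectDiscovery tools) are installed; the output directory layout and the graceful-skip behavior are my own choices, not a prescribed setup.

```shell
#!/usr/bin/env bash
# Minimal recon wrapper sketch: enumerate subdomains, resolve them,
# then probe for live HTTP services. Tool names are real ProjectDiscovery
# tools, but flags and paths should be adapted to your own setup.
set -euo pipefail

domain="${1:-example.com}"
outdir="recon/$domain"
mkdir -p "$outdir"

run_step() {
  # Run a tool only if it is installed, so the pipeline degrades gracefully.
  local tool="$1"; shift
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" "$@"
  else
    echo "[!] $tool not found, skipping" >&2
  fi
}

run_step subfinder -d "$domain" -silent > "$outdir/subs.txt"
run_step dnsx -l "$outdir/subs.txt" -silent > "$outdir/resolved.txt"
run_step httpx -l "$outdir/resolved.txt" -silent > "$outdir/alive.txt"

echo "[*] results written to $outdir"
```

Each stage writes a plain text file, which makes the next section (tracking changes) much easier: flat files diff and commit cleanly.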

Not sure which tools to use? Check a few below: 

Following fellow bug bounty hunters on GitHub may be useful as well, since you’ll see in your dashboard when a fancy new tool comes out.


Scheduling Tasks

Since scanning should be a continuous activity, your pipeline should be configured to run 24/7. Once again, if you’re just starting out, you likely want to keep this simple.

You can either use cron jobs, systemd services, or even a simple while true loop in a tmux session.
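For instance, the cron option is a single line in your crontab; the schedule, script path, and log path below are placeholders to swap for your own.

```shell
# crontab -e: run the pipeline every 6 hours and keep a log.
# All paths here are hypothetical examples.
0 */6 * * * /opt/recon/pipeline.sh >> /home/user/recon.log 2>&1

# The tmux alternative is even simpler: inside `tmux new -s recon`, run
#   while true; do /opt/recon/pipeline.sh; sleep 21600; done
```

Cron survives reboots (unlike a tmux session) but silently skips a run if the previous one is still going, so pick whichever failure mode bothers you less.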


Tracking Changes

No matter what tools you’re running, you want to keep track of any change that happens.
Doing this efficiently may not be trivial, especially because some types of scanning produce a lot of noise. For example, if you’re monitoring HTTP titles, one of your requests may get a 429 (Too Many Requests) response, a transient event related to the target’s rate limiting rather than to a genuine change.

There’s no easy solution to this, but with some scripting, you should be able to reduce noise significantly.
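As a small sketch of that scripting, volatile results can be filtered out before diffing with a simple grep. The sample file and the httpx-style "url [status]" line format below are assumptions; adjust the pattern to whatever your tools actually emit.

```shell
#!/usr/bin/env bash
# Drop transient results (rate limiting, temporary errors) before diffing,
# so they don't show up as fake "changes" between scans.
set -euo pipefail

# Simulated scan output; in practice this comes from your pipeline.
cat > scan.txt <<'EOF'
https://a.example.com [200]
https://b.example.com [429]
https://c.example.com [503]
EOF

# Keep everything except known-noisy status codes.
grep -vE '\[(429|503)\]' scan.txt > scan.clean.txt
```

A deny-list of status codes won’t catch everything, but it removes the most common flapping before you diff consecutive scans.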

As for where to save these changes, text files will be good to start with.
A good idea is pushing such files to a private GitHub repo so that you will have a commit history that allows you to easily see what has changed and when.
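A rough sketch of that workflow, using git locally (the repo path, file names, and sample hosts below are all hypothetical; pushing to a private GitHub remote is the same flow plus a `git push`):

```shell
#!/usr/bin/env bash
# Track scan output in a git repo so every run becomes a timestamped snapshot.
set -euo pipefail

repo="recon-data"
mkdir -p "$repo"
git -C "$repo" init -q
git -C "$repo" config user.email "recon@example.com"  # needed on a fresh box
git -C "$repo" config user.name  "recon-bot"

# Simulate one scan run; in practice your pipeline writes this file.
printf 'a.example.com\nb.example.com\n' | sort -u > "$repo/hosts.txt"

# Surface hosts that are new since the last committed scan (comm needs
# sorted input, hence the sort -u above).
if git -C "$repo" show HEAD:hosts.txt > prev.txt 2>/dev/null; then
  comm -13 prev.txt "$repo/hosts.txt" > "$repo/new-hosts.txt"
fi

git -C "$repo" add hosts.txt
git -C "$repo" commit -qm "scan $(date -u +%F-%H%M)" || true  # no-op if unchanged
```

With this in place, `git log -p hosts.txt` shows exactly what changed and when, for free.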

Over time, if you find that text files aren’t meeting your needs, you can always evolve your pipeline to more advanced storage systems such as a Database Management System (DBMS).


Further Things to Think About

Here are some other things you should take care of when implementing your own recon over time pipeline:

  • Scope: How do you keep a fresh list of targets that you’re allowed to hack on? Start from tools such as bbscope and bounty-targets-data. Can you do better and find even more assets?
  • Rate limits: Some programs enforce rate limits in order to avoid excessive overload on their infrastructure. It’s important to respect them. Automation is cool, but only when it doesn’t cause issues.
  • What to actually scan for: Be creative! If you do what everyone else is doing, you’ll get many duplicates. Sure, you can try to be faster than others, but it gets more interesting when you combine your automation with your own research and unique ideas.

That was it for today! Hopefully, the knowledge shared here will help you to avoid some common pitfalls and get you to find bugs consistently sooner. If you ever want to discuss automation or related topics, feel free to contact me on Twitter!

About the author (@sw33tLie / Paolo Arnolfo)

I’ve always been passionate about hacking and computers, and I started hunting on Bugcrowd consistently during the Covid pandemic. Since then, I’ve been lucky enough to talk to many people and meet new friends along the way thanks to bug bounty platforms. In 2021, I was one of the winners of Bugcrowd’s #teamhunt2021 hacking event. I enjoy talking about automation, parsing differentials, and other weird quirks of the HTTP protocol.
