In this installment of our Hacking Cryptography series, we’ll dig into hacking crypto in hardware devices. System designers building hardware devices must overcome many unique challenges, including securing the firmware update process, enforcing authentication and authorization across all access ports, and handling cryptographic key management. Naturally, such challenges present many opportunities for bug bounty hunters. In fact, there are so many exciting attack vectors for hardware hackers to explore that it was hard to choose which to cover in this article. After much deliberation, I selected six different attacks, all of which tend to yield highly impactful bugs while still being accessible to bug bounty hunters new to the world of hardware hacking. The attacks are as follows:

  • Side-channel attacks
  • Hardcoded encryption keys
  • Fault injection attacks
  • Unprotected debug interfaces (JTAG, UART)
  • Insecure firmware and software updates
  • Lack of cryptographic authentication.

 

Side-channel attacks

Understanding the risk

Side-channel attacks exploit unintended information leaks from a cryptographic system, such as power consumption, electromagnetic emissions, or timing variations. Unlike traditional cryptanalysis, these attacks do not target the mathematical strength of an algorithm. Rather, they exploit the physical characteristics of the hardware implementation. Some common hardware hacking side-channel attacks include the following:

  • Simple power analysis (SPA)
  • Differential power analysis (DPA)
  • Electromagnetic (EM)/radiofrequency (RF) analysis.
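
Of the leak sources mentioned above, timing variation is the easiest to reason about. As a minimal illustration (the function names here are my own, not from any particular device), consider a passcode check that returns as soon as it finds a mismatched byte; its runtime leaks how many leading bytes were correct:

```python
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: runtime grows with the number of leading
    bytes that match, leaking progress to an attacker who can time it."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False  # bails out at the first mismatch
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    """Constant-time comparison via the standard library."""
    return hmac.compare_digest(secret, guess)
```

An attacker who can time naive_compare across many guesses can recover the secret one byte at a time; hmac.compare_digest examines every byte regardless of where the first mismatch occurs.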

SPA is generally only effective against consumer-grade hardware and resource-constrained IoT devices, such as the following:

  • Smart locks
  • Low-cost USB fingerprint scanners
  • Smart plugs, light bulbs, and energy monitors
  • Home office Wi-Fi routers.

From our perspective as bug bounty hunters, SPA can be a particularly fruitful attack vector because it’s relatively simple and doesn’t require any expensive tools. Oftentimes, SPA can be performed with only an oscilloscope or a logic analyzer with analog capture (e.g., a Saleae Logic Pro) and a dump of the device firmware. Suppose we have a consumer-grade smart lock and want to use SPA to figure out the unlock code. In such a scenario, we would review the firmware disassembly to identify critical branches in the code path, such as when correct and incorrect digits have been provided. We would monitor power consumption by attaching the probe to the main power rail of the microcontroller (i.e., Vcc). Through trial and error, we would build a mapping of power consumption patterns to critical code paths. We would then use these mappings to glean information about the validity of each digit that we try.
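
The trial-and-error mapping step can be sketched in a few lines. This is a simplified illustration with made-up trace data, window, and threshold values, not a turnkey attack: real traces would come from the capture hardware, and the window and threshold would be found empirically for the specific device.

```python
def mean_power(trace, start, end):
    """Average power over a slice of captured samples."""
    window = trace[start:end]
    return sum(window) / len(window)

def classify_digit(trace, start, end, baseline, margin):
    """Label a captured trace by comparing its average power in the
    post-entry window against a baseline measured for known-bad digits.
    The (hypothetical) assumption: a correct digit takes a longer
    validation branch, which shows up as elevated power draw."""
    avg = mean_power(trace, start, end)
    return "correct" if avg > baseline + margin else "incorrect"
```

Once the classifier agrees with known-good and known-bad inputs, each digit of the unlock code can be recovered one position at a time.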

 

 

In contrast, DPA is generally much more complex and often requires specialized equipment, such as the ChipWhisperer Pro (~$3,800). While SPA often requires only a handful of power consumption traces to identify patterns, DPA typically requires statistical analysis across thousands of power consumption traces. However, with the right tools, patience, and persistence, DPA can be used to achieve impressive results, such as extracting full Advanced Encryption Standard (AES) decryption keys. Some common targets for DPA attacks include trusted platform modules (TPMs), hardware security modules (HSMs), Europay, MasterCard and Visa (EMV) payment cards, and self-encrypting hard drives.
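
To make the statistical flavor of DPA concrete, here is a toy correlation power analysis (CPA, the correlation-based variant of DPA) against a simulated 4-bit cipher. The S-box is borrowed from the PRESENT cipher purely as a small stand-in for AES, and the "traces" are simulated as the Hamming weight of the S-box output plus Gaussian noise, which is the standard leakage model; everything here is illustrative.

```python
import random

# PRESENT cipher S-box, used as a toy 4-bit stand-in for the AES S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate_traces(key, plaintexts, noise=0.25, seed=0):
    """One leakage sample per encryption: HW of the S-box output plus noise."""
    rng = random.Random(seed)
    return [hamming_weight(SBOX[pt ^ key]) + rng.gauss(0, noise)
            for pt in plaintexts]

def recover_key(plaintexts, traces):
    """Rank all 16 key guesses by |correlation| between predicted leakage
    and measured traces; the correct guess should dominate."""
    scores = {}
    for guess in range(16):
        predicted = [hamming_weight(SBOX[pt ^ guess]) for pt in plaintexts]
        scores[guess] = abs(pearson(predicted, traces))
    return max(scores, key=scores.get)
```

With noise this low, a few hundred simulated traces suffice; against real hardware, thousands of aligned traces per sample point are typically needed, which is where tools like the ChipWhisperer earn their keep.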

 

ChipWhisperer Pro Starter Kit

(from: https://rtfm.newae.com/Starter%20Kits/ChipWhisperer-Pro/)

 

Lastly, EM emanations can also leak sensitive information about cryptographic operations. In the context of side-channel attacks, EM emanations are typically categorized as one of the following:

  • Radiofrequency (RF): Commonly used for wireless communications (think Wi-Fi, Bluetooth, Zigbee, and cellular)
  • Low-frequency EM fields: Typically produced by current flowing through electrical components; can be used to infer state changes during cryptographic operations (similar to power analysis)
  • Near-field EM fluctuations: Typically only detectable within a few centimeters of a chip; can be used to target specific chips/components within a hardware device
  • High-frequency EM spikes: Occur when logic gates switch states; can be used to identify specific microprocessor instructions (e.g., MUL, MOV, XOR, and SBOX_LOOKUP), which, in turn, can be used to infer information about cryptographic operations

 

As you might imagine, EM-based side-channel attacks introduce additional layers of complexity due to the fact that the signals aren’t as readily available as for other attack vectors. For example, with power analysis attacks, we do have to gain access to the internal electrical components of the target device. However, once we have overcome this hurdle, it’s relatively simple to capture the power consumption signal. In contrast, EM emanations present an interesting attack vector because we can typically capture the signal(s) without even having physical access to the target device. However, the challenge becomes separating the signal(s) that we’re interested in from all the other EM signals buzzing through the air. Fortunately for us, software-defined radio (SDR) has made such attacks much more accessible to the typical bug bounty hunter. If you’re interested in trying your hand at EM-based side-channel attacks, I would suggest picking up a HackRF or RTL-SDR device. Then head over to Side Channel Attack for lots of examples.

Real-world example: Power analysis attack on Ledger Nano S

Power analysis attacks involve monitoring the power consumption of a device during its operations to extract sensitive information, such as private keys. In May 2019, security researcher Christian Reitter reported a side-channel vulnerability affecting hardware wallets, including the Ledger Nano S: by measuring the device’s power consumption, an attacker could partially recover confidential information displayed on the OLED screen. Ledger reproduced the setup and developed countermeasures, which were scheduled for inclusion in firmware updates in Q4 2019.

References: Ledger’s Official Blog on OLED Vulnerability and Power Analysis of the Ledger Nano S (Academic Thesis)

 

Hardcoded encryption keys

Understanding the risk

Hardcoded cryptographic keys are embedded directly into firmware or software binaries rather than being securely generated at runtime. This practice makes devices vulnerable because attackers who extract firmware can easily locate and reuse these keys. Even if a key is obfuscated, techniques such as binary analysis, dynamic debugging, or firmware extraction via JTAG/UART can reveal it. This class of vulnerability can be particularly dangerous because if the same key is used across multiple devices, a single compromise can endanger an entire product line or ecosystem.

The use cases for such keys can vary greatly, but common examples include the following:

  • Firmware decryption
  • Secure boot verification
  • Session key generation
  • Encrypting/decrypting data at rest.

There are many ways to identify and extract hardcoded encryption keys, but for hackers just starting out with hardware hacking, the best approaches are firmware reverse engineering and memory analysis.

Firmware reverse engineering involves disassembling binary files using tools like binwalk, IDA Pro, Ghidra, and even simple tools like the Linux strings command. Hardcoded encryption keys can often be identified by looking for high-entropy blobs within binaries, searching for keywords (e.g., password, token, and key), and reverse engineering calls to common cryptographic functions (e.g., AES_set_key()/AES_init_key(), HMAC_Init_ex(), and SHA256_Init()). Another common location for hardcoded encryption keys is the .rodata and .data sections of firmware binaries because constants are typically stored in these sections.
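
The “high-entropy blob” heuristic is easy to automate. The sketch below slides a fixed-size window across a firmware image and flags regions whose Shannon entropy approaches that of random data; the window size and threshold are illustrative, and tools like binwalk (with its entropy scan) do the same thing more robustly.

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of the given buffer."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def find_key_candidates(blob: bytes, window: int = 32, threshold: float = 4.5):
    """Return offsets of windows that look random enough to be key material.
    With a 32-byte window the maximum possible entropy is 5 bits/byte
    (at most 32 distinct symbols), so 4.5 is a demanding threshold."""
    hits = []
    for off in range(0, len(blob) - window + 1, window):
        if shannon_entropy(blob[off:off + window]) >= threshold:
            hits.append(off)
    return hits
```

Candidate offsets still need manual triage: compressed data and certificates are also high-entropy, so cross-referencing hits against calls to cryptographic functions in the disassembly separates keys from noise.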

Memory analysis involves inspecting memory contents, whether captured via memory dumps or inspected at runtime. When searching for encryption keys in memory, it’s helpful to keep in mind that such keys are typically high-entropy but fixed in length (e.g., 16, 32, 64, or 256 bytes).

For example, suppose we suspect a hardcoded encryption key is used to validate firmware updates for an IoT device. We disassemble the update binary using IDA Pro or Ghidra and identify a call to the following function:

AES_CBC_decrypt(uint8_t *key, uint8_t *iv, uint8_t *ciphertext, size_t len);

In reviewing the disassembly, we find the instruction block below, which reveals that the key is located at 0x20001F80:

LDR R0, =0x20001F80 ; address of AES key

BL AES_CBC_decrypt

We can then dump the key from a RAM image using the following commands from a Linux shell (assuming ram.bin starts at the SRAM base address 0x20000000, which puts the key at file offset 0x1F80):

OFFSET=$((0x1F80))

dd if=ram.bin bs=1 skip=$OFFSET count=16 status=none | xxd

This will return the following:

00000000: 2b7e 1516 28ae d2a6 abf7 1588 09cf 4f3c  +~..(.........O<

This matches the AES-128 test vector key (2b7e151628aed2a6abf7158809cf4f3c) specified in NIST SP 800-38A.

Real-world example: TP-Link router firmware (CVE-2017-13772)

In 2017, a vulnerability (CVE-2017-13772) was discovered where TP-Link router models contained hardcoded credentials in their firmware, potentially allowing attackers to decrypt traffic and gain administrative access.

Reference: CVE-2017-13772

Real-world example: Medtronic pacemakers (CVE-2019-6538)

In March 2019, the Cybersecurity and Infrastructure Security Agency (CISA) disclosed vulnerabilities in Medtronic devices using the Conexus telemetry protocol. The primary issue (CVE-2019-6538) stemmed from the absence of authentication or authorization, allowing attackers within radio range to alter device settings.

References: CVE-2019-6538 – NVD and ICS Medical Advisory (Update C) – CISA

Real-world example: Cisco VPN backdoor (CVE-2018-0101)

In early 2018, a critical vulnerability (CVE-2018-0101) was identified in Cisco ASA software’s Secure Sockets Layer (SSL) VPN functionality. This flaw could have allowed an unauthenticated, remote attacker to cause a reload of the affected system or, under certain conditions, remotely execute code. While the primary issue was a denial of service, the vulnerability raised concerns about potential unauthorized access and key extraction.

References: https://www.cisco.com/c/en/us/support/docs/csa/cisco-sa-20180129-asa1.html and https://nvd.nist.gov/vuln/detail/cve-2018-0101

 

Fault injection attacks

Understanding the risk

Fault injection attacks involve intentionally disrupting a device’s operation to force it into an exploitable state. Common techniques include the following:

  • Voltage glitching
  • Clock glitching
  • Electromagnetic fault injection (EMFI).

By using these techniques at just the right moment, we can manipulate the processor execution state to bypass authentication, extract cryptographic keys, or subvert firmware verification processes. The key to fault injection attacks is understanding how the CPU works at a very low level. With that in mind, let’s review a couple of things about CPU microarchitecture.

We’ll start with the obvious: computers operate on electricity. What may be less obvious is that to represent digital information using electrical circuits, the CPU defines specific voltage ranges that it considers “1” and “0.” Therefore, if we manipulate the voltage in a circuit, we may be able to flip 1s to 0s and 0s to 1s, which can have some very significant downstream effects. For example, in the Intel 8085 instruction set, the CMP B (compare) instruction is encoded as the hex code B8 (1011 1000), and the XRA B (XOR) instruction as A8 (1010 1000). By flipping a single bit, we might be able to switch a comparison into an entirely different operation.
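
We can sanity-check this with a couple of lines of Python (using the standard 8085 encodings CMP B = 0xB8 and XRA B = 0xA8):

```python
def bit_flips(a: int, b: int) -> int:
    """Number of bit positions in which two opcode bytes differ."""
    return bin(a ^ b).count("1")

CMP_B = 0xB8  # 1011 1000 -- compare register B against the accumulator
XRA_B = 0xA8  # 1010 1000 -- XOR register B into the accumulator

# A single flipped bit is enough to change the instruction entirely.
print(bit_flips(CMP_B, XRA_B))  # -> 1
```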

The next important microarchitecture consideration is that most modern CPUs implement a fetch-decode-execute pipeline. As the name implies, this pipeline consists of three primary phases:

 

  • Fetch: The CPU retrieves the next instruction from memory.
  • Decode: The instruction is translated into control signals for execution.
  • Execute: The CPU performs the actual operation (e.g., arithmetic and memory access).

 

When a voltage or clock glitch is injected mid-instruction, this pipeline can miss a fetch, decode, or execute signal, leading to pipeline desynchronization and unintended control flow. Since our goal with fault injection is to manipulate control flow in unexpected and unauthorized ways, glitching and the pipeline’s fragility go hand in hand. The challenge with fault injection is timing and scaling the injection(s) to yield consistent results that lead to impactful security outcomes. And for that, our greatest tools are persistence, patience, and creativity.
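
In practice, finding a working glitch usually means brute-forcing a small parameter grid. The sketch below is hardware-agnostic: fire_glitch stands in for whatever actually pulses the power rail or clock (a ChipWhisperer, a custom MOSFET circuit, etc.), and the function names and parameters are hypothetical scaffolding for the search strategy rather than any real tool’s API.

```python
import itertools

def sweep_glitch_parameters(widths, offsets, tries_per_setting, fire_glitch):
    """Exhaustively search (pulse width, trigger offset) combinations,
    retrying each setting a few times because glitch outcomes are
    nondeterministic. Returns every setting that produced at least
    one success (e.g., a skipped signature check)."""
    successful = []
    for width, offset in itertools.product(widths, offsets):
        for _ in range(tries_per_setting):
            if fire_glitch(width, offset):
                successful.append((width, offset))
                break  # move on once this setting works
    return successful
```

In a real campaign, fire_glitch would arm the glitcher, reset the target, and check for anomalous behavior (an unexpected prompt, a dumped secret, a passed auth check); the sweep simply automates the patience.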

Real-world example: Voltage glitching on STM32-based hardware wallets

Voltage glitching is a fault injection technique where an attacker introduces brief disruptions to a device’s power supply to induce errors in its operation. For STM32-based hardware wallets, such as the Trezor One and KeepKey, this method can bypass security features, allowing unauthorized access to sensitive data.

Trezor One Wallets: In January 2020, Kraken Security Labs demonstrated a method to extract seeds from Trezor One hardware wallets by using voltage glitching to bypass security protections. The attack required physical access to the device and specialized equipment.

Reference: Kraken’s Report on the Trezor Vulnerability

KeepKey Wallets: In 2018, Riscure conducted research showing that voltage glitching could bypass the PIN protection on KeepKey hardware wallets, which are based on the STM32F205 microcontroller.

Reference: Riscure’s Analysis of the KeepKey Vulnerability

Real-world example: EM side-channel attack on Google Titan Security Key

In January 2021, researchers from NinjaLab published a study titled “A Side Journey to Titan,” revealing that by observing electromagnetic emissions during the device’s cryptographic operations, they could extract the ECDSA private key. This side-channel attack required physical access to the device and specialized equipment.

References: A Side Journey to Titan – NinjaLab Research and Cloning Google Titan 2FA Keys – Schneier on Security

 

Unprotected debug interfaces (JTAG, UART)

Understanding the risk

Many hardware devices expose debugging interfaces such as JTAG and UART, which can be used to bypass security features and extract firmware. If these interfaces are not properly secured (e.g., disabled in production builds or protected by authentication), attackers can gain full control of a device, including access to cryptographic secrets.

JTAG stands for Joint Test Action Group, which is a standardized interface that’s used to provide testing and debugging capabilities for hardware devices. The connector layout for JTAG ports can vary depending on the specific use case, but it’s common to see 6-, 10-, and 20-pin layouts. Unsecured JTAG interfaces can be extremely valuable, as they typically provide deep system-level access to facilitate debugging.

Common JTAG interface layouts (from ULink2 User’s Guide)

From the bug bounty hunter’s perspective, these interfaces provide many opportunities for juicy findings. Oftentimes, the mere presence of an unsecured JTAG or UART interface is a reportable bug in itself, albeit likely a low-impact finding. But the real value of such a finding is the opportunity to leverage it to dig further into a device’s operations, with the goal of identifying high-impact bugs (e.g., hardcoded keys, poorly implemented cryptographic algorithms, the use of weak random number generators [RNGs], or key reuse).

 

Exposed UART Pins (from https://www.secureideas.com/blog/hardware-hacking-finding-uart-pinouts-on-pcbs)

One of the best tools for connecting to exposed UART pins is the FTDI Friend, which provides an easy way to convert bare UART pins to USB. I recommend that every hardware hacker keep several FTDI Friends on hand, as they’re inexpensive and it is possible to burn them out if you wire them up incorrectly. Which brings me to another very important point: only attach the GND, TX, and RX pins. It’s almost never appropriate to attach the VCC pin, and doing so has a high likelihood of releasing the magic smoke from the target, your testing equipment, or both.

 

FTDI Friend (from https://www.adafruit.com/product/284)

Once connected to an insecure JTAG or UART port, we can dump the firmware, inspect configuration files, debug running services, and manipulate device behavior to reveal insecure cryptographic implementation flaws. It’s helpful to think of such ports as an open door—once inside, you can get to work applying all of the techniques covered throughout this series to hunt down juicy crypto bugs!

Real-world example: Ring doorbell hack

In November 2019, researchers from Bitdefender discovered that the Ring Video Doorbell Pro had an exposed UART interface. By accessing this interface, an attacker with physical access could have extracted the device’s firmware, potentially revealing Wi-Fi credentials and other sensitive information.

References: https://www.bitdefender.com/en-us/blog/hotforsecurity/bitdefender-finds-ring-doorbell-vulnerability-exposes-users-wi-fi-password

Real-world example: Google Chromecast key recovery

In 2014, security researcher Dan Petro demonstrated that the original Google Chromecast had an open UART port. By connecting to this port, an attacker could have accessed the device’s shell and retrieved the Wi-Fi credentials stored on the device.

References: https://bishopfox.com/blog/rickmote-controller-hacking-one-chromecast-time

 

Insecure firmware and software updates

Understanding the risk

Firmware updates should be cryptographically signed and verified to prevent tampering. If updates lack authentication or use weak/hardcoded signatures, attackers can inject malicious firmware, creating a backdoor into a device. Insecure update mechanisms also enable rollback attacks, where an attacker forces a device to revert to a vulnerable firmware version.
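
A minimal sketch of a verify-then-apply update flow, including a monotonic version check to block rollbacks. For brevity this uses an HMAC (shared-secret) tag; a real design would use an asymmetric signature so the device only stores a public key, and the installed-version counter would live in tamper-resistant storage. All names and formats here are illustrative.

```python
import hashlib
import hmac
import struct

TAG_LEN = 32  # SHA-256 output size

def package_update(version: int, image: bytes, key: bytes) -> bytes:
    """Vendor side: prepend a big-endian version header, append a MAC
    computed over the header AND the image (so neither can be swapped)."""
    blob = struct.pack(">I", version) + image
    return blob + hmac.new(key, blob, hashlib.sha256).digest()

def verify_update(package: bytes, key: bytes, installed_version: int) -> bytes:
    """Device side: reject tampered images and version rollbacks,
    then return the verified image for flashing."""
    blob, tag = package[:-TAG_LEN], package[-TAG_LEN:]
    expected = hmac.new(key, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    version = struct.unpack(">I", blob[:4])[0]
    if version <= installed_version:
        raise ValueError("rollback attempt rejected")
    return blob[4:]
```

Note that the version number is covered by the MAC: if it weren’t, an attacker could splice an old (vulnerable) image onto a new version header and defeat the rollback check.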

Real-world example: Medtronic insulin pumps

Medical devices like insulin pumps rely on wireless communication for monitoring and control. If these communications are not properly encrypted or authenticated, attackers can intercept or manipulate data, potentially leading to harmful consequences for patients. Certain models were found to have vulnerabilities due to a lack of encryption in their wireless communication, allowing unauthorized users to alter pump settings.

Reference: “Cybersecurity Vulnerabilities of Insulin Pumps” – U.S. Food & Drug Administration (FDA), 2019.

Real-world example: Siemens S7-300 and S7-400 PLCs

Programmable logic controllers (PLCs) are critical components in industrial control systems. Malware targeting specific PLCs can manipulate industrial processes, leading to physical damage. The Stuxnet worm is a notable example of such an attack. Stuxnet specifically targeted these models, exploiting vulnerabilities in the Step7 software to alter PLC code and sabotage Iran’s uranium enrichment process.

Reference: “Stuxnet: Dissecting a Cyberwarfare Weapon” – IEEE Security & Privacy, 2012.

Real-world example: Juniper Dual_EC_DRBG backdoor (CVE-2015-7755)

The Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG) is a cryptographic algorithm that was suspected to contain a backdoor, potentially allowing those who knew certain parameters to predict its output. Incorporating this algorithm into security products could have introduced vulnerabilities, especially if the implementation was flawed. In December 2015, Juniper Networks disclosed that unauthorized code had been inserted into their ScreenOS firmware, introducing two vulnerabilities: an authentication bypass (CVE-2015-7755) and a VPN decryption issue. The latter was linked to the use of Dual_EC_DRBG, which, due to its known weaknesses, could have allowed attackers to decrypt VPN traffic.

References: On the Juniper backdoor, Details about Juniper’s Firewall Backdoor, and CVE-2015-7755 Details

 

Lack of cryptographic authentication

Understanding the risk

Without proper cryptographic authentication, attackers can impersonate trusted entities or modify critical data. Devices relying solely on MAC addresses, serial numbers, or simple challenge-response mechanisms without cryptographic integrity checks are vulnerable to replay attacks and spoofing.
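
The standard fix for replayable handshakes is a fresh-nonce challenge-response: the verifier issues a random, single-use challenge per session and checks a keyed MAC over it, so a captured response is worthless the next time around. A minimal sketch, with all class and function names of my own invention:

```python
import hashlib
import hmac
import os

class Authenticator:
    """Verifier side: issues single-use challenges, checks keyed responses."""

    def __init__(self, shared_key: bytes):
        self.key = shared_key
        self.outstanding = set()

    def issue_challenge(self) -> bytes:
        challenge = os.urandom(16)  # fresh randomness is what defeats replay
        self.outstanding.add(challenge)
        return challenge

    def verify(self, challenge: bytes, response: bytes) -> bool:
        if challenge not in self.outstanding:
            return False  # unknown or already-used challenge (replay)
        self.outstanding.discard(challenge)  # strictly single use
        expected = hmac.new(self.key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def device_respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Prover side (e.g., firmware on the device being authenticated)."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()
```

Contrast this with echoing back a MAC address or serial number: those are constants, so any eavesdropper can replay them verbatim. Here the response is bound to a value the verifier has never used before.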

Real-world example: Gas station controllers (Veeder-Root TLS-300/TLS-350)

Automated tank gauges (ATGs) like the Veeder-Root TLS series monitor fuel levels at gas stations. If these systems have weak authentication or default credentials, attackers can gain unauthorized access, leading to potential fuel theft or safety hazards. These models were found to have default passwords, making them vulnerable to unauthorized remote access.

Reference: “Automated Tank Gauges: Attack Surface and Vulnerabilities” – Rapid7 Research, 2015.

Real-world example: Electrical grid SCADA systems

SCADA systems in electrical grids are essential for monitoring and control. Without robust cryptographic authentication, these systems are vulnerable to unauthorized commands, potentially leading to widespread outages. In the 2015 attack on Ukraine’s power grid, attackers exploited weak authentication in SCADA systems to remotely control circuit breakers, causing significant power outages.

Reference: “Analysis of the Cyber Attack on the Ukrainian Power Grid” – Electricity Information Sharing and Analysis Center (E-ISAC), 2016.