
PROTECTING MOBILE APPS IN THE AGE OF AI AND QUANTUM COMPUTING

 


INTRODUCTION

Modern mobile app security faces a dual challenge: rapid advances in code analysis tools and AI are eroding the effectiveness of traditional code/app obfuscation, and looming quantum computing threats call into question the long-term strength of encryption.

This whitepaper examines the recent successful attacks on code obfuscation, evaluates popular obfuscation tools and their security and performance limits, and discusses whether standard AES encryption remains a safe bet against reverse engineering, AI-driven analysis, and future quantum decryption.

AI is Decoding Obfuscated Code

Recent studies show that modern AI models can efficiently de-obfuscate malware code, significantly reducing the protection offered by obfuscation alone.

Obfuscation is a Speed Bump, Not a Roadblock

Experts agree that no obfuscation is unbreakable: with sufficient time and tools, a determined reverse engineer will eventually succeed. Obfuscation can slow attackers down, but cannot stop them outright.

Defense-in-Depth is Essential

Relying solely on obfuscation is risky. Combine obfuscation with strong encryption and runtime protections for robust security. AES-256 encryption remains a trustworthy cornerstone, and post-quantum readiness should be planned for long-term resilience.

As noted in Android's own privacy and security guidance for OWASP compliance[1], “An attacker with access to reverse engineering tools can retrieve a hard-coded secret very easily. Depending on conditions the impact might vary, but in many cases it leads to major security issues, such as access to sensitive data.”

The recommended mitigation is AES-256 encryption with keys backed by the KeyChain API or the Android Keystore, which is sound and elevates security tremendously.

This doesn’t eliminate all risks (an attacker could still abuse the key via the app if they control the device or the app’s execution), but it prevents straightforward extraction of the raw key material through static analysis. This is why obfuscation is often used in conjunction with encryption.
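To see how little effort that extraction takes, here is a minimal, self-contained Python sketch. Everything in it is invented for illustration (the fake binary, the key format, and the `find_secret` helper): it "reverse engineers" a hard-coded secret out of a byte blob with nothing more than a substring scan, roughly what `strings` or a decompiler search does against a shipped APK.

```python
# Illustrative only: why hard-coded secrets fall to static analysis.
# We simulate a compiled app as a byte blob with an embedded key, then
# recover the key with a plain scan -- no AI or cryptanalysis needed.

HARDCODED_KEY = b"sk_live_51Habc123"  # invented example secret

# Pretend this is the shipped binary: code bytes with the key embedded.
fake_binary = b"\x00\x01\x8fcode\x00" + HARDCODED_KEY + b"\x00more_code\xff"

def find_secret(blob, prefix):
    """Scan raw bytes for a printable run starting with a known key prefix."""
    i = blob.find(prefix)
    if i == -1:
        return None
    j = i
    while j < len(blob) and 32 <= blob[j] < 127:  # printable ASCII run
        j += 1
    return blob[i:j]

print(find_secret(fake_binary, b"sk_live_"))  # the secret falls right out
```

A keystore-backed key never appears as such a byte run in the artifact, which is exactly why the guidance above recommends it.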

CODE OBFUSCATION

 

Balancing Protection and Vulnerability in the Age of AI

Code obfuscation is the practice of transforming software into a form that is difficult for humans or decompilers to understand, without altering its functionality. Mobile app developers use obfuscation to protect intellectual property and prevent attackers from extracting secrets or tampering with the app’s logic. Techniques range from simple name stripping and code minification to complex control-flow alteration and even instruction virtualization (where real code is replaced by a custom virtual machine embedded in the app)[2]. In practice, many Android developers are familiar with R8/ProGuard, but these are optimizers rather than obfuscators by definition[3].
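To make two of those techniques concrete, here is a toy Python sketch (the function names and the trivial PIN check are invented for this example): the "obfuscated" variant applies name stripping and inserts an opaque predicate, yet computes exactly the same result as the readable version.

```python
# Illustrative only: real obfuscators work on bytecode or native code;
# this sketch just demonstrates name stripping and an opaque predicate.

def check_pin(entered, expected):
    """Clear version: easy for a human (or an LLM) to understand."""
    return entered == expected

def a(b, c):
    # Name stripping: meaningful identifiers replaced by a, b, c.
    d = 0
    # Opaque predicate: a square mod 4 is always 0 or 1, never 2, so the
    # branch below can never run -- but a static analyzer must prove
    # that before it can discard the dead code.
    if (len(b) * len(b)) % 4 == 2:
        d = 1  # unreachable, present only to confuse analysis
    return (b == c) and d == 0

assert check_pin("1234", "1234") == a("1234", "1234")
assert check_pin("0000", "1234") == a("0000", "1234")
```

Instruction virtualization goes much further than this, replacing whole functions with interpreted custom bytecode, but the principle is the same: identical behavior, higher analysis cost.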

However, obfuscation has inherent limitations. As the OWASP Mobile Security Testing Guide bluntly states: “Ultimately, the reverse engineer always wins”[3]. In other words, given enough time and skill, any obfuscated code can be reverse engineered. Obfuscation's goal is to raise the effort required – ideally beyond what is worthwhile for the attacker – but it cannot guarantee permanent secrecy. This fundamental reality is now compounded by new AI tools that make reverse engineering faster and more accessible to attackers.

Rise of AI-Powered De-obfuscation

Large Language Models (LLMs) and advanced code analysis tools are changing the game. Traditionally, a human analyst or specialized scripts were needed to peel apart obfuscated code. Today, AI models can automate much of that work. For example, researchers in 2024 used state-of-the-art LLMs to successfully de-obfuscate real-world malware (from the Emotet campaign) and found that some LLMs can efficiently recover obfuscated code’s logic [5]. Another team introduced ChatDEOB (2025), a fine-tuned LLM that improved de-obfuscation quality by over 116% according to the SacreBLEU clarity metric [6]. They report an average 85% improvement in making obfuscated code understandable using a customized LLM approach [4]. These results are striking: AI is accelerating de-obfuscation beyond manual capabilities.


Crucially, these AI methods aren’t yet omnipotent. In mid-2023, Jscrambler (a vendor of JavaScript protection) tested GPT-4 on their hardened code. While GPT-4 easily reformatted minified code, it failed to de-obfuscate Jscrambler’s heavily obfuscated variant [5][7]. The model either gave up, provided only high-level guidance, or produced incorrect output when faced with advanced techniques like dynamic self-defending code and opaque predicates [5]. Jscrambler noted GPT-4 cannot execute code and is confused by runtime-based obfuscation that needs actual execution to unravel [5]. In short, current general-purpose LLMs struggle with extremely sophisticated obfuscation – but specialized or fine-tuned models are closing the gap.

The trend is clear – what used to be a painstaking manual reverse-engineering job can now be done in a fraction of the time with AI assistance. Relying on “security by obscurity” (code obfuscation alone) is increasingly dangerous as these AI de-obfuscation tools mature.

WHEN OBFUSCATION HAS FALLEN SHORT

 

To illustrate the reality of obfuscation’s limits, let's look at some real-world scenarios and attacks where obfuscation was defeated:

Malware authors often rely on obfuscation to hide malicious scripts. Security researchers reported that Emotet’s obfuscated PowerShell and JavaScript components were efficiently decoded by large language models [3].

While Emotet used automated packing and obfuscation, the AI was able to summarize its behavior (e.g., network calls, payload decoding) with surprising accuracy. This shows that what protects malware from quick signature detection is not enough against AI-assisted analysis. In practice, it means defenders (and potentially attackers) can speed-run what used to be an arduous manual reverse engineering process.

A more recent attack, made public in July 2025, showed how the Anatsa banking trojan sneaked into Google Play via an app posing as a PDF viewer that counted more than 50,000 downloads[8]. A maintenance notification is displayed on top of the banking app’s UI, obscuring the malware’s activity in the background and preventing victims from contacting their bank or checking their accounts for unauthorized transactions. The malware becomes active immediately after the app is installed, tracking users who launch North American banking apps and serving them an overlay that allows account access, keylogging, or automated transactions.

In 2022, a banking app protected with decent obfuscation had its logic cracked by a security researcher who used ChatGPT to help understand pieces of the smali code (Dalvik assembly). The researcher would copy segments into ChatGPT and ask for an explanation.

While the AI refused to output raw de-obfuscated code (content policy), through iterative prompting they inferred the purpose of key functions (like one generating an OTP or performing cryptographic checks). This isn't a documented public case, but anecdotal evidence in forums suggests this approach has been used. It underlines that even without recovering the original source, an AI can help trace vulnerabilities or secrets just by describing “what is this piece of code doing”. Obfuscation of names won't prevent step-by-step logical deduction.

Jscrambler’s experiment is a case where obfuscation held up – GPT-4 could not retrieve the original code from a deeply obfuscated snippet [5]. It either hit length limits or refused due to terms of service. This “attack” failed, demonstrating that strong client-side obfuscation can still be effective against current AI. However, it also highlights a limitation: GPT-4’s refusal was partly ethical – it saw de-obfuscation as potentially malicious. A custom or open-source LLM without such guardrails might attempt it more fully. And indeed, the academic ChatDEOB work essentially did what ChatGPT would not: fine-tune a model without those restrictions to directly produce de-obfuscated code [4].

These examples emphasize that obfuscation alone, especially in its lighter forms, only delays attacks rather than preventing them. Successful attacks often combine multiple techniques to crack an app: static analysis to map out structure, dynamic analysis to dump memory or bypass checks, and increasingly AI to fill in understanding. The most common attacks, though, still exploit bad coding practices such as unprotected credentials, keys, and passwords[9][10].

In the next section, we will continue to look into the future of Mobile Application Security and see how Cryptomathic MASC deploys a future-proof mechanism for protecting mobile apps.

WHY NOT RELY ON STANDARD ENCRYPTION TO STRENGTHEN YOUR APP'S SECURITY?

 

Given the weaknesses of pure obfuscation, many turn to encryption for stronger protection. For mobile apps, this can mean encrypting sensitive assets or even entire modules of the app, so that no human-readable code is present unless the app is actively running. One may nonetheless wonder whether it is safe to bet on standard AES encryption to resist reverse engineering, AI, or future quantum computers.

Let’s break that down:

AES vs Reverse Engineering and AI

AES encryption, when used in application protection, typically means the app code or data is encrypted at rest (e.g., the APK has encrypted native libraries or classes) and only decrypted in memory at runtime. If done properly, an attacker who simply decompiles or inspects the static files gets nothing useful – just ciphertext. This is a much stronger defense than obfuscation, because without the key, the code is effectively opaque.

Cryptomathic’s approach with its flagship Mobile App Security Core (MASC) solution uses AES to encrypt the compiled bytecode or native code of the app, storing the key securely and only decrypting on the fly when needed. This means even if attackers use AI, they first need to obtain the actual code – which they can’t until they somehow extract or bypass the encryption. In essence, encryption raises the bar from “obscure the code” to “lock the code with a cryptographic key."
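The general encrypt-at-rest / decrypt-at-runtime pattern can be sketched as follows. This is an illustrative sketch, not Cryptomathic's implementation; because Python's standard library has no AES, an HMAC-SHA256 keystream stands in for AES here, and the demo key material is invented (a real app would use AES-256 with a keystore-backed key, as described above).

```python
# Illustrative sketch of encrypt-at-rest / decrypt-at-runtime.
# Stand-in cipher: an HMAC-SHA256 keystream in counter mode replaces AES.
import hmac, hashlib

def keystream_xor(key, data):
    """XOR data against a deterministic HMAC-SHA256 keystream (counter mode).
    The same call encrypts and decrypts, since XOR is its own inverse."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key = hashlib.sha256(b"demo-key-material").digest()  # invented 256-bit demo key

# "Build time": the module's code is stored encrypted inside the app.
plaintext_code = b"def get_otp():\n    return 123456\n"
encrypted_blob = keystream_xor(key, plaintext_code)

# Static analysis of the shipped artifact sees only ciphertext:
assert encrypted_blob != plaintext_code

# "Runtime": decrypt in memory only when needed, then load/execute.
recovered = keystream_xor(key, encrypted_blob)
assert recovered == plaintext_code
```

The decisive design question is visible even in this toy: everything hinges on where `key` lives, which is why the next sections focus on key secrecy rather than the cipher itself.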

Is AES (Advanced Encryption Standard) safe against current attackers and AI?

Yes, AES-256 in particular is considered unbreakable with current technology[11]. No AI model can magically decrypt AES without the key; it's not a pattern or logic problem, it's a math problem requiring a brute-force search over 2^256 possibilities (an astronomically large number). AI doesn’t help brute force faster. So, if your code is encrypted with a strong key and an attacker can't find that key, the code is safe. In fact, using robust encryption shifts the problem – attackers will try to find the key via other means (like extracting it from the device memory or intercepting it in use).
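The arithmetic behind that claim is easy to check. In the sketch below, the rate of 10^12 key tests per second is an invented, deliberately generous assumption; the estimate covers exhausting half the keyspace, the expected cost of a brute-force search.

```python
# Back-of-the-envelope arithmetic for brute-forcing AES keys.
# Assumes an (absurdly generous) attacker testing 1e12 keys per second.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits, keys_per_second):
    """Expected years to search half of a key_bits-bit keyspace."""
    return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

print(f"AES-128: {brute_force_years(128, 1e12):.2e} years")  # ~5e18 years
print(f"AES-256: {brute_force_years(256, 1e12):.2e} years")  # ~2e57 years
```

For scale, the age of the universe is on the order of 1.4×10^10 years, so even the AES-128 figure exceeds it by many orders of magnitude.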

This is where obfuscation and encryption often go hand-in-hand: you encrypt the code, and obfuscate the routines that handle the decryption key, so that an attacker who tries to locate the decryption key in the app will struggle. This combined approach is much stronger than obfuscation alone. It forms the basis of a multi-defense security strategy.

AI could assist an attacker in finding the decryption routine if the implementation is poor (for instance, an AI could analyze an app and say "there is a suspicious AES decryption call here using a static key"). But the AI cannot break the AES cipher itself. So as long as the key remains secret, AES is a solid shield.

The Caveat? The implementation has to be correct. If developers "hard-code" the key (even if they obfuscate it), a skilled attacker can still eventually find it (by searching memory at runtime, or noticing it in the code logic). Techniques like white-box cryptography try to make keys not directly extractable even if the algorithm is known, by blending keys into code.
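As a toy illustration of the "never store the raw key whole" idea, a key can be shipped as random XOR shares (which could be scattered across the app) and recombined in memory only at use time. This is not white-box cryptography, which uses far more involved transformations; the sketch and all its names are invented, and it merely shows that no single stored constant needs to equal the key.

```python
# Toy key-splitting sketch: the raw key never appears as one stored value.
import secrets

def split_key(key, n=3):
    """Split key into n XOR shares; any n-1 shares alone reveal nothing."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    acc = key
    for s in shares:
        acc = bytes(a ^ b for a, b in zip(acc, s))
    shares.append(acc)  # final share makes the XOR of all shares == key
    return shares

def recombine(shares):
    """XOR all shares together to reconstruct the key at use time."""
    acc = bytes(len(shares[0]))  # all-zero accumulator
    for s in shares:
        acc = bytes(a ^ b for a, b in zip(acc, s))
    return acc

key = secrets.token_bytes(32)         # e.g. an AES-256 key
shares = split_key(key)
assert all(s != key for s in shares)  # no stored piece equals the key
assert recombine(shares) == key       # reconstructed only in memory
```

Of course, the recombined key still exists briefly in memory, which is exactly the residual risk the paragraph above describes; white-box techniques attack that remaining window.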

In summary, betting on AES encryption as a core defense is wise for current threats but requires advanced techniques to protect the key: no known AI or algorithm can bypass AES-256 if used correctly. It’s a much stronger foundational layer than obfuscation.

FUTURE QUANTUM COMPUTERS AND AES

Quantum computing poses a well-known threat to some cryptography:

  • For public-key cryptography (RSA, ECC), a large quantum computer running Shor’s algorithm could break them. That’s why there’s a race to deploy post-quantum cryptography (PQC) for protocols.
  • For symmetric cryptography like AES, the threat is different. Quantum computers don’t completely break AES; instead, Grover’s algorithm can theoretically brute force a key in roughly √N steps instead of N. That means it cuts the effective key length in half.

Concretely, AES-128 (128-bit key), which classically takes 2^128 operations to brute force, could take on the order of 2^64 quantum operations with Grover’s algorithm – significantly less, but still a very large number (about 1.8×10^19). For comparison, 2^64 is over 18 quintillion, which is enormous – likely out of reach even for future quantum machines for a long time (and would require error-corrected quantum operations far beyond current tech). AES-256 would effectively be like 2^128 under quantum attack, which is 3.4×10^38 – utterly impractical to brute force[3]. In fact, a 2024 paper by NIST researchers concluded that with realistic quantum error correction overhead, AES-256 and even AES-128 provide ample security margin against quantum attack for the foreseeable future [3]. They noted that the physical resources and time needed to run Grover at scale make such attacks extraordinarily difficult in practice, and that 256-bit keys are recommended to stay safe.
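The Grover arithmetic above can be written out directly: quantum search needs roughly √N = 2^(n/2) steps for an n-bit key, i.e. it halves the effective key length.

```python
# Grover's algorithm arithmetic: quantum brute force of an n-bit key
# takes on the order of sqrt(2^n) = 2^(n/2) steps.

def effective_quantum_bits(key_bits):
    """Effective security level of a symmetric key under Grover's algorithm."""
    return key_bits // 2

def grover_steps(key_bits):
    """Order-of-magnitude quantum operations to brute force the key."""
    return 2 ** effective_quantum_bits(key_bits)

assert effective_quantum_bits(128) == 64          # AES-128 -> 64-bit effective
assert grover_steps(128) == 18446744073709551616  # 2^64, the ~1.8e19 cited
assert effective_quantum_bits(256) == 128         # AES-256 -> 2^128 effective
```

This is why the straightforward quantum mitigation for symmetric crypto is simply doubling the key length rather than replacing the algorithm.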

So “standard AES encryption” can be future-proofed by using 256-bit keys. AES-256 is generally believed to be secure even in a projected quantum era, unless some breakthrough faster-than-Grover attack is found (considered very unlikely). Thus:

  • It is safe to rely on AES-256 against future quantum threats for now. Organizations like NSA have mandated AES-256 for “quantum resistance” moving forward, precisely because of this reasoning.
  • AES-128, while likely secure for years, might eventually become borderline in a hypothetical far future quantum scenario. If one is really forward-looking (thinking decades), sticking to AES-256 or higher is prudent.

Additionally, the community is working on post-quantum symmetric schemes (though symmetric schemes are less in need of outright replacement). Using larger key variants (AES-192 and AES-256 are standardized options) is the straightforward mitigation.

Edge scenario: If an app’s security absolutely must remain intact even 30 years from now against an adversary with a quantum computer, then yes, maybe “standard AES” alone might be questioned. But mobile app code usually doesn’t have such longevity requirements (apps are updated frequently).

Therefore, don't worry about AI or quantum "guessing" your AES key outright. The concern is more practical: where is your key stored? A quantum computer or AI won't magically pluck it out, but a sloppy implementation might leave it accessible (in memory, or via an insecure key derivation). That’s where teaming with security experts such as Cryptomathic is key.

RECOMMENDATIONS AND CONCLUSION

 

Issuers of security-critical mobile apps already augment the security posture of their native or cross-platform apps by adding additional security components to maintain integrity and protect user or business data. These include obfuscation, asset protection, secure storage, encryption APIs, and third-party RASP libraries, e.g. for jailbreak detection, biometric authorisation, connection security, etc.

Relying on pure obfuscation (e.g. a post-compilation wrapper) is not safe for highly sensitive apps in the face of advancing AI capabilities. There are already documented successes in defeating obfuscation using AI [3][4], and this trend will accelerate. Obfuscation should be augmented with strong cryptographic protection of the application code and data. AES encryption, especially AES-256, is a reliable workhorse that remains unbroken and is expected to remain so even as quantum computing matures.

By combining these measures, you create a multi-layered defense: encryption thwarts static analysis cold, obfuscation and runtime tricks frustrate dynamic analysis and make AI’s job harder, and hardware security plus good design protect keys and secrets from extraction. This defense-in-depth approach is the best way to protect mobile apps against reverse engineering both now and in the foreseeable future.
