
CVE-2021-46989

MEDIUM
Published 2024-02-28T08:13:15.930Z

CVSS Score

V3.1
5.5 / 10
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Base Score Metrics
Exploitability: N/A Impact: N/A

EPSS Score

v2023.03.01
0.000 probability of exploitation in the wild

There is a 0.0% chance that this vulnerability will be exploited in the wild within the next 30 days.

Updated: 2025-01-25
Exploit Probability
Percentile: 0.124
Higher than 12.4% of all CVEs

Attack Vector Metrics

Attack Vector
LOCAL
Attack Complexity
LOW
Privileges Required
LOW
User Interaction
NONE
Scope
UNCHANGED

Impact Metrics

Confidentiality
NONE
Integrity
NONE
Availability
HIGH
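
For readers who want to verify the 5.5 base score from these metrics, the sketch below plugs the published CVSS v3.1 weights for this vector into the base-score formula. The roundup helper is a simplified approximation of the specification's integer-based Roundup routine.

#include <math.h>
#include <stdio.h>

/* CVSS v3.1 "roundup": smallest one-decimal value >= input (approximation) */
static double roundup(double x)
{
    return ceil(x * 10.0) / 10.0;
}

int main(void)
{
    /* numeric weights from the CVSS v3.1 specification */
    double av = 0.55;                   /* Attack Vector: Local */
    double ac = 0.77;                   /* Attack Complexity: Low */
    double pr = 0.62;                   /* Privileges Required: Low (Scope Unchanged) */
    double ui = 0.85;                   /* User Interaction: None */
    double c = 0.0, i = 0.0, a = 0.56;  /* C:N, I:N, A:H */

    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                     /* Scope Unchanged */
    double exploitability = 8.22 * av * ac * pr * ui;

    double base = impact <= 0.0 ? 0.0
                                : roundup(fmin(impact + exploitability, 10.0));

    printf("impact=%.2f exploitability=%.2f base=%.1f\n",
           impact, exploitability, base);   /* prints base=5.5 */
    return 0;
}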

Description

In the Linux kernel, the following vulnerability has been resolved:

hfsplus: prevent corruption in shrinking truncate

I believe there are some issues introduced by commit 31651c607151 ("hfsplus: avoid deadlock on file truncation").

HFS+ has extent records which always contain 8 extents. Once the first extent record in the catalog file gets full, new ones are allocated from the extents overflow file.
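
For orientation, the on-disk layout behind those extent records looks roughly like the sketch below. It is simplified for illustration: the kernel's definitions in fs/hfsplus/hfsplus_raw.h use big-endian (__be32) fields, while plain uint32_t is used here so the snippet stands alone.

#include <stdint.h>
#include <stdio.h>

/* one extent: a contiguous run of allocation blocks */
struct hfsplus_extent {
    uint32_t start_block;
    uint32_t block_count;
};

/* an extent record always holds exactly 8 extents; once a file needs
 * more than the record embedded in its catalog entry, further records
 * live in the extents overflow file */
typedef struct hfsplus_extent hfsplus_extent_rec[8];

int main(void)
{
    printf("one extent record = %zu bytes\n", sizeof(hfsplus_extent_rec));
    return 0;
}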

When a shrinking truncate ends in the middle of an extent record that is located in the extents overflow file, the logic in hfsplus_file_truncate() was changed so that the call to hfs_brec_remove() is no longer guarded.

The right action would be to free just the extents that exceed the new size inside the extent record by calling hfsplus_free_extents(), and then check whether the whole extent record should be removed. However, since the guard (blk_cnt > start) now comes after the call to hfs_brec_remove(), this has the unfortunate effect that the last matching extent record is removed unconditionally.

To reproduce this issue, create a file that has at least 10 extents, then perform a shrinking truncate into the middle of the last extent record, so that the number of remaining extents is neither below nor divisible by 8. This causes the last extent record (8 extents) to be removed entirely instead of being truncated into its middle, resulting in corruption and lost data.

The fix is simply to check whether the new truncated end is below the start of this extent record, which makes it safe to remove the full extent record. However, the call to hfs_brec_remove() can't be moved back to its previous place, since we're dropping ->tree_lock there; doing so could cause a race condition in which the cached info is invalidated, possibly corrupting the node data.

A related issue: when entering the (blk_cnt > start) block we are not holding ->tree_lock, so we break out of the loop without the lock held, yet hfs_find_exit() unlocks it. It is unclear whether someone else can take the lock under our feet, but this can cause hard-to-debug errors and premature unlocking. Even if there is no real risk, locking should always be kept in balance, so the lock is now taken just before the check.
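
To make the ordering problem concrete, here is a small self-contained model of the guard described above. It is not the real hfsplus code: the record layout and block numbers are made up, and the real fix also reorders hfs_brec_remove() and the ->tree_lock handling inside hfsplus_file_truncate(); the point is only to show why the (blk_cnt > start) check has to gate removal of the whole record.

#include <stdbool.h>
#include <stdio.h>

/* toy stand-in for the last extent record of a file: it covers blocks
 * [start, end) and always represents 8 extents */
struct toy_record {
    unsigned int start;
    unsigned int end;
};

/* decide whether a shrinking truncate to blk_cnt blocks may drop the
 * whole record */
static bool may_remove_record(const struct toy_record *rec,
                              unsigned int blk_cnt, bool fixed)
{
    if (fixed) {
        /* fixed logic: only safe when the new end is at or below the
         * record's start, i.e. none of its blocks survive */
        return blk_cnt <= rec->start;
    }
    /* buggy ordering: hfs_brec_remove() ran before the
     * (blk_cnt > start) guard, so the record was dropped regardless */
    return true;
}

int main(void)
{
    struct toy_record last = { .start = 32, .end = 64 };
    unsigned int blk_cnt = 40;  /* truncate into the middle of the record */

    printf("buggy: drop whole record = %d  -> blocks 32..39 lost\n",
           may_remove_record(&last, blk_cnt, false));
    printf("fixed: drop whole record = %d  -> blocks 32..39 kept\n",
           may_remove_record(&last, blk_cnt, true));
    return 0;
}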

Available Exploits

No exploits available for this CVE.

Related News

No news articles found for this CVE.

Affected Products

GitHub Security Advisories

Community-driven vulnerability intelligence from GitHub

⚠ Unreviewed MODERATE

GHSA-758g-j8h5-xvgg

Advisory Details

The advisory reproduces the kernel commit description quoted above ("hfsplus: prevent corruption in shrinking truncate").

CVSS Scoring

CVSS Score

5.0

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Advisory provided by GitHub Security Advisory Database. Published: February 28, 2024, Modified: November 4, 2024

References

Published: 2024-02-28T08:13:15.930Z
Last Modified: 2025-05-04T07:01:50.434Z