Incomplete Cleanup Affecting kernel-devel package, versions *


Severity

Recommended severity: 0.0 (low) on a 0-10 scale, based on the CentOS security rating.

Threat Intelligence

EPSS: 0.02% (4th percentile)

  • Snyk ID: SNYK-CENTOS6-KERNELDEVEL-14242213
  • Published: 9 Dec 2025
  • Disclosed: 8 Dec 2025

Introduced: 8 Dec 2025

CVE-2025-40303
CWE-459

How to fix?

There is no fixed version for Centos:6 kernel-devel.

NVD Description

Note: Versions mentioned in the description apply only to the upstream kernel-devel package and not to the kernel-devel package as distributed by Centos. See How to fix? above for the fixed versions and status relevant to Centos:6.

In the Linux kernel, the following vulnerability has been resolved:

btrfs: ensure no dirty metadata is written back for an fs with errors

[BUG] During development of a minor feature (making sure all btrfs_bio::end_io() callbacks are called in task context), I noticed a crash in generic/388, where metadata writes triggered new work items after btrfs_stop_all_workers().

It turns out this can happen even without any code modification: simply using RAID5 for metadata with the same workload from generic/388 triggers the use-after-free.

[CAUSE] If btrfs hits an error, the fs is marked as errored and no new transaction is allowed, so the metadata is in a frozen state.

But some metadata modifications made before that error are still sitting in the btree inode's page cache.

Since there will be no real transaction commit, all those dirty folios are kept as-is in the page cache, and they cannot be invalidated by the invalidate_inode_pages2() call inside close_ctree(), because they are dirty.

Finally, after btrfs_stop_all_workers(), we call iput() on the btree inode, which triggers writeback of that dirty metadata.

If the fs is using RAID56 metadata, this writeback triggers a read-modify-write (RMW) and queues new work into rmw_workers, which has already been stopped, causing a warning from queue_work() and a use-after-free.
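
To make the ordering problem concrete, the following is a minimal C-style sketch of the shutdown sequence described above. It is an illustration only, not the actual close_ctree() code; the function name and the exact call sites are assumptions based on the commit message.

    /*
     * Illustrative sketch of the shutdown ordering described above.
     * Not the real close_ctree(); the exact calls and their arguments
     * are assumptions based on the commit message.
     */
    static void close_ctree_sketch(struct btrfs_fs_info *fs_info)
    {
            /*
             * Dirty btree folios survive this call, because
             * invalidate_inode_pages2() will not drop pages that are
             * still dirty.
             */
            invalidate_inode_pages2(fs_info->btree_inode->i_mapping);

            /* All workqueues, including rmw_workers, are stopped and freed. */
            btrfs_stop_all_workers(fs_info);

            /*
             * Dropping the last reference to the btree inode triggers
             * writeback of the remaining dirty metadata.  With RAID56
             * metadata this needs an RMW cycle, so queue_work() runs
             * against the already-freed rmw_workers: a warning plus a
             * use-after-free.
             */
            iput(fs_info->btree_inode);
    }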

[FIX] Add special handling to write_one_eb(): if the fs is already in an error state, immediately mark the bbio as failed instead of actually submitting it.

Then, during close_ctree(), iput() simply discards all those dirty tree blocks without writing them back, so no new jobs are queued onto workqueues that have already been stopped and freed.

The extra discard in write_one_eb() also acts as a safety net. For example, if the transaction abort is triggered by an extent/free space tree corruption, then, since the extent/free space tree is already corrupted, some tree blocks may have been allocated where they should not be (overwriting existing tree blocks). In that case, writing them back would corrupt the fs further.
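
A hedged sketch of the special handling described in [FIX]: if the filesystem is already in an error state, write_one_eb() marks the bbio as failed and completes it instead of submitting it. The error check (BTRFS_FS_ERROR()) and the helper used to fail the bbio (btrfs_bio_end_io() with BLK_STS_IOERR) are assumptions drawn from the btrfs code base, not a quote of the actual patch.

    /*
     * Sketch of the [FIX] behaviour; not the actual upstream diff.
     * The error-state check and the way the bbio is failed are
     * assumptions based on the commit message.
     */
    static void write_one_eb_sketch(struct extent_buffer *eb,
                                    struct btrfs_bio *bbio)
    {
            struct btrfs_fs_info *fs_info = eb->fs_info;

            if (unlikely(BTRFS_FS_ERROR(fs_info))) {
                    /*
                     * The fs already hit an error: do not write back any
                     * dirty metadata.  Complete the bbio as failed so no
                     * new work is queued on workqueues (e.g. rmw_workers)
                     * that may already be stopped and freed.
                     */
                    btrfs_bio_end_io(bbio, BLK_STS_IOERR);
                    return;
            }

            /* ... otherwise submit the metadata write as before ... */
    }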

CVSS Base Scores (version 3.1)