CVE-2024-26960 Affecting kernel-modules package, versions <0:4.18.0-553.16.1.el8_10
- Snyk ID SNYK-RHEL8-KERNELMODULES-6779797
- published 2 May 2024
- disclosed 1 May 2024
Introduced: 1 May 2024
How to fix?
Upgrade RHEL:8 kernel-modules to version 0:4.18.0-553.16.1.el8_10 or higher.
This issue was patched in RHSA-2024:5101.
NVD Description
Note: Versions mentioned in the description apply only to the upstream kernel-modules package and not the kernel-modules package as distributed by RHEL. See How to fix? above for RHEL:8 relevant fixed versions and status.
In the Linux kernel, the following vulnerability has been resolved:
mm: swap: fix race between free_swap_and_cache() and swapoff()
There was previously a theoretical window where swapoff() could run and teardown a swap_info_struct while a call to free_swap_and_cache() was running in another thread. This could cause, amongst other bad possibilities, swap_page_trans_huge_swapped() (called by free_swap_and_cache()) to access the freed memory for swap_map.
This is a theoretical problem and I haven't been able to provoke it from a test case. But there has been agreement based on code review that this is possible (see link below).
Fix it by using get_swap_device()/put_swap_device(), which will stall swapoff(). There was an extra check in _swap_info_get() to confirm that the swap entry was not free. This isn't present in get_swap_device() because it doesn't make sense in general due to the race between getting the reference and swapoff. So I've added an equivalent check directly in free_swap_and_cache().
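To illustrate the shape of the fix, below is a simplified sketch of free_swap_and_cache() with the swap device pinned across the call. It is reconstructed from the description above rather than copied from the upstream diff, so the exact structure and the TRY_RECLAIM_UNMAP argument should be treated as approximate; refer to the commits in the references for the authoritative change.

    /* Simplified sketch of the patched free_swap_and_cache() (not the exact upstream code). */
    int free_swap_and_cache(swp_entry_t entry)
    {
            struct swap_info_struct *p;
            unsigned char count;

            if (non_swap_entry(entry))
                    return 1;

            /*
             * Pin the swap device: swapoff() is stalled until the matching
             * put_swap_device(), so p and p->swap_map stay valid throughout.
             */
            p = get_swap_device(entry);
            if (p) {
                    /*
                     * get_swap_device() alone does not prove the entry is still
                     * in use (the pin can race with swapoff), hence the extra
                     * check equivalent to the one done by _swap_info_get().
                     */
                    if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
                            put_swap_device(p);
                            return 0;
                    }

                    count = __swap_entry_free(p, entry);
                    if (count == SWAP_HAS_CACHE &&
                        !swap_page_trans_huge_swapped(p, entry))
                            __try_to_reclaim_swap(p, swp_offset(entry),
                                                  TRY_RECLAIM_UNMAP);
                    put_swap_device(p);
            }
            return p != NULL;
    }

The key point is that swap_page_trans_huge_swapped() and __try_to_reclaim_swap() now run strictly between get_swap_device() and put_swap_device(), so swapoff() cannot free swap_map underneath them.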
Details of how to provoke one possible issue (thanks to David Hildenbrand for deriving this):
--8<-----
__swap_entry_free() might be the last user and result in "count == SWAP_HAS_CACHE".
swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
So the question is: could someone reclaim the folio and turn si->inuse_pages==0, before we completed swap_page_trans_huge_swapped()?
Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are still referenced by swap entries.
Process 1 still references subpage 0 via swap entry. Process 2 still references subpage 1 via swap entry.
Process 1 quits. Calls free_swap_and_cache(). -> count == SWAP_HAS_CACHE [then, preempted in the hypervisor etc.]
Process 2 quits. Calls free_swap_and_cache(). -> count == SWAP_HAS_CACHE
Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls __try_to_reclaim_swap().
__try_to_reclaim_swap() -> folio_free_swap() -> delete_from_swap_cache() -> put_swap_folio() -> free_swap_slot() -> swapcache_free_entries() -> swap_entry_free() -> swap_range_free() -> ... WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
What stops swapoff from succeeding after process 2 reclaimed the swap cache but before process 1 finished its call to swap_page_trans_huge_swapped()?
--8<-----
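For reference, here is a simplified sketch of the pre-fix free_swap_and_cache() with the window from the scenario above marked in comments. It is reconstructed from the description in this advisory, not quoted from the old source, so details may differ from the code that actually shipped.

    /* Simplified sketch of the pre-fix path, annotated with the race window. */
    int free_swap_and_cache(swp_entry_t entry)
    {
            struct swap_info_struct *p;
            unsigned char count;

            if (non_swap_entry(entry))
                    return 1;

            p = _swap_info_get(entry);      /* nothing pins p against swapoff() */
            if (p) {
                    count = __swap_entry_free(p, entry);    /* may leave count == SWAP_HAS_CACHE */
                    /*
                     * Window: another thread can now reclaim the folio, drive
                     * si->inuse_pages to 0, and let swapoff() tear down p and
                     * free p->swap_map ...
                     */
                    if (count == SWAP_HAS_CACHE &&
                        !swap_page_trans_huge_swapped(p, entry))    /* ... so this may read freed swap_map */
                            __try_to_reclaim_swap(p, swp_offset(entry),
                                                  TRY_RECLAIM_UNMAP);
            }
            return p != NULL;
    }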
References
- https://access.redhat.com/security/cve/CVE-2024-26960
- https://git.kernel.org/stable/c/0f98f6d2fb5fad00f8299b84b85b6bc1b6d7d19a
- https://git.kernel.org/stable/c/1ede7f1d7eed1738d1b9333fd1e152ccb450b86a
- https://git.kernel.org/stable/c/2da5568ee222ce0541bfe446a07998f92ed1643e
- https://git.kernel.org/stable/c/363d17e7f7907c8e27a9e86968af0eaa2301787b
- https://git.kernel.org/stable/c/3ce4c4c653e4e478ecb15d3c88e690f12cbf6b39
- https://git.kernel.org/stable/c/82b1c07a0af603e3c47b906c8e991dc96f01688e
- https://git.kernel.org/stable/c/d85c11c97ecf92d47a4b29e3faca714dc1f18d0d
- https://lists.debian.org/debian-lts-announce/2024/06/msg00017.html