CVE-2024-35809 Affecting kernel-zfcpdump-core package, versions <0:4.18.0-553.22.1.el8_10


Severity

High

Based on the AlmaLinux security rating.

Threat Intelligence

EPSS
0.04% (15th percentile)

  • Snyk ID: SNYK-ALMALINUX8-KERNELZFCPDUMPCORE-8093214
  • Published: 25 Sept 2024
  • Disclosed: 24 Sept 2024

Introduced: 24 Sep 2024

CVE-2024-35809

How to fix?

Upgrade AlmaLinux:8 kernel-zfcpdump-core to version 0:4.18.0-553.22.1.el8_10 or higher.
This issue was patched in ALSA-2024:7000.

NVD Description

Note: The versions mentioned in the description apply only to the upstream kernel-zfcpdump-core package, not to the kernel-zfcpdump-core package as distributed by AlmaLinux. See How to fix? above for the fixed versions and status relevant to AlmaLinux:8.

In the Linux kernel, the following vulnerability has been resolved:

PCI/PM: Drain runtime-idle callbacks before driver removal

A race condition between the .runtime_idle() callback and the .remove() callback in the rtsx_pcr PCI driver leads to a kernel crash due to an unhandled page fault [1].

The problem is that rtsx_pci_runtime_idle() is not expected to be running after pm_runtime_get_sync() has been called, but the latter doesn't really guarantee that. It only guarantees that the suspend and resume callbacks will not be running when it returns.

However, if a .runtime_idle() callback is already running when pm_runtime_get_sync() is called, the latter will notice that the runtime PM status of the device is RPM_ACTIVE and it will return right away without waiting for the former to complete. In fact, it cannot wait for .runtime_idle() to complete because it may be called from that callback (it arguably does not make much sense to do that, but it is not strictly prohibited).

Thus in general, whoever is providing a .runtime_idle() callback needs to protect it from running in parallel with whatever code runs after pm_runtime_get_sync(). [Note that .runtime_idle() will not start after pm_runtime_get_sync() has returned, but it may continue running then if it has started earlier.]
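
To make that window concrete, the following is a hypothetical sketch of an affected driver shape (the foo_* names are illustrative, not taken from the rtsx_pcr code): the .runtime_idle() callback dereferences driver data that the removal path frees, and nothing on that path waits for an invocation of the callback that is already in flight.

    #include <linux/io.h>
    #include <linux/pci.h>
    #include <linux/pm_runtime.h>
    #include <linux/slab.h>

    struct foo_priv {
            void __iomem *regs;     /* mapped device registers */
    };

    /* Runtime-idle callback: it may still be executing after another
     * thread has already returned from pm_runtime_get_sync(). */
    static int foo_runtime_idle(struct device *dev)
    {
            struct foo_priv *priv = dev_get_drvdata(dev);

            /* Races with foo_remove() unmapping regs and freeing priv. */
            if (readl(priv->regs + 0x10) & 0x1)
                    return -EBUSY;  /* device busy, stay active */

            return 0;               /* idle: let the PM core suspend it */
    }

    /* Driver removal: the pm_runtime_get_sync() done by the PCI core
     * before calling this does not wait for an already-running
     * foo_runtime_idle(), so without an extra barrier the teardown
     * below can race with it. */
    static void foo_remove(struct pci_dev *pdev)
    {
            struct foo_priv *priv = pci_get_drvdata(pdev);

            pci_iounmap(pdev, priv->regs);
            kfree(priv);
    }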

One way to address that race condition is to call pm_runtime_barrier() after pm_runtime_get_sync() (not before it, because a nonzero value of the runtime PM usage counter is necessary to prevent runtime PM callbacks from being invoked) to wait for the .runtime_idle() callback to complete should it be running at that point.

A suitable place for doing that is pci_device_remove(), which calls pm_runtime_get_sync() before removing the driver, so it may as well call pm_runtime_barrier() subsequently. That prevents the race in question from occurring not just in the rtsx_pcr driver, but in any PCI driver providing a .runtime_idle() callback.
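
A minimal sketch of that approach, assuming the shape of the PCI core's pci_device_remove() path (condensed for illustration, not the verbatim upstream change):

    #include <linux/pci.h>
    #include <linux/pm_runtime.h>

    static void pci_device_remove(struct device *dev)
    {
            struct pci_dev *pci_dev = to_pci_dev(dev);
            struct pci_driver *drv = pci_dev->driver;

            /* Bump the usage counter and resume the device; runtime
             * suspend and resume callbacks cannot start after this
             * returns. */
            pm_runtime_get_sync(dev);

            /* A .runtime_idle() callback that started before the counter
             * was incremented may still be running, so wait for it to
             * finish before the driver's .remove() can free the data it
             * uses. */
            pm_runtime_barrier(dev);

            if (drv->remove) {
                    pm_runtime_get_noresume(dev);
                    drv->remove(pci_dev);
                    pm_runtime_put_noidle(dev);
            }

            /* Drop the reference taken by pm_runtime_get_sync() above. */
            pm_runtime_put_sync(dev);
    }

Because the barrier sits in the core removal path rather than in any individual driver, the synchronization is applied uniformly to every PCI driver that supplies a .runtime_idle() callback.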

CVSS Scores (version 3.1)