CVE-2024-53058 Affecting kernel-default-livepatch-devel package, versions <5.14.21-150500.55.88.1


Severity

medium

Based on SUSE Linux Enterprise Server security rating.

Threat Intelligence

EPSS: 0.05% (18th percentile)

  • Snyk ID: SNYK-SLES155-KERNELDEFAULTLIVEPATCHDEVEL-8528047
  • Published: 18 Dec 2024
  • Disclosed: 17 Dec 2024

Introduced: 17 Dec 2024

CVE-2024-53058

How to fix?

Upgrade SLES:15.5 kernel-default-livepatch-devel to version 5.14.21-150500.55.88.1 or higher.

NVD Description

Note: Versions mentioned in the description apply only to the upstream kernel-default-livepatch-devel package and not the kernel-default-livepatch-devel package as distributed by SLES. See How to fix? for SLES:15.5 relevant fixed versions and status.

In the Linux kernel, the following vulnerability has been resolved:

net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data

If the non-paged data of an SKB carries both the protocol header and the protocol payload to be transmitted on a platform where the DMA AXI address width is configured to 40-bit/48-bit, or if the size of the non-paged data is bigger than TSO_MAX_BUFF_SIZE on a platform where the DMA AXI address width is configured to 32-bit, then this SKB requires at least two DMA transmit descriptors to serve it.
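As a rough illustration of that constraint (a sketch only, not driver code; addr_width, head_len and nr_head_desc are placeholder names invented for this example), the descriptor count for the non-paged data follows from the address-width configuration and TSO_MAX_BUFF_SIZE:

/* Illustrative sketch: estimate how many TX descriptors the non-paged
 * (linear) part of the SKB will occupy. */
unsigned int head_len = skb_headlen(skb);   /* bytes of non-paged data */
unsigned int nr_head_desc;

if (addr_width > 32)
	/* 40/48-bit DMA: header and payload are placed in separate
	 * descriptors, so at least two are used. */
	nr_head_desc = 2;
else
	/* 32-bit DMA: the data is split into TSO_MAX_BUFF_SIZE chunks,
	 * so more than one descriptor is needed once head_len exceeds
	 * that limit. */
	nr_head_desc = DIV_ROUND_UP(head_len, TSO_MAX_BUFF_SIZE);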

For example, suppose three descriptors are allocated to split one DMA buffer mapped from one piece of non-paged data: dma_desc[N + 0], dma_desc[N + 1], dma_desc[N + 2]. Three elements of tx_q->tx_skbuff_dma[] are then allocated to hold extra information to be reused in stmmac_tx_clean(): tx_q->tx_skbuff_dma[N + 0], tx_q->tx_skbuff_dma[N + 1], tx_q->tx_skbuff_dma[N + 2]. The field of interest is tx_q->tx_skbuff_dma[entry].buf, which holds the DMA buffer address returned by the DMA mapping call. stmmac_tx_clean() will try to unmap the DMA buffer only if tx_q->tx_skbuff_dma[entry].buf is a valid buffer address.
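The cleanup-side check referred to here can be sketched roughly as follows (a simplified illustration only; the real stmmac_tx_clean() does considerably more, and sketch_tx_info / sketch_tx_unmap are made-up names for this example):

#include <linux/types.h>
#include <linux/dma-mapping.h>

/* Reduced view of one tx_skbuff_dma[] entry: only the fields that
 * matter for the unmap decision discussed above. */
struct sketch_tx_info {
	dma_addr_t buf;        /* address returned by the DMA mapping call, or 0 */
	unsigned int len;
	bool map_as_page;
};

/* Unmap ONLY IF a buffer address was recorded for this entry. */
static void sketch_tx_unmap(struct device *dev, struct sketch_tx_info *info)
{
	if (!info->buf)
		return;        /* nothing was mapped against this entry */

	if (info->map_as_page)
		dma_unmap_page(dev, info->buf, info->len, DMA_TO_DEVICE);
	else
		dma_unmap_single(dev, info->buf, info->len, DMA_TO_DEVICE);

	info->buf = 0;
}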

The expected behavior, which saves the DMA buffer address of this non-paged data to tx_q->tx_skbuff_dma[entry].buf, is:

tx_q->tx_skbuff_dma[N + 0].buf = NULL;
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();

Unfortunately, the current code misbehaves like this:

tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = NULL;

On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is obviously a valid buffer address, so the DMA buffer is unmapped immediately. In the rare case that the DMA engine has not yet finished the pending dma_desc[N + 1] and dma_desc[N + 2], things go horribly wrong: the DMA engine accesses an unmapped/unreferenced memory region, corrupted data is transmitted, or an IOMMU fault is triggered.

In contrast, the for-loop that maps SKB fragments behaves perfectly as expected, and that is how the driver should handle both non-paged data and paged frags.
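For reference, that frag-mapping pattern can be sketched like this (a loose sketch with the descriptor-filling step elided; tx_q, priv and cur_tx follow the driver's naming, but the exact code here is an assumption, not the upstream source):

/* Sketch: each paged fragment is mapped once, the descriptors for it
 * are filled, and the mapping is recorded on the LAST entry used for
 * that fragment, so cleanup unmaps it only after the whole fragment
 * has been transmitted. */
for (i = 0; i < nfrags; i++) {
	const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

	des = skb_frag_dma_map(priv->device, frag, 0,
			       skb_frag_size(frag), DMA_TO_DEVICE);
	if (dma_mapping_error(priv->device, des))
		goto dma_map_err;

	/* ...fill however many descriptors this fragment needs,
	 * advancing tx_q->cur_tx past all of them... */

	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
	tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_frag_size(frag);
	tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true;
}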

This patch corrects the DMA map/unmap sequence by fixing the array index for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA buffer address.
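In those terms, the corrected handling of the non-paged data can be approximated as below (a sketch of the idea only, not the actual upstream patch): map the linear data once, fill the descriptors for the split, and only then record the mapping on the entry that will be closed last.

des = dma_map_single(priv->device, skb->data, skb_headlen(skb),
		     DMA_TO_DEVICE);
if (dma_mapping_error(priv->device, des))
	goto dma_map_err;

/* ...fill dma_desc[N + 0] .. dma_desc[N + 2] for the split,
 * advancing tx_q->cur_tx past all of them... */

/* Record the mapping on the LAST descriptor of the split, so
 * stmmac_tx_clean() unmaps it only once every descriptor carrying
 * this buffer has been closed by the DMA engine. */
tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb);
tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;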

Tested and verified on DWXGMAC CORE 3.20a

CVSS Scores

version 3.1