CVE-2025-37821 Affecting kernel-kdump-devel package, versions *


Severity

Low

Based on CentOS security rating.

  • Snyk ID: SNYK-CENTOS7-KERNELKDUMPDEVEL-10104734
  • Published: 9 May 2025
  • Disclosed: 8 May 2025

Introduced: 8 May 2025

CVE-2025-37821

How to fix?

There is no fixed version for Centos:7 kernel-kdump-devel.

NVD Description

Note: Versions mentioned in the description apply only to the upstream kernel-kdump-devel package and not the kernel-kdump-devel package as distributed by Centos. See How to fix? for Centos:7 relevant fixed versions and status.

In the Linux kernel, the following vulnerability has been resolved:

sched/eevdf: Fix se->slice being set to U64_MAX and resulting crash

There is a code path in dequeue_entities() that can set the slice of a sched_entity to U64_MAX, which sometimes results in a crash.

The offending case is when dequeue_entities() is called to dequeue a delayed group entity, and then the entity's parent's dequeue is delayed. In that case (a simplified model of this flow follows the list below):

  1. In the if (entity_is_task(se)) else block at the beginning of dequeue_entities(), slice is set to cfs_rq_min_slice(group_cfs_rq(se)). If the entity was delayed, then it has no queued tasks, so cfs_rq_min_slice() returns U64_MAX.
  2. The first for_each_sched_entity() loop dequeues the entity.
  3. If the entity was its parent's only child, then the next iteration tries to dequeue the parent.
  4. If the parent's dequeue needs to be delayed, then it breaks from the first for_each_sched_entity() loop without updating slice.
  5. The second for_each_sched_entity() loop sets the parent's ->slice to the saved slice, which is still U64_MAX.
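
The following is a minimal, self-contained userspace sketch of the flow just described. It is not the kernel source: the structures and the cfs_rq_min_slice() stand-in are reduced to the bare minimum needed to show how the U64_MAX sentinel from an empty queue survives the early break and ends up in the parent's se->slice.

    /*
     * Hypothetical userspace model of the flow above -- not the kernel source.
     * It keeps just enough to show how the U64_MAX sentinel from an empty
     * queue becomes the parent's se->slice.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define U64_MAX UINT64_MAX

    struct cfs_rq_model {
        unsigned int nr_queued;   /* queued tasks on this cfs_rq */
        uint64_t     min_slice;   /* minimum slice among them    */
    };

    /* Mirrors the behavior described in step 1: an empty queue reports U64_MAX. */
    static uint64_t cfs_rq_min_slice(const struct cfs_rq_model *cfs_rq)
    {
        return cfs_rq->nr_queued ? cfs_rq->min_slice : U64_MAX;
    }

    int main(void)
    {
        /* Step 1: the delayed group entity's child queue has no queued tasks. */
        struct cfs_rq_model group_q = { .nr_queued = 0, .min_slice = 0 };
        uint64_t slice = cfs_rq_min_slice(&group_q);          /* U64_MAX */

        /*
         * Steps 2-4: the first hierarchy walk dequeues the entity, then
         * breaks early because the parent's dequeue must also be delayed,
         * so `slice` is never refreshed from a non-empty cfs_rq.
         */
        bool parent_dequeue_delayed = true;
        uint64_t parent_se_slice = 0;

        if (parent_dequeue_delayed)
            parent_se_slice = slice;   /* Step 5: stale U64_MAX is copied up */

        printf("parent se->slice = %llu (U64_MAX = %llu)\n",
               (unsigned long long)parent_se_slice,
               (unsigned long long)U64_MAX);
        return 0;
    }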

This throws off subsequent calculations with potentially catastrophic results. A manifestation we saw in production was (a code sketch of steps 1-2 follows the list):

  1. In update_entity_lag(), se->slice is used to calculate limit, which ends up as a huge negative number.
  2. limit is used in se->vlag = clamp(vlag, -limit, limit). Because limit is negative, vlag > limit, so se->vlag is set to the same huge negative number.
  3. In place_entity(), se->vlag is scaled, which overflows and results in another huge (positive or negative) number.
  4. The adjusted lag is subtracted from se->vruntime, which increases or decreases se->vruntime by a huge number.
  5. pick_eevdf() calls entity_eligible()/vruntime_eligible(), which incorrectly returns false because the vruntime is so far from the other vruntimes on the queue, causing the (vruntime - cfs_rq->min_vruntime) * load calculation to overflow.
  6. Nothing appears to be eligible, so pick_eevdf() returns NULL.
  7. pick_next_entity() tries to dereference the return value of pick_eevdf() and crashes.
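
The clamp inversion in steps 1-2 can be reproduced in isolation. The sketch below is userspace-only and is not the kernel's update_entity_lag(); the limit value is simply a stand-in for the huge negative number that step 1 says is derived from se->slice (how it is computed is not modeled here). It shows why clamp(vlag, -limit, limit) pins se->vlag to the negative limit once limit itself is negative.

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* Same shape as the kernel's clamp(): min(max(val, lo), hi). */
    static int64_t clamp_s64(int64_t val, int64_t lo, int64_t hi)
    {
        if (val < lo)
            val = lo;
        if (val > hi)
            val = hi;
        return val;
    }

    int main(void)
    {
        /* Stand-in for the limit derived from se->slice == U64_MAX:
         * a huge negative number once interpreted as signed. */
        int64_t limit = INT64_MIN / 2;
        int64_t vlag  = 1000000;        /* an ordinary lag, in nanoseconds */

        /* Because limit < 0, lo (-limit) > hi (limit), so the min/max pair
         * pins the result to the negative limit -- exactly step 2 above. */
        int64_t new_vlag = clamp_s64(vlag, -limit, limit);

        printf("limit    = %" PRId64 "\n", limit);
        printf("se->vlag = %" PRId64 "\n", new_vlag);
        return 0;
    }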

Dumping the cfs_rq states from the core dumps with drgn showed tell-tale huge vruntime ranges and bogus vlag values, and I also traced se->slice being set to U64_MAX on live systems (which was usually "benign" since the rest of the runqueue needed to be in a particular state to crash).

Fix it in dequeue_entities() by always setting slice from the first non-empty cfs_rq.
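
As a rough illustration of that idea only (this is not the upstream patch), the sketch below keeps refreshing slice until the first non-empty cfs_rq on the way up the hierarchy provides a real value, so the U64_MAX sentinel from an empty queue is never what gets written into a parent's se->slice. The structures and helper are the same hypothetical stand-ins used in the earlier sketch.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define U64_MAX UINT64_MAX

    struct cfs_rq_model {
        unsigned int nr_queued;
        uint64_t     min_slice;
    };

    /* Empty queues still report U64_MAX, as described above. */
    static uint64_t cfs_rq_min_slice(const struct cfs_rq_model *cfs_rq)
    {
        return cfs_rq->nr_queued ? cfs_rq->min_slice : U64_MAX;
    }

    int main(void)
    {
        /* The delayed entity's own (empty) queue, then its ancestors' queues. */
        struct cfs_rq_model hierarchy[] = {
            { .nr_queued = 0, .min_slice = 0 },        /* empty child queue  */
            { .nr_queued = 2, .min_slice = 3000000 },  /* first non-empty rq */
            { .nr_queued = 5, .min_slice = 1500000 },
        };
        uint64_t slice = U64_MAX;

        /* Fixed idea: take slice from the first non-empty cfs_rq encountered,
         * so the U64_MAX sentinel can never be propagated to a parent. */
        for (size_t i = 0; i < sizeof(hierarchy) / sizeof(hierarchy[0]); i++) {
            if (slice == U64_MAX)
                slice = cfs_rq_min_slice(&hierarchy[i]);
        }

        printf("slice written to parent = %llu ns\n", (unsigned long long)slice);
        return 0;
    }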

CVSS Base Scores

version 3.1