CVE-2025-22053 Affecting kernel-debug-modules-core package, versions *


Severity

medium

Based on CentOS security rating.

  • Snyk ID: SNYK-CENTOS9-KERNELDEBUGMODULESCORE-9733491
  • Published: 17 Apr 2025
  • Disclosed: 16 Apr 2025

Introduced: 16 Apr 2025

CVE-2025-22053

How to fix?

There is no fixed version for Centos:9 kernel-debug-modules-core.

NVD Description

Note: Versions mentioned in the description apply only to the upstream kernel-debug-modules-core package and not the kernel-debug-modules-core package as distributed by Centos. See How to fix? for Centos:9 relevant fixed versions and status.

In the Linux kernel, the following vulnerability has been resolved:

net: ibmveth: make veth_pool_store stop hanging

v2:

  • Created a single error handling unlock and exit in veth_pool_store
  • Greatly expanded commit message with previous explanatory-only text

Summary: Use rtnl_mutex to synchronize veth_pool_store with itself, ibmveth_close and ibmveth_open, preventing multiple calls in a row to napi_disable.
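The mechanism of the fix is small: hold the RTNL lock for the whole store operation so only one thread at a time can drive the close/open sequence. A minimal sketch of that shape, not the upstream diff: the pool-update details are paraphrased, and pool_to_adapter() is a hypothetical stand-in for however the driver recovers the adapter from the kobject.

static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
                               const char *buf, size_t count)
{
	/* pool_to_adapter() is hypothetical; the real handler derives the
	 * adapter from the buffer-pool kobject. */
	struct ibmveth_adapter *adapter = pool_to_adapter(kobj);
	long value;
	ssize_t rc = count;

	if (kstrtol(buf, 10, &value))
		return -EINVAL;

	rtnl_lock(); /* serialize with itself, ibmveth_open and ibmveth_close */

	if (netif_running(adapter->netdev)) {
		ibmveth_close(adapter->netdev);  /* napi_disable() runs once per close */
		/* ... apply 'value' to the selected buffer pool ... */
		if (ibmveth_open(adapter->netdev)) {
			rc = -EIO;
			goto unlock;  /* the single error-handling unlock and exit from v2 */
		}
	}

unlock:
	rtnl_unlock();
	return rc;
}

Because every caller now funnels through rtnl_lock(), two concurrent writers can no longer interleave a second napi_disable() before the matching napi_enable() has run.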

Background: Two (or more) threads could call veth_pool_store through writing to /sys/devices/vio/30000002/pool*/*. You can do this easily with a little shell script. This causes a hang.
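The shell-script race translates directly into a small userspace program. A hedged C equivalent, assuming the sysfs paths from the commit message (the device id 30000002 matches the log below) and an ibmveth adapter present on the system:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Repeatedly toggle one pool's 'active' attribute; each write may trigger
 * ibmveth_close()/ibmveth_open() inside veth_pool_store(). */
static void hammer(const char *path)
{
	for (int i = 0; i < 1000; i++) {
		int fd = open(path, O_WRONLY);
		if (fd < 0) {
			perror(path);
			return;
		}
		if (write(fd, (i & 1) ? "1" : "0", 1) < 0)
			perror("write");
		close(fd);
	}
}

int main(void)
{
	/* Paths from the commit message; adjust the device id to your system. */
	const char *a = "/sys/devices/vio/30000002/pool0/active";
	const char *b = "/sys/devices/vio/30000002/pool1/active";

	if (fork() == 0) {   /* child races against the parent */
		hammer(a);
		_exit(0);
	}
	hammer(b);
	wait(NULL);          /* on an unfixed kernel, one writer hangs here */
	return 0;
}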

I configured LOCKDEP, compiled ibmveth.c with DEBUG, and built a new kernel. I ran this test again and saw:

Setting pool0/active to 0
Setting pool1/active to 1
[   73.911067][ T4365] ibmveth 30000002 eth0: close starting
Setting pool1/active to 1
Setting pool1/active to 0
[   73.911367][ T4366] ibmveth 30000002 eth0: close starting
[   73.916056][ T4365] ibmveth 30000002 eth0: close complete
[   73.916064][ T4365] ibmveth 30000002 eth0: open starting
[  110.808564][  T712] systemd-journald[712]: Sent WATCHDOG=1 notification.
[  230.808495][  T712] systemd-journald[712]: Sent WATCHDOG=1 notification.
[  243.683786][  T123] INFO: task stress.sh:4365 blocked for more than 122 seconds.
[  243.683827][  T123]       Not tainted 6.14.0-01103-g2df0c02dab82-dirty #8
[  243.683833][  T123] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  243.683838][  T123] task:stress.sh       state:D stack:28096 pid:4365  tgid:4365  ppid:4364   task_flags:0x400040 flags:0x00042000
[  243.683852][  T123] Call Trace:
[  243.683857][  T123] [c00000000c38f690] [0000000000000001] 0x1 (unreliable)
[  243.683868][  T123] [c00000000c38f840] [c00000000001f908] __switch_to+0x318/0x4e0
[  243.683878][  T123] [c00000000c38f8a0] [c000000001549a70] __schedule+0x500/0x12a0
[  243.683888][  T123] [c00000000c38f9a0] [c00000000154a878] schedule+0x68/0x210
[  243.683896][  T123] [c00000000c38f9d0] [c00000000154ac80] schedule_preempt_disabled+0x30/0x50
[  243.683904][  T123] [c00000000c38fa00] [c00000000154dbb0] __mutex_lock+0x730/0x10f0
[  243.683913][  T123] [c00000000c38fb10] [c000000001154d40] napi_enable+0x30/0x60
[  243.683921][  T123] [c00000000c38fb40] [c000000000f4ae94] ibmveth_open+0x68/0x5dc
[  243.683928][  T123] [c00000000c38fbe0] [c000000000f4aa20] veth_pool_store+0x220/0x270
[  243.683936][  T123] [c00000000c38fc70] [c000000000826278] sysfs_kf_write+0x68/0xb0
[  243.683944][  T123] [c00000000c38fcb0] [c0000000008240b8] kernfs_fop_write_iter+0x198/0x2d0
[  243.683951][  T123] [c00000000c38fd00] [c00000000071b9ac] vfs_write+0x34c/0x650
[  243.683958][  T123] [c00000000c38fdc0] [c00000000071bea8] ksys_write+0x88/0x150
[  243.683966][  T123] [c00000000c38fe10] [c0000000000317f4] system_call_exception+0x124/0x340
[  243.683973][  T123] [c00000000c38fe50] [c00000000000d05c] system_call_vectored_common+0x15c/0x2ec
...
[  243.684087][  T123] Showing all locks held in the system:
[  243.684095][  T123] 1 lock held by khungtaskd/123:
[  243.684099][  T123]  #0: c00000000278e370 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x50/0x248
[  243.684114][  T123] 4 locks held by stress.sh/4365:
[  243.684119][  T123]  #0: c00000003a4cd3f8 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x88/0x150
[  243.684132][  T123]  #1: c000000041aea888 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x154/0x2d0
[  243.684143][  T123]  #2: c0000000366fb9a8 (kn->active#64){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x160/0x2d0
[  243.684155][  T123]  #3: c000000035ff4cb8 (&dev->lock){+.+.}-{3:3}, at: napi_enable+0x30/0x60
[  243.684166][  T123] 5 locks held by stress.sh/4366:
[  243.684170][  T123]  #0: c00000003a4cd3f8 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x88/0x150
[  243.

---truncated---
