Race Condition Affecting kernel-debug-core package, versions *


Severity

medium

Based on Red Hat Enterprise Linux security rating

Threat Intelligence

  • EPSS: 0.04% (6th percentile)

  • Snyk ID SNYK-RHEL8-KERNELDEBUGCORE-8283469
  • published 23 Oct 2024
  • disclosed 21 Oct 2024

How to fix?

There is no fixed version for RHEL:8 kernel-debug-core.

NVD Description

Note: Versions mentioned in the description apply only to the upstream kernel-debug-core package and not the kernel-debug-core package as distributed by RHEL. See How to fix? for RHEL:8 relevant fixed versions and status.

In the Linux kernel, the following vulnerability has been resolved:

vfs: fix race between evict_inodes() and find_inode()&iput()

Hi, all

Recently I noticed a bug[1] in btrfs; after digging into it, I believe it's a race in the VFS.

Let's assume there's an inode (i.e. ino 261) with i_count 1 on which iput() is called, and there's a concurrent thread calling generic_shutdown_super().

cpu0:                                  cpu1:
iput() // i_count is 1
  ->spin_lock(inode)
  ->dec i_count to 0
  ->iput_final()                       generic_shutdown_super()
    ->__inode_add_lru()                  ->evict_inodes()
      // cause some reason[2]              ->if (atomic_read(inode->i_count)) continue;
      // return before                     // inode 261 passed the above check
      // list_lru_add_obj()                // and then schedule out
  ->spin_unlock()
// note here: the inode 261
// was still at sb list and hash list,
// and I_FREEING|I_WILL_FREE had not been set

btrfs_iget()
  // after some function calls
  ->find_inode()
    // found the above inode 261
    ->spin_lock(inode)
    // check I_FREEING|I_WILL_FREE
    // and passed
    ->__iget()
    ->spin_unlock(inode)               // schedule back
                                       ->spin_lock(inode)
                                       // check (I_NEW|I_FREEING|I_WILL_FREE) flags,
                                       // passed and set I_FREEING
iput()                                 ->spin_unlock(inode)
  ->spin_lock(inode)                     ->evict()
  // dec i_count to 0
  ->iput_final()
  ->spin_unlock()
  ->evict()

Now we have two threads simultaneously evicting the same inode, which may trigger the BUG(inode->i_state & I_CLEAR) assertion in both clear_inode() and iput().
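The window exists because the eviction loop's first i_count test is done without i_lock held, and the lock is only taken afterwards. Below is a simplified, illustrative C sketch of that pre-fix pattern (not verbatim upstream or RHEL code; the identifiers mirror fs/inode.c):

    /*
     * Illustrative sketch of the pre-fix evict_inodes() loop body
     * (assumes the caller holds sb->s_inode_list_lock).
     */
    list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
            /* Lockless check: inode 261 already has i_count == 0, so it passes. */
            if (atomic_read(&inode->i_count))
                    continue;

            /*
             * Race window: before i_lock is taken below, another thread can
             * look the inode up via find_inode(), pass the I_FREEING |
             * I_WILL_FREE test, and raise i_count again with __iget().
             */
            spin_lock(&inode->i_lock);
            if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
                    spin_unlock(&inode->i_lock);
                    continue;
            }

            /* I_FREEING is now set on an inode that is in use again. */
            inode->i_state |= I_FREEING;
            inode_lru_list_del(inode);
            spin_unlock(&inode->i_lock);
            list_add(&inode->i_lru, &dispose);
    }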

To fix the bug, recheck inode->i_count after taking i_lock. In most scenarios the first, lockless check is already sufficient, so keeping it reduces the spin_lock() overhead.
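A minimal sketch of that recheck, modeled on the fix described above (illustrative, not the exact upstream or RHEL patch): the cheap lockless test stays in place, and a second atomic_read() under i_lock skips any inode that was re-referenced in the meantime.

    /* Cheap lockless filter kept for the common case. */
    if (atomic_read(&inode->i_count))
            continue;

    spin_lock(&inode->i_lock);
    /*
     * Recheck under i_lock: lookups take i_lock before calling __iget()
     * (as in the trace above), so a non-zero i_count here means the
     * inode has been re-referenced and must not be evicted.
     */
    if (atomic_read(&inode->i_count)) {
            spin_unlock(&inode->i_lock);
            continue;
    }
    if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
            spin_unlock(&inode->i_lock);
            continue;
    }
    inode->i_state |= I_FREEING;

Because the flag update and the recheck both happen with i_lock held, they are serialized against find_inode()/__iget(), which closes the window without adding locking cost for inodes that fail the first check.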

If there is any misunderstanding, please let me know, thanks.

[2]: The reason might be that 1. SB_ACTIVE had been removed, or 2. mapping_shrinkable() returned false, when I reproduced the bug.
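For context on [2]: the early return happens in the LRU-add path, which declines to put a zero-refcount inode on the LRU when the superblock is no longer SB_ACTIVE or when mapping_shrinkable() says the mapping should not be shrunk. A simplified sketch of those checks (exact conditions and the function signature vary across kernel versions; this is illustrative only):

    /* Simplified sketch of why the LRU-add path can return early. */
    static void __inode_add_lru(struct inode *inode)
    {
            if (atomic_read(&inode->i_count))
                    return;
            if (!(inode->i_sb->s_flags & SB_ACTIVE))        /* reason 1 in [2] */
                    return;
            if (!mapping_shrinkable(&inode->i_data))        /* reason 2 in [2] */
                    return;

            list_lru_add_obj(&inode->i_sb->s_inode_lru, &inode->i_lru);
    }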

CVSS Scores

version 3.1

NVD

4.7 medium
  • Attack Vector (AV)
    Local
  • Attack Complexity (AC)
    High
  • Privileges Required (PR)
    Low
  • User Interaction (UI)
    None
  • Scope (S)
    Unchanged
  • Confidentiality (C)
    None
  • Integrity (I)
    None
  • Availability (A)
    High

Red Hat

5.5 medium