CVE-2023-52587 Affecting kernel-source-azure package, versions <5.14.21-150500.33.48.1


Severity

medium

Based on SUSE Linux Enterprise Server security rating.

Threat Intelligence

EPSS
0.04% (15th percentile)

  • Snyk ID: SNYK-SLES155-KERNELSOURCEAZURE-6808309
  • Published: 4 May 2024
  • Disclosed: 3 May 2024

Introduced: 3 May 2024

CVE-2023-52587

How to fix?

Upgrade SLES:15.5 kernel-source-azure to version 5.14.21-150500.33.48.1 or higher.

NVD Description

Note: Versions mentioned in the description apply only to the upstream kernel-source-azure package and not the kernel-source-azure package as distributed by SLES. See How to fix? for SLES:15.5 relevant fixed versions and status.

In the Linux kernel, the following vulnerability has been resolved:

IB/ipoib: Fix mcast list locking

Releasing the priv->lock while iterating the priv->multicast_list in ipoib_mcast_join_task() opens a window for ipoib_mcast_dev_flush() to remove the items while in the middle of iteration. If the mcast is removed while the lock was dropped, the for loop spins forever, resulting in a hard lockup (as was reported on the RHEL 4.18.0-372.75.1.el8_6 kernel):

Task A (kworker/u72:2 below)       | Task B (kworker/u72:0 below)
-----------------------------------+-----------------------------------
ipoib_mcast_join_task(work)        | ipoib_ib_dev_flush_light(work)
  spin_lock_irq(&priv->lock)       | __ipoib_ib_dev_flush(priv, ...)
  list_for_each_entry(mcast,       | ipoib_mcast_dev_flush(dev = priv->dev)
      &priv->multicast_list, list) |
    ipoib_mcast_join(dev, mcast)   |
      spin_unlock_irq(&priv->lock) |
                                   |   spin_lock_irqsave(&priv->lock, flags)
                                   |   list_for_each_entry_safe(mcast, tmcast,
                                   |                  &priv->multicast_list, list)
                                   |     list_del(&mcast->list);
                                   |     list_add_tail(&mcast->list, &remove_list)
                                   |   spin_unlock_irqrestore(&priv->lock, flags)
      spin_lock_irq(&priv->lock)   |
                                   |   ipoib_mcast_remove_list(&remove_list)

(Here, mcast is no longer on the   |     list_for_each_entry_safe(mcast, tmcast,
priv->multicast_list and we keep   |                    remove_list, list)
spinning on the remove_list of     |   >>> wait_for_completion(&mcast->done)
the other thread which is blocked  |
and the list is still valid on     |
its stack.)                        |
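
To make the hang mechanism concrete: list_for_each_entry() terminates only when the iteration cursor arrives back at the list head it started from. Once Task B moves mcast onto its on-stack remove_list, the cursor follows mcast's new links and cycles through remove_list forever, never reaching &priv->multicast_list again. The following user-space C sketch simulates this with a hypothetical reimplementation of the kernel's intrusive-list helpers; the names mirror the trace above, but none of it is actual ipoib code:

/* Hypothetical user-space simulation of the spin described above.
 * The kernel's list helpers are reimplemented here for illustration. */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_del(struct list_head *e)
{
        e->next->prev = e->prev;
        e->prev->next = e->next;
}

static void list_add_tail(struct list_head *e, struct list_head *head)
{
        e->prev = head->prev;
        e->next = head;
        head->prev->next = e;
        head->prev = e;
}

int main(void)
{
        struct list_head multicast_list = LIST_HEAD_INIT(multicast_list);
        struct list_head remove_list    = LIST_HEAD_INIT(remove_list);
        struct list_head mcast;
        int steps = 0;

        list_add_tail(&mcast, &multicast_list);

        /* Open-coded list_for_each: terminates only when the cursor
         * returns to &multicast_list. */
        for (struct list_head *pos = multicast_list.next;
             pos != &multicast_list; pos = pos->next) {
                if (pos == &mcast && steps == 0) {
                        /* Task B runs in the window where Task A dropped
                         * priv->lock: the entry migrates to remove_list. */
                        list_del(&mcast);
                        list_add_tail(&mcast, &remove_list);
                }
                if (++steps > 8) {      /* a real kernel spins forever */
                        puts("cursor is cycling through remove_list; "
                             "it never reaches &multicast_list again");
                        break;
                }
        }
        return 0;
}

This also matches the crash findings quoted below: priv->multicast_list itself reads as empty while the worker keeps spinning.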

Fix this by keeping the lock held and changing the allocation to GFP_ATOMIC to prevent eventual sleeps. Unfortunately, we could not reproduce the lockup and confirm this fix, but based on the code review I think this fix should address such lockups.
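
Sketched in kernel-style C, the fixed iteration looks roughly like this (a paraphrase of the pattern the commit message describes, not the verbatim upstream patch; declarations, error handling, and the join internals are elided):

/* Paraphrase of the fixed loop in ipoib_mcast_join_task(); not the
 * verbatim upstream patch. */
spin_lock_irq(&priv->lock);
list_for_each_entry(mcast, &priv->multicast_list, list) {
        /*
         * Previously the code did:
         *     spin_unlock_irq(&priv->lock);
         *     ipoib_mcast_join(dev, mcast);   // could sleep (GFP_KERNEL)
         *     spin_lock_irq(&priv->lock);
         * which let ipoib_mcast_dev_flush() move mcast onto its
         * remove_list mid-iteration. Now the lock stays held and the
         * join path allocates with GFP_ATOMIC instead.
         */
        ipoib_mcast_join(dev, mcast);
}
spin_unlock_irq(&priv->lock);

Holding priv->lock across the whole walk means ipoib_mcast_dev_flush() can no longer unlink the current entry mid-iteration, and GFP_ATOMIC keeps the allocation legal while the spinlock is held.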

crash> bc 31
PID: 747    TASK: ff1c6a1a007e8000  CPU: 31  COMMAND: "kworker/u72:2"

[exception RIP: ipoib_mcast_join_task+0x1b1]
RIP: ffffffffc0944ac1  RSP: ff646f199a8c7e00  RFLAGS: 00000002
RAX: 0000000000000000  RBX: ff1c6a1a04dc82f8  RCX: 0000000000000000
                             work (&priv->mcast_task{,.work})
RDX: ff1c6a192d60ac68  RSI: 0000000000000286  RDI: ff1c6a1a04dc8000
       &mcast->list
RBP: ff646f199a8c7e90   R8: ff1c699980019420   R9: ff1c6a1920c9a000
R10: ff646f199a8c7e00  R11: ff1c6a191a7d9800  R12: ff1c6a192d60ac00
                                                     mcast
R13: ff1c6a1d82200000  R14: ff1c6a1a04dc8000  R15: ff1c6a1a04dc82d8
       dev                    priv (&priv->lock)     &priv->multicast_list (aka head)
ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018

--- <NMI exception stack> ---
 #5 [ff646f199a8c7e00] ipoib_mcast_join_task+0x1b1 at ffffffffc0944ac1 [ib_ipoib]
 #6 [ff646f199a8c7e98] process_one_work+0x1a7 at ffffffff9bf10967

crash> rx ff646f199a8c7e68
ff646f199a8c7e68: ff1c6a1a04dc82f8 <<< work = &priv->mcast_task.work

crash> list -hO ipoib_dev_priv.multicast_list ff1c6a1a04dc8000
(empty)

crash> ipoib_dev_priv.mcast_task.work.func,mcast_mutex.owner.counter ff1c6a1a04dc8000
  mcast_task.work.func = 0xffffffffc0944910 <ipoib_mcast_join_task>,
  mcast_mutex.owner.counter = 0xff1c69998efec000

crash> b 8
PID: 8      TASK: ff1c69998efec000  CPU: 33  COMMAND: "kworker/u72:0"

 #3 [ff646f1980153d50] wait_for_completion+0x96 at ffffffff9c7d7646
 #4 [ff646f1980153d90] ipoib_mcast_remove_list+0x56 at ffffffffc0944dc6 [ib_ipoib]
 #5 [ff646f1980153de8] ipoib_mcast_dev_flush+0x1a7 at ffffffffc09455a7 [ib_ipoib]
 #6 [ff646f1980153e58] __ipoib_ib_dev_flush+0x1a4 at ffffffffc09431a4 [ib_ipoib]
 #7 [ff ---truncated---
