The probability is the direct output of the EPSS model, and conveys an overall sense of the threat of exploitation in the wild. The percentile measures the EPSS probability relative to all known EPSS scores. Note: This data is updated daily, relying on the latest available EPSS model version. Check out the EPSS documentation for more details.
Upgrade SLES:15.6 dlm-kmp-default to version 6.4.0-150600.23.73.1 or higher.
Note: Versions mentioned in the description apply only to the upstream dlm-kmp-default package and not the dlm-kmp-default package as distributed by SLES. See How to fix? for SLES:15.6 relevant fixed versions and status.
In the Linux kernel, the following vulnerability has been resolved:
RDMA/siw: Fix the sendmsg byte count in siw_tcp_sendpages
Ever since commit c2ff29e99a76 ("siw: Inline do_tcp_sendpages()"), we have been doing this:
static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
                             size_t size)
[...]
        /* Calculate the number of bytes we need to push, for this page
         * specifically */
        size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
        /* If we can't splice it, then copy it in, as normal */
        if (!sendpage_ok(page[i]))
                msg.msg_flags &= ~MSG_SPLICE_PAGES;
        /* Set the bvec pointing to the page, with len $bytes */
        bvec_set_page(&bvec, page[i], bytes, offset);
        /* Set the iter to $size, aka the size of the whole sendpages (!!!) */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
try_page_again:
        lock_sock(sk);
        /* Sendmsg with $size size (!!!) */
        rv = tcp_sendmsg_locked(sk, &msg, size);
This means we've been sending oversized iov_iters and tcp_sendmsg calls for a while. This has been a benign bug because sendpage_ok() always returned true. With the recent slab allocator changes being slowly introduced into next (which disallow sendpage on large kmalloc allocations), we have recently hit out-of-bounds crashes, due to slight differences in iov_iter behavior between the MSG_SPLICE_PAGES and "regular" copy paths:
(MSG_SPLICE_PAGES)
  skb_splice_from_iter
    iov_iter_extract_pages
      iov_iter_extract_bvec_pages
        uses i->nr_segs to correctly stop in its tracks before OoB'ing everywhere
  skb_splice_from_iter gets a "short" read

(!MSG_SPLICE_PAGES)
  skb_copy_to_page_nocache copy=iov_iter_count
  [...]
    copy_from_iter
      /* this doesn't help */
      if (unlikely(iter->count < len))
              len = iter->count;
      iterate_bvec
        ... and we run off the bvecs
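To make the mismatch concrete, here is a hypothetical, userspace-only sketch (not the kernel's iov_iter code; the seg and iter types are invented for illustration). The single segment really covers only the per-page byte count, but the iterator's total count claims the whole request, so a consumer that trusts the count alone would walk past the segment, while one that also stops at the segment boundary merely does a short copy.

#include <stdio.h>
#include <stddef.h>

/* Invented stand-ins for a bvec segment and an iov_iter-like iterator. */
struct seg  { size_t len; };
struct iter { const struct seg *segs; size_t nr_segs; size_t count; };

int main(void)
{
        struct seg  one_page = { .len = 4096 };     /* what the bvec actually covers    */
        struct iter it = { &one_page, 1, 16384 };   /* count over-stated, like $size    */

        /* Copy-path style: cap only by the iterator's claimed count. */
        size_t len = 16384;
        if (it.count < len)
                len = it.count;                     /* still 16384: "this doesn't help" */
        if (len > one_page.len)
                printf("copy path would run %zu bytes past the only segment\n",
                       len - one_page.len);

        /* Splice-path style: additionally stop at the end of the segment list. */
        size_t safe = len < one_page.len ? len : one_page.len;
        printf("splice path stops after %zu bytes (a \"short\" read)\n", safe);
        return 0;
}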
Fix this by properly setting the iov_iter's byte count, plus sending the correct byte count to tcp_sendmsg_locked.
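For reference, this is roughly what the corrected calls in the fragment above would look like once the per-page byte count is used in both places; it is a sketch based on the description, not the verbatim upstream patch.

        /* Set the bvec pointing to the page, with len $bytes */
        bvec_set_page(&bvec, page[i], bytes, offset);
        /* The iter now also covers only this page's $bytes */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);
try_page_again:
        lock_sock(sk);
        /* ... and tcp_sendmsg_locked() is asked for the same $bytes */
        rv = tcp_sendmsg_locked(sk, &msg, bytes);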