Known Vulnerabilities
CVE-2021-47378
In the Linux kernel, the following vulnerability has been resolved: nvme-rdma: destroy cm id before destroy qp to avoid use after free. We should always destroy the cm_id before destroying the qp, to avoid receiving a cma event after the qp has been destroyed, which may lead to a use-after-free. In the RDMA connection establishment error flow, don't destroy the qp in the cm event handler. Just report cm_error to the upper level; the qp will be destroyed in nvme_rdma_alloc_queue() after the cm id is destroyed.
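A minimal sketch of the corrected teardown ordering, assuming a hypothetical queue structure; `example_rdma_queue` and `example_free_queue` are illustrative names, not the driver's actual code, though `rdma_destroy_id()` and `ib_destroy_qp()` are the real RDMA core APIs:

```c
#include <rdma/rdma_cm.h>
#include <rdma/ib_verbs.h>

/* Hypothetical queue structure; field names are illustrative. */
struct example_rdma_queue {
	struct rdma_cm_id *cm_id;
	struct ib_qp *qp;
};

static void example_free_queue(struct example_rdma_queue *queue)
{
	/*
	 * Destroy the cm_id first: once rdma_destroy_id() returns, no
	 * further CM event callbacks can run, so none can touch the qp
	 * after it is gone.
	 */
	rdma_destroy_id(queue->cm_id);

	/*
	 * Only now tear down the qp. Because the cm_id is already gone,
	 * the qp must be destroyed via the verbs API (ib_destroy_qp())
	 * rather than through the rdma_cm API.
	 */
	ib_destroy_qp(queue->qp);
}
```

Under this ordering, the cm event handler in the error flow only records the error and returns; qp teardown is left to the allocation path, which runs after the cm id is destroyed.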
CVE-2021-47369
In the Linux kernel, the following vulnerability has been resolved: s390/qeth: fix NULL deref in qeth_clear_working_pool_list(). When qeth_set_online() calls qeth_clear_working_pool_list() to roll back after an error exit from qeth_hardsetup_card(), we are at risk of accessing card->qdio.in_q before it was allocated by qeth_alloc_qdio_queues() via qeth_mpc_initialize(). qeth_clear_working_pool_list() then dereferences NULL, and by writing to queue->bufs[i].pool_entry it scribbles all over the CPU's lowcore, resulting in a crash when those lowcore areas are next used (e.g. on the next machine-check interrupt). Such a scenario would typically happen when the device is first set online and its queues aren't allocated yet. An early IO error or certain misconfigurations (e.g. mismatched transport mode, bad portno) then cause us to error out from qeth_hardsetup_card() with card->qdio.in_q still being NULL. Fix it by checking the pointer for NULL before accessing it. Note that we also have (rare) paths inside qeth_mpc_initialize() where a configuration change can cause us to free the existing queues, expecting that subsequent code will allocate them again. If we then error out before that re-allocation happens, the same bug occurs. Root-caused-by: Heiko Carstens <[email protected]>
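A minimal sketch of the defensive NULL check, using simplified, hypothetical stand-ins for the qeth structures (`example_card`, `example_qdio_q`, and the buffer count are illustrative):

```c
#include <linux/kernel.h>

/* Hypothetical, simplified versions of the qeth structures involved. */
struct example_qdio_q {
	struct { void *pool_entry; } bufs[128];
};

struct example_card {
	struct {
		struct example_qdio_q *in_q; /* NULL until queues are allocated */
	} qdio;
};

static void example_clear_working_pool_list(struct example_card *card)
{
	struct example_qdio_q *queue = card->qdio.in_q;
	unsigned int i;

	/*
	 * Bail out if the input queue was never allocated, e.g. after an
	 * early error exit before the queue allocation ran. Without this
	 * check, queue->bufs[i].pool_entry dereferences a NULL base
	 * pointer, which on s390 writes into the CPU's lowcore.
	 */
	if (!queue)
		return;

	for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
		queue->bufs[i].pool_entry = NULL;
}
```

The check is needed because this function runs on rollback paths where allocation may not have happened yet, so it cannot assume the queue exists.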
CVE-2021-47368
In the Linux kernel, the following vulnerability has been resolved: enetc: Fix illegal access when reading affinity_hint. irq_set_affinity_hint() stores a reference to the cpumask_t parameter in the irq descriptor, and that reference can be accessed later from irq_affinity_hint_proc_show(). Since the cpu_mask parameter passed to irq_set_affinity_hint() has only temporary storage (it lives on the stack), later accesses to it are illegal. Thus reads from the corresponding procfs affinity_hint file can result in a paging request oops. The issue is fixed by the get_cpu_mask() helper, which provides permanent storage for the cpumask_t parameter.
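A minimal sketch contrasting the buggy pattern with the fix; `example_set_hint` and the CPU selection are illustrative, while `irq_set_affinity_hint()` and `get_cpu_mask()` are the real kernel APIs:

```c
#include <linux/cpumask.h>
#include <linux/interrupt.h>

static void example_set_hint(unsigned int irq, unsigned int i)
{
	/*
	 * Buggy pattern: irq_set_affinity_hint() stores the pointer, not
	 * a copy, so a stack-allocated mask dangles once this function
	 * returns, and a later procfs read of affinity_hint faults:
	 *
	 *	cpumask_t cpu_mask;
	 *	cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
	 *	irq_set_affinity_hint(irq, &cpu_mask);
	 */

	/*
	 * Fixed pattern: get_cpu_mask() returns a pointer to a permanent,
	 * statically allocated single-CPU mask, which remains valid for
	 * as long as the irq descriptor holds the reference.
	 */
	irq_set_affinity_hint(irq, get_cpu_mask(i % num_online_cpus()));
}
```

The general rule here is that any mask passed to irq_set_affinity_hint() must outlive the hint, since the descriptor keeps only the pointer.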