In the Linux kernel, the following vulnerability has been resolved: mm: avoid overflows in dirty throttling logic

The dirty throttling logic is interspersed with assumptions that dirty limits in PAGE_SIZE units fit into 32 bits (so that various multiplications fit into 64 bits). If the limits end up larger, we hit overflows, possible divisions by 0, etc. Fix these problems by never allowing such large dirty limits, as they have dubious practical value anyway.

For the dirty_bytes / dirty_background_bytes interfaces we can simply refuse to set such large limits. For dirty_ratio / dirty_background_ratio it isn't so simple, because the dirty limit is computed from the amount of available memory, which can change due to memory hotplug etc. So when converting dirty limits from ratios to numbers of pages, we just don't allow the result to exceed UINT_MAX.

This is a root-only triggerable problem which occurs when the operator sets dirty limits to more than 16 TB.
https://git.kernel.org/stable/c/c83ed422c24f0d4b264f89291d4fabe285f80dbc
https://git.kernel.org/stable/c/bd16a7ee339aef3ee4c90cb23902afb6af379ea0
https://git.kernel.org/stable/c/a25e8536184516b55ef89ab91dd2eea429de28d2
https://git.kernel.org/stable/c/8e0b5e7f2895eccef5c2a0018b589266f90c4805
https://git.kernel.org/stable/c/7a49389771ae7666f4dc3426e2a4594bf23ae290
https://git.kernel.org/stable/c/4d3817b64eda07491bdd86a234629fe0764fb42a
https://git.kernel.org/stable/c/385d838df280eba6c8680f9777bfa0d0bfe7e8b2
https://git.kernel.org/stable/c/2b2d2b8766db028bd827af34075f221ae9e9efff