author     Kazuki H <kazukih0205@gmail.com>            2023-03-21 06:51:03 +0900
committer  Christian Marangi <ansuelsmth@gmail.com>    2023-03-27 14:16:10 +0200
commit     0d0928f58795e336646ad31ea96d2919b5328f39
tree       eb321324f622f740f72233d019ef01873a4f97cf
parent     dc79b51533cfe9a7806353f6c6fd6b22cd80d536
kernel: Update MGLRU patchset
The current patches are old; update them from mainline.

Backports taken from https://github.com/yuzhaogoogle/linux/commits/mglru-5.15

Tested-by: Kazuki H <kazukih0205@gmail.com> #mt7622/Linksys E8450 UBI
Signed-off-by: Kazuki H <kazukih0205@gmail.com>
Diffstat (limited to 'target/linux/generic/backport-5.15/020-v6.1-13-mm-mglru-don-t-sync-disk-for-each-aging-cycle.patch')
-rw-r--r--  target/linux/generic/backport-5.15/020-v6.1-13-mm-mglru-don-t-sync-disk-for-each-aging-cycle.patch | 37
1 file changed, 37 insertions(+), 0 deletions(-)
diff --git a/target/linux/generic/backport-5.15/020-v6.1-13-mm-mglru-don-t-sync-disk-for-each-aging-cycle.patch b/target/linux/generic/backport-5.15/020-v6.1-13-mm-mglru-don-t-sync-disk-for-each-aging-cycle.patch
new file mode 100644
index 0000000000..a2318499e7
--- /dev/null
+++ b/target/linux/generic/backport-5.15/020-v6.1-13-mm-mglru-don-t-sync-disk-for-each-aging-cycle.patch
@@ -0,0 +1,37 @@
+From 92d430e8955c976eacb7cc91d7ff849c0dd009af Mon Sep 17 00:00:00 2001
+From: Yu Zhao <yuzhao@google.com>
+Date: Wed, 28 Sep 2022 13:36:58 -0600
+Subject: [PATCH 13/29] mm/mglru: don't sync disk for each aging cycle
+
+wakeup_flusher_threads() was added under the assumption that if a system
+runs out of clean cold pages, it might want to write back dirty pages more
+aggressively so that they can become clean and be dropped.
+
+However, doing so can breach the rate limit a system wants to impose on
+writeback, resulting in early SSD wearout.
+
+Link: https://lkml.kernel.org/r/YzSiWq9UEER5LKup@google.com
+Fixes: bd74fdaea146 ("mm: multi-gen LRU: support page table walks")
+Signed-off-by: Yu Zhao <yuzhao@google.com>
+Reported-by: Axel Rasmussen <axelrasmussen@google.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+---
+ mm/vmscan.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index b74b334488d8..1c0875e6514a 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4165,8 +4165,6 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
+ if (wq_has_sleeper(&lruvec->mm_state.wait))
+ wake_up_all(&lruvec->mm_state.wait);
+
+- wakeup_flusher_threads(WB_REASON_VMSCAN);
+-
+ return true;
+ }
+
+--
+2.40.0
+