On Friday, August 17, 2018 at 2:57:31 AM UTC+2, Marcus Linsner wrote:
> "For example, consider a case where you have zero swap and system is nearly 
> running out of RAM. The kernel will take memory from e.g. Firefox (it can do 
> this because Firefox is running executable code that has been loaded from 
> disk - the code can be loaded from disk again if needed). If Firefox then 
> needs to access that RAM again N seconds later, the CPU generates "hard 
> fault" which forces Linux to free some RAM (e.g. take some RAM from another 
> process), load the missing data from disk and then allow Firefox to continue 
> as usual. This is pretty similar to normal swapping and kswapd0 does it.  " - 
> Mikko Rantalainen Feb 15 at 13:08

Good news: no more disk thrashing with this patch [1] (also attached). I'm keeping
track of how to properly get rid of the disk thrashing in [2].

Bad news: I made the patch myself and have no idea how sound it is (since I am a
noob :D) or what the side effects of using it are. A better patch can likely be
made! (But no one who knows how to do it right has answered/helped yet :D, so for
me it's better than nothing.)

I'm not going to post here anymore, so the OP can get an answer (since this seems
to be a different issue).

[1] 
https://github.com/constantoverride/qubes-linux-kernel/blob/acd686a5019c7ab6ec10dc457bdee4830e2d741f/patches.addon/le9b.patch
[2] https://stackoverflow.com/q/52067753/10239615

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/7b9ba803-0e87-4525-9d8e-2f256ffc5122%40googlegroups.com.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 32699b2..7636498 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -208,7 +208,7 @@ enum lru_list {
 
 #define for_each_lru(lru) for (lru = 0; lru < NR_LRU_LISTS; lru++)
 
-#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_ACTIVE_FILE; lru++)
+#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_INACTIVE_FILE; lru++)
 
 static inline int is_file_lru(enum lru_list lru)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03822f8..1f3ffb5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2234,7 +2234,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 
 	anon  = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) +
 		lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, MAX_NR_ZONES);
-	file  = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
+	file  = //lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
 		lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES);
 
 	spin_lock_irq(&pgdat->lru_lock);
@@ -2345,7 +2345,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 			 sc->priority == DEF_PRIORITY);
 
 	blk_start_plug(&plug);
-	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
+	while (nr[LRU_INACTIVE_ANON] || //nr[LRU_ACTIVE_FILE] ||
 					nr[LRU_INACTIVE_FILE]) {
 		unsigned long nr_anon, nr_file, percentage;
 		unsigned long nr_scanned;
@@ -2372,7 +2372,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 		 * stop reclaiming one LRU and reduce the amount scanning
 		 * proportional to the original scan target.
 		 */
-		nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
+		nr_file = nr[LRU_INACTIVE_FILE] //+ nr[LRU_ACTIVE_FILE]
+			;
 		nr_anon = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON];
 
 		/*
@@ -2391,7 +2392,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 			percentage = nr_anon * 100 / scan_target;
 		} else {
 			unsigned long scan_target = targets[LRU_INACTIVE_FILE] +
-						targets[LRU_ACTIVE_FILE] + 1;
+						//targets[LRU_ACTIVE_FILE] + 
+						1;
 			lru = LRU_FILE;
 			percentage = nr_file * 100 / scan_target;
 		}
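The last hunk changes the proportional-scan bookkeeping: with `targets[LRU_ACTIVE_FILE]` dropped from the sum, the file scan percentage is computed from the inactive-file target alone. A toy model of that else-branch (my simplification, not kernel code):

```c
/* Toy model of the patched else-branch in shrink_node_memcg():
 * the file scan target now counts only the inactive file list. */
unsigned long file_percentage(unsigned long nr_inactive_file,
			      unsigned long target_inactive_file)
{
	/* "+ 1" guards against division by zero, as in the original. */
	unsigned long scan_target = target_inactive_file + 1;

	return nr_inactive_file * 100 / scan_target;
}
```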
