kernel/kernel-rt/centos/patches/Affine-irqs-and-workqueues-with-kthread_cpus.patch
Latest commit: c0fee2da8e by Dongqi Chen, 2020-04-30 11:25:20 +08:00
[kernel-rt 4.18] Upgrade kernel-rt to version 4.18.0-147.3.1 based on SRPM

Story: 2007308
Task: 38795

Depends-On: https://review.opendev.org/#/c/722017/
Depends-On: https://review.opendev.org/#/c/722019/

Change-Id: I942b2aa6167120157a38af83135ed358b17ee78b
Signed-off-by: Dongqi Chen <chen.dq@neusoft.com>


From 3ceddd371d3d53ef0bbe81bad548f2d6889ec3d2 Mon Sep 17 00:00:00 2001
From: Chris Friesen <chris.friesen@windriver.com>
Date: Tue, 24 Nov 2015 16:27:29 -0500
Subject: [PATCH 04/10] Affine irqs and workqueues with kthread_cpus

If the kthread_cpus boot arg is set it means we want to affine
kernel threads to the specified CPU mask as much as possible
in order to avoid doing work on other CPUs.

In this commit we extend the meaning of that boot arg to also
apply to the CPU affinity of unbound and ordered workqueues.

We also use the kthread_cpus value to determine the default irq
affinity. Specifically, as long as the previously-calculated
irq affinity intersects with the kthread_cpus affinity then we'll
use the intersection of the two as the default irq affinity.

Signed-off-by: Chris Friesen <chris.friesen@windriver.com>
[VT: replacing spaces with tabs. Performed tests]
Signed-off-by: Vu Tran <vu.tran@windriver.com>
Signed-off-by: Jim Somerville <Jim.Somerville@windriver.com>
Signed-off-by: Zhang Zhiguo <zhangzhg@neusoft.com>
---
 kernel/irq/manage.c | 7 +++++++
 kernel/workqueue.c  | 4 ++++
 2 files changed, 11 insertions(+)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index c4e31f4..002dec3 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -400,6 +400,13 @@ int irq_setup_affinity(struct irq_desc *desc)
 		if (cpumask_intersects(&mask, nodemask))
 			cpumask_and(&mask, &mask, nodemask);
 	}
+
+	/* This will narrow down the affinity further if we've specified
+	 * a reduced cpu_kthread_mask in the boot args.
+	 */
+	if (cpumask_intersects(&mask, cpu_kthread_mask))
+		cpumask_and(&mask, &mask, cpu_kthread_mask);
+
 	ret = irq_do_set_affinity(&desc->irq_data, &mask, false);
 	raw_spin_unlock(&mask_lock);
 	return ret;
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4228207..c7fa0ec 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5730,6 +5730,8 @@ int __init workqueue_init_early(void)
 
 		BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));
 		attrs->nice = std_nice[i];
+		/* If we've specified a kthread mask apply it here too. */
+		cpumask_copy(attrs->cpumask, cpu_kthread_mask);
 		unbound_std_wq_attrs[i] = attrs;
 
 		/*
@@ -5740,6 +5742,8 @@ int __init workqueue_init_early(void)
 		BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));
 		attrs->nice = std_nice[i];
 		attrs->no_numa = true;
+		/* If we've specified a kthread mask apply it here too. */
+		cpumask_copy(attrs->cpumask, cpu_kthread_mask);
 		ordered_wq_attrs[i] = attrs;
 	}
 
--
2.7.4
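
Note on cpu_kthread_mask: the hunks above use a cpumask named cpu_kthread_mask
that this patch does not define; it is presumably introduced elsewhere in the
series from the kthread_cpus boot argument described in the commit message.
Below is a minimal, hypothetical sketch of how such a boot argument could be
parsed into a kernel cpumask. The names kthread_cpumask and
kthread_cpus_setup() are assumptions for illustration, not code from this
patch.

/*
 * Sketch only -- not part of the patch above.  One plausible way to turn a
 * "kthread_cpus=" boot argument into the cpu_kthread_mask consumed by the
 * hunks above.  Names here are hypothetical.
 */
#include <linux/cache.h>
#include <linux/cpumask.h>
#include <linux/init.h>

/* Default to all CPUs so behaviour is unchanged when the arg is absent. */
static struct cpumask kthread_cpumask __read_mostly = { CPU_BITS_ALL };

/* Assumed definition of the symbol referenced by the hunks above. */
#define cpu_kthread_mask (&kthread_cpumask)

static int __init kthread_cpus_setup(char *str)
{
	/* Accept a CPU list such as "0-1,16-17". */
	if (cpulist_parse(str, &kthread_cpumask) < 0 ||
	    cpumask_empty(&kthread_cpumask))
		cpumask_setall(&kthread_cpumask);	/* fall back to all CPUs */
	return 1;
}
__setup("kthread_cpus=", kthread_cpus_setup);

With something along these lines in place, booting with e.g. kthread_cpus=0-1
would restrict the default IRQ affinity and the unbound/ordered workqueue
cpumasks to CPUs 0-1 whenever that range intersects the affinity the kernel
would otherwise have chosen.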