On Fri, Oct 30, 2015 at 07:26:35PM -0400, Waiman Long wrote:
> This patch enables the accumulation of kicking and waiting related
> PV qspinlock statistics when the new QUEUED_LOCK_STAT configuration
> option is selected. It also enables the collection of data which
> enable us to calculate the kicking and wakeup latencies which have
> a heavy dependency on the CPUs being used.
>
> The statistical counters are per-cpu variables to minimize the
> performance overhead in their updates. These counters are exported
> via the sysfs filesystem under the /sys/kernel/qlockstat directory.
> When the corresponding sysfs files are read, summation and computing
> of the required data are then performed.
Why did you switch to sysfs? You can create custom debugfs files too.

> @@ -259,7 +275,7 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
>  	if (READ_ONCE(pn->state) == vcpu_hashed)
>  		lp = (struct qspinlock **)1;
>
> -	for (;;) {
> +	for (;; waitcnt++) {
>  		for (loop = SPIN_THRESHOLD; loop; loop--) {
>  			if (!READ_ONCE(l->locked))
>  				return;

Did you check that goes away when !STAT ?

> +/*
> + * Return the average kick latency (ns) = pv_latency_kick/pv_kick_unlock
> + */
> +static ssize_t
> +kick_latency_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> +{
> +	int cpu;
> +	u64 latencies = 0, kicks = 0;
> +
> +	for_each_online_cpu(cpu) {

I think you need for_each_possible_cpu(), otherwise the results will
change with hotplug operations.

> +		kicks += per_cpu(qstats[qstat_pv_kick_unlock], cpu);
> +		latencies += per_cpu(qstats[qstat_pv_latency_kick], cpu);
> +	}
> +
> +	/* Rounded to the nearest ns */
> +	return sprintf(buf, "%llu\n", kicks ? (latencies + kicks/2)/kicks : 0);
> +}