Just one clarification ...
On 8/10/2018 9:25 AM, Reinette Chatre wrote:
> static inline int x86_perf_event_error_state(struct perf_event *event)
> {
> int ret = 0;
> u64 tmp;
>
> ret = perf_event_read_local(event, &tmp, NULL, NULL);
> if (ret < 0)
> return ret;
>
> return 0;
> }
Hi Peter,
On 8/8/2018 10:33 AM, Reinette Chatre wrote:
> On 8/8/2018 12:51 AM, Peter Zijlstra wrote:
>> On Tue, Aug 07, 2018 at 03:47:15PM -0700, Reinette Chatre wrote:
- I don't much fancy people accessing the guts of events like that;
would not an inline function like:
Hi Peter and Tony,
On 8/8/2018 12:51 AM, Peter Zijlstra wrote:
> On Tue, Aug 07, 2018 at 03:47:15PM -0700, Reinette Chatre wrote:
>>> FWIW, how long is that IRQ disabled section? It looks like something
>>> that could be taking a bit of time. We have these people that care about
>>> IRQ latency.
>
On 8/8/2018 9:47 AM, Peter Zijlstra wrote:
> On Wed, Aug 08, 2018 at 03:55:54PM +0000, Luck, Tony wrote:
>>> So _why_ doesn't this work? As said by Tony, that first call should
>>> prime the caches, so the second and third calls should not generate any
>>> misses.
>>
>> How much code/data is involved?
On Wed, Aug 08, 2018 at 03:55:54PM +0000, Luck, Tony wrote:
> > So _why_ doesn't this work? As said by Tony, that first call should
> > prime the caches, so the second and third calls should not generate any
> > misses.
>
> How much code/data is involved? If there is a lot, then you may be unlucky
> So _why_ doesn't this work? As said by Tony, that first call should
> prime the caches, so the second and third calls should not generate any
> misses.
How much code/data is involved? If there is a lot, then you may be unlucky
with cache coloring and the later parts of the "prime the caches" code
On Tue, Aug 07, 2018 at 03:47:15PM -0700, Reinette Chatre wrote:
> > FWIW, how long is that IRQ disabled section? It looks like something
> > that could be taking a bit of time. We have these people that care about
> > IRQ latency.
>
> We work closely with customers needing low latency as well as
On Tue, Aug 07, 2018 at 10:44:44PM -0700, Reinette Chatre wrote:
> Hi Tony,
>
> On 8/7/2018 6:28 PM, Luck, Tony wrote:
> > Would it help to call routines to read the "before" values of the counter
> > twice. The first time to preload the cache with anything needed to execute
> > the perf code path.
Hi Tony,
On 8/7/2018 6:28 PM, Luck, Tony wrote:
> Would it help to call routines to read the "before" values of the counter
> twice. The first time to preload the cache with anything needed to execute
> the perf code path.
>>> In an attempt to improve the accuracy of the above I modified it to the
>>> following:
Would it help to call routines to read the "before" values of the counter
twice. The first time to preload the cache with anything needed to execute
the perf code path.
>> In an attempt to improve the accuracy of the above I modified it to the
>> following:
>>
>> /* create the two events as before */
On Mon, Aug 06, 2018 at 04:07:09PM -0700, Reinette Chatre wrote:
> I've modified your suggestion slightly in an attempt to gain accuracy.
> Now it looks like:
>
> local_irq_disable();
> /* disable hw prefetchers */
> /* init local vars to loop through pseudo-locked mem */
> perf_event_read_local(
Hi Peter,
On 8/6/2018 3:12 PM, Peter Zijlstra wrote:
> On Mon, Aug 06, 2018 at 12:50:50PM -0700, Reinette Chatre wrote:
>> In my previous email I provided the details of the Cache Pseudo-Locking
>> feature implemented on top of resctrl. Please let me know if you would
>> like any more details about that. I can send you more materials.
On Mon, Aug 06, 2018 at 12:50:50PM -0700, Reinette Chatre wrote:
> In my previous email I provided the details of the Cache Pseudo-Locking
> feature implemented on top of resctrl. Please let me know if you would
> like any more details about that. I can send you more materials.
I've not yet had time
Hi Peter,
On 8/3/2018 11:37 AM, Reinette Chatre wrote:
> On 8/3/2018 8:25 AM, Peter Zijlstra wrote:
>> On Fri, Aug 03, 2018 at 08:18:09AM -0700, Reinette Chatre wrote:
>>> You state that you understand what we are trying to do and I hope that I
>>> convinced you that we are not able to accomplish the same by following
>>> your guidance.
Hi Peter,
On 8/3/2018 8:25 AM, Peter Zijlstra wrote:
> On Fri, Aug 03, 2018 at 08:18:09AM -0700, Reinette Chatre wrote:
>> You state that you understand what we are trying to do and I hope that I
>> convinced you that we are not able to accomplish the same by following
>> your guidance.
>
> No, I said I understood your pmc reserve patch and its implications.
On Fri, Aug 03, 2018 at 08:18:09AM -0700, Reinette Chatre wrote:
> You state that you understand what we are trying to do and I hope that I
> convinced you that we are not able to accomplish the same by following
> your guidance.
No, I said I understood your pmc reserve patch and its implications.
Hi Peter,
On 8/3/2018 3:49 AM, Peter Zijlstra wrote:
> On Thu, Aug 02, 2018 at 01:43:42PM -0700, Reinette Chatre wrote:
>
>> The goal of this work is to use the existing PMU hardware coordination
>> mechanism to ensure that perf and resctrl will not interfere with each
>> other.
>
> I understand what it does..
On Thu, Aug 02, 2018 at 01:43:42PM -0700, Reinette Chatre wrote:
> The goal of this work is to use the existing PMU hardware coordination
> mechanism to ensure that perf and resctrl will not interfere with each
> other.
I understand what it does.. but if I'd seen you frobbing at the PMU
earlier you
Hi Peter,
On 8/2/2018 1:13 PM, Peter Zijlstra wrote:
> On Thu, Aug 02, 2018 at 01:06:19PM -0700, Dave Hansen wrote:
>> On 08/02/2018 12:54 PM, Peter Zijlstra wrote:
I totally understand not wanting to fill the tree with code hijacking
the raw PMU. Is your reaction to this really around
On Thu, Aug 02, 2018 at 01:06:19PM -0700, Dave Hansen wrote:
> On 08/02/2018 12:54 PM, Peter Zijlstra wrote:
> >> I totally understand not wanting to fill the tree with code hijacking
> >> the raw PMU. Is your reaction to this really around not wanting to
> >> start down the slippery slope that ends up with lots of raw PMU "owners"?
On 08/02/2018 12:54 PM, Peter Zijlstra wrote:
>> I totally understand not wanting to fill the tree with code hijacking
>> the raw PMU. Is your reaction to this really around not wanting to
>> start down the slippery slope that ends up with lots of raw PMU "owners"?
> That and the fact that multipl
On Thu, Aug 02, 2018 at 11:18:01AM -0700, Dave Hansen wrote:
> On 08/02/2018 10:37 AM, Peter Zijlstra wrote:
> >> I do not see how I can do so without incurring the cache hits and misses
> >> from the data needed and instructions run by this interface. Could you
> >> please share how I can do so and still obtain the accurate measurement
On 08/02/2018 10:37 AM, Peter Zijlstra wrote:
>> I do not see how I can do so without incurring the cache hits and misses
>> from the data needed and instructions run by this interface. Could you
>> please share how I can do so and still obtain the accurate measurement
>> of cache residency of a specific
On Thu, Aug 02, 2018 at 09:44:13AM -0700, Reinette Chatre wrote:
> On 8/2/2018 9:18 AM, Peter Zijlstra wrote:
> > On Thu, Aug 02, 2018 at 09:14:10AM -0700, Reinette Chatre wrote:
> >
> >> The current implementation does not coordinate with perf and this is
> >> what I am trying to fix in this series.
On 8/2/2018 9:18 AM, Peter Zijlstra wrote:
> On Thu, Aug 02, 2018 at 09:14:10AM -0700, Reinette Chatre wrote:
>
>> The current implementation does not coordinate with perf and this is
>> what I am trying to fix in this series.
>>
>> I do respect your NAK but it is not clear to me how to proceed after
>> obtaining it.
On Thu, Aug 02, 2018 at 09:14:10AM -0700, Reinette Chatre wrote:
> The current implementation does not coordinate with perf and this is
> what I am trying to fix in this series.
>
> I do respect your NAK but it is not clear to me how to proceed after
> obtaining it. Could you please elaborate on
Hi Peter,
On 8/2/2018 5:39 AM, Peter Zijlstra wrote:
> On Tue, Jul 31, 2018 at 12:38:27PM -0700, Reinette Chatre wrote:
>> Dear Maintainers,
>>
>> The success of Cache Pseudo-Locking can be measured via the use of
>> performance events. Specifically, the number of cache hits and misses
>> reading a memory region after it has been pseudo-locked to cache.
On Tue, Jul 31, 2018 at 12:38:27PM -0700, Reinette Chatre wrote:
> Dear Maintainers,
>
> The success of Cache Pseudo-Locking can be measured via the use of
> performance events. Specifically, the number of cache hits and misses
> reading a memory region after it has been pseudo-locked to cache. This
> measurement is triggered via the resctrl debugfs interface.
Dear Maintainers,
The success of Cache Pseudo-Locking can be measured via the use of
performance events. Specifically, the number of cache hits and misses
reading a memory region after it has been pseudo-locked to cache. This
measurement is triggered via the resctrl debugfs interface.
To ensure m