that or
deal with wrap at PMU read time.
So thanks for dealing with it, some small comments below but overall it
is fine.
Signed-off-by: Thomas Gleixner
Cc: Tvrtko Ursulin
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: David Airlie
Cc: Daniel Vetter
Cc: intel
On 10/12/2020 17:44, Thomas Gleixner wrote:
On Thu, Dec 10 2020 at 17:09, Tvrtko Ursulin wrote:
On 10/12/2020 16:35, Thomas Gleixner wrote:
I'll send out a series addressing irq_to_desc() (ab)use all over the
place shortly. i915 is in there...
Yep we don't need atomic, my bad. And we would
On 10/12/2020 16:35, Thomas Gleixner wrote:
On Thu, Dec 10 2020 at 10:45, Tvrtko Ursulin wrote:
On 10/12/2020 07:53, Joonas Lahtinen wrote:
I think later in the thread there was a suggestion to replace this with
simple counter increment in IRQ handler.
It was indeed unsafe until recent
On 10/12/2020 07:53, Joonas Lahtinen wrote:
+ Tvrtko and Chris for comments
Code seems to be added in:
commit 0cd4684d6ea9a4ffec33fc19de4dd667bb90d0a5
Author: Tvrtko Ursulin
Date: Tue Nov 21 18:18:50 2017 +
drm/i915/pmu: Add interrupt count metric
I think later in the thread
On 03/11/2020 02:53, Lu Baolu wrote:
On 11/2/20 7:52 PM, Tvrtko Ursulin wrote:
On 02/11/2020 02:00, Lu Baolu wrote:
Hi Tvrtko,
On 10/12/20 4:44 PM, Tvrtko Ursulin wrote:
On 29/09/2020 01:11, Lu Baolu wrote:
Hi Tvrtko,
On 9/28/20 5:44 PM, Tvrtko Ursulin wrote:
On 27/09/2020 07:34, Lu
On 02/11/2020 02:00, Lu Baolu wrote:
Hi Tvrtko,
On 10/12/20 4:44 PM, Tvrtko Ursulin wrote:
On 29/09/2020 01:11, Lu Baolu wrote:
Hi Tvrtko,
On 9/28/20 5:44 PM, Tvrtko Ursulin wrote:
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https
On 29/09/2020 01:11, Lu Baolu wrote:
Hi Tvrtko,
On 9/28/20 5:44 PM, Tvrtko Ursulin wrote:
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https://lore.kernel.org/linux-iommu/20200912032200.11489-1-baolu...@linux.intel.com/
This version
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https://lore.kernel.org/linux-iommu/20200912032200.11489-1-baolu...@linux.intel.com/
This version introduce a new patch [4/7] to fix an issue reported here.
!static_cpu_has(X86_FEATURE_PAT)))
+ ptr = NULL;
+ else if (i915_gem_object_has_struct_page(obj))
+ ptr = i915_gem_object_map_page(obj, type);
+ else
+ ptr = i915_gem_object_map_pfn(obj, type);
if (!ptr) {
err = -ENOMEM;
goto err_unpin;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
return NULL;
}
void shmem_unpin_map(struct file *file, void *ptr)
{
mapping_clear_unevictable(file->f_mapping);
- __shmem_unpin_map(file, ptr, shmem_npte(file));
+ vfree(ptr);
}
static int __shmem_rw(struct file *file, loff_t off,
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
.
*/
if (!PageHighMem(page))
- return kmap(page);
+ return page_address(page);
}
mem = stack;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
On 23/09/2020 15:44, Christoph Hellwig wrote:
On Wed, Sep 23, 2020 at 02:58:43PM +0100, Tvrtko Ursulin wrote:
Series did not get a CI run from our side because of a different base so I
don't know if you would like to have a run there? If so you would need to
rebase against git
On 23/09/2020 14:41, Christoph Hellwig wrote:
On Wed, Sep 23, 2020 at 10:52:33AM +0100, Tvrtko Ursulin wrote:
On 18/09/2020 17:37, Christoph Hellwig wrote:
i915_gem_object_map implements fairly low-level vmap functionality in
a driver. Split it into two helpers, one for remapping kernel
On 18/09/2020 17:37, Christoph Hellwig wrote:
i915_gem_object_map implements fairly low-level vmap functionality in
a driver. Split it into two helpers, one for remapping kernel memory
which can use vmap, and one for I/O memory that uses vmap_pfn.
The only practical difference is that
On 22/09/2020 07:22, Christoph Hellwig wrote:
On Mon, Sep 21, 2020 at 08:11:57PM +0100, Matthew Wilcox wrote:
This is awkward. I'd like it if we had a vfree() variant which called
put_page() instead of __free_pages(). I'd like it even more if we
used release_pages() instead of our own loop
On 08/09/2020 23:43, Tom Murphy wrote:
On Tue, 8 Sep 2020 at 16:56, Tvrtko Ursulin
wrote:
On 08/09/2020 16:44, Logan Gunthorpe wrote:
On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
b/drivers/gpu/drm/i915/i915
index b7b59328cb76
Hi,
On 27/08/2020 22:36, Logan Gunthorpe wrote:
On 2020-08-23 6:04 p.m., Tom Murphy wrote:
I have added a check for the sg_dma_len == 0 :
"""
} __sgt_iter(struct scatterlist *sgl, bool dma) {
struct sgt_iter s = { .sgp = sgl };
+ if (sgl && sg_dma_len(sgl) == 0)
+
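The zero-length guard being discussed in the hunk above can be pictured with a small userspace analog (the struct and helper names here are invented for illustration; the real iterator lives in drivers/gpu/drm/i915/i915_scatterlist.h):

```c
#include <stddef.h>

/*
 * Userspace analog of the __sgt_iter fix under discussion: when walking
 * the DMA side of a scatterlist, entries past the number actually mapped
 * by the IOMMU have a DMA length of 0 and must terminate the walk.
 * struct entry and count_mapped() are made-up names for this sketch.
 */
struct entry {
	unsigned int dma_len;
	unsigned long long dma_addr;
};

/* Count entries up to (not including) the first zero-length terminator. */
static size_t count_mapped(const struct entry *sgl, size_t max)
{
	size_t n = 0;

	while (n < max && sgl[n].dma_len != 0)	/* the added == 0 check */
		n++;
	return n;
}
```

Without the dma_len check the walk would run past the mapped entries into stale ones, which is the bug the quoted hunk guards against.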
On 11/06/2020 12:29, Daniel Vetter wrote:
> On Thu, Jun 11, 2020 at 12:36 PM Tvrtko Ursulin
> wrote:
>> On 10/06/2020 16:17, Daniel Vetter wrote:
>>> On Wed, Jun 10, 2020 at 4:22 PM Tvrtko Ursulin
>>> wrote:
>>>>
>>>>
>>>>
On 10/06/2020 16:17, Daniel Vetter wrote:
> On Wed, Jun 10, 2020 at 4:22 PM Tvrtko Ursulin
> wrote:
>>
>>
>> On 04/06/2020 09:12, Daniel Vetter wrote:
>>> Design is similar to the lockdep annotations for workers, but with
>>> some twists:
>>>
On 04/06/2020 09:12, Daniel Vetter wrote:
Design is similar to the lockdep annotations for workers, but with
some twists:
- We use a read-lock for the execution/worker/completion side, so that
this explicit annotation can be more liberally sprinkled around.
With read locks lockdep isn't
On 28/04/2020 14:19, Marek Szyprowski wrote:
The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
numer of the created entries in the DMA address space. However the
subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be
called with the original number of
On 21/12/2018 13:26, Vincent Guittot wrote:
On Fri, 21 Dec 2018 at 12:33, Tvrtko Ursulin
wrote:
Hi,
On 21/12/2018 10:33, Vincent Guittot wrote:
Use the new pm runtime interface to get the accounted suspended time:
pm_runtime_suspended_time().
This new interface helps to simplify
Hi,
On 21/12/2018 10:33, Vincent Guittot wrote:
Use the new pm runtime interface to get the accounted suspended time:
pm_runtime_suspended_time().
This new interface helps to simplify and cleanup the code that computes
__I915_SAMPLE_RC6_ESTIMATED and to remove direct access to internals of
PM
On 28/09/2018 15:02, Thomas Gleixner wrote:
Tvrtko,
On Fri, 28 Sep 2018, Tvrtko Ursulin wrote:
On 28/09/2018 11:26, Thomas Gleixner wrote:
On Wed, 19 Sep 2018, Tvrtko Ursulin wrote:
It would be very helpful if you cc all involved people on the cover letter
instead of just cc'ing your own
Hi,
On 28/09/2018 11:26, Thomas Gleixner wrote:
Tvrtko,
On Wed, 19 Sep 2018, Tvrtko Ursulin wrote:
It would be very helpful if you cc all involved people on the cover letter
instead of just cc'ing your own pile of email addresses. CC'ed now.
I accept it was my bad to miss adding Cc's
On 27/09/2018 21:15, Andi Kleen wrote:
+ mutex_lock(_lock);
+ list_for_each_entry(pmu, , entry)
+ pmu->perf_event_paranoid = sysctl_perf_event_paranoid;
+ mutex_unlock(_lock);
What happens to pmus that got added later?
There is a hunk a bit lower in the
From: Tvrtko Ursulin
There is an inconsistency between data types used for order and number of
sg elements in the API.
Fix it so both are always unsigned int which, in the case of number of
elements, matches the underlying struct scatterlist.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
From: Tvrtko Ursulin
It is necessary to ensure types on both sides of the comparison are of the
same width. Otherwise the check overflows sooner than expected due to the left hand
side being an unsigned long length, and the right hand side unsigned int
number of elements multiplied by element size
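The width mismatch described above is easy to reproduce in isolation. This standalone sketch (helper names and values are made up; this is not the actual sgl_alloc_order() code) shows the 32-bit product wrapping while the widened one does not:

```c
#include <stdint.h>

/* Emulates the flawed check: a 32-bit nent * elem_len wraps modulo 2^32. */
static uint64_t total_narrow(uint32_t nent, uint32_t elem_len)
{
	return nent * elem_len;			/* 32-bit multiply, wraps */
}

/* The fix: widen one operand so the product is computed in 64 bits. */
static uint64_t total_wide(uint32_t nent, uint32_t elem_len)
{
	return (uint64_t)nent * elem_len;	/* full 64-bit product */
}
```

For example, 5 elements of 1 GiB each total 5 GiB, but the narrow product wraps to 1 GiB, so a comparison against an unsigned long length misfires long before the real limit.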
From: Tvrtko Ursulin
If a higher-order allocation fails, the existing abort and cleanup path
would consider all segments allocated so far as 0-order page allocations
and would therefore leak memory.
Fix this by cleaning up using sgl_free_n_order which allows the correct
page order to be passed
From: Tvrtko Ursulin
None of the callers need unsigned long long (they either pass in an int,
u32, or size_t) so it is not required to burden the 32-bit builds with an
overspecified length parameter.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
From: Tvrtko Ursulin
sg_init_table will clear the allocated block so requesting zeroed memory
from the allocator is redundant.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
---
lib/scatterlist.c | 3 +--
1 file changed, 1
From: Tvrtko Ursulin
Mostly same fixes and cleanups I've sent earlier in the year, but with some
patches dropped and some split into smaller ones as per request.
Tvrtko Ursulin (6):
lib/scatterlist: Use natural long for sgl_alloc(_order) length
parameter
lib/scatterlist: Use consistent
From: Tvrtko Ursulin
We should not use an explicit width u32 for elem_len but unsigned int to
match the underlying type in struct scatterlist.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
---
lib/scatterlist.c | 5 ++---
1
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch we need to start
passing in the PMU object pointer to perf_paranoid_* helpers.
This patch only changes the API across the code base without changing the
behaviour.
v2:
* Correct errors in core-book3s.c as reported
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing perf_event_paranoid
sysctl, which now becomes
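The proposal above can be pictured next to the existing global knob. Note the per-PMU path below is taken from the patch series under discussion and is hypothetical; it may not exist on any shipped kernel, so this is a usage sketch only:

```shell
# Existing global control (lower values are more permissive, 2 is most
# restrictive); this sysctl does exist in mainline:
cat /proc/sys/kernel/perf_event_paranoid

# Proposed per-PMU override from the series, e.g. relaxing only one PMU
# (hypothetical path, not in mainline):
echo 0 > /sys/bus/event_source/devices/i915/perf_event_paranoid
```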
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch first move all call
sites of perf_paranoid_kernel() to after the event has been created.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
C
From: Tvrtko Ursulin
Explain behaviour of the new control knob along side the existing perf
event documentation.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Arnaldo Carvalho de Melo
Cc: Alexander Shishkin
Cc: Jir
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing perf_event_paranoid
sysctl, which now becomes
From: Tvrtko Ursulin
Now that the perf core supports per-PMU paranoid settings we need to
extend the tool to support that.
We handle the per-PMU setting in the platform support code where
applicable and also notify the user of the new facility on failures to
open the event.
Thanks to Jiri Olsa
would
progress it, but otherwise it felt like it's not going anywhere.
Regards,
Tvrtko
Thanks,
Alexey
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating per
Hi Ravi,
On 03/07/18 11:24, Ravi Bangoria wrote:
Hi Tvrtko,
@@ -199,7 +199,7 @@ static inline void perf_get_data_addr(struct pt_regs *regs,
u64 *addrp)
if (!(mmcra & MMCRA_SAMPLE_ENABLE) || sdar_valid)
*addrp = mfspr(SPRN_SDAR);
- if (perf_paranoid_kernel() &&
On 27/06/18 10:47, Alexey Budankov wrote:
On 27.06.2018 12:15, Tvrtko Ursulin wrote:
On 26/06/18 18:25, Alexey Budankov wrote:
Hi,
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different level of
access control
On 26/06/18 18:25, Alexey Budankov wrote:
Hi,
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs
On 26/06/18 18:24, Alexey Budankov wrote:
Hi,
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch first move all call
sites of perf_paranoid_kernel() to after the event has been created.
Signed-off-by: Tvrtko Ursulin
Cc
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch we need to start
passing in the PMU object pointer to perf_paranoid_* helpers.
This patch only changes the API across the code base without changing the
behaviour.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing perf_event_paranoid
sysctl, which now becomes
From: Tvrtko Ursulin
Explain behaviour of the new control knob along side the existing perf
event documentation.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Arnaldo Carvalho de Melo
Cc: Alexander Shishkin
Cc: Jir
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch first move all call
sites of perf_paranoid_kernel() to after the event has been created.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
C
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing perf_event_paranoid
sysctl, which now becomes
Hi,
On 22/05/2018 18:19, Andi Kleen wrote:
IMHO, it is unsafe for CBOX pmu but could IMC, UPI pmus be an exception here?
Because currently perf stat -I from IMC, UPI counters is only allowed when
system wide monitoring is permitted and this prevents joint perf record and
perf stat -I in
On 22/05/2018 13:32, Peter Zijlstra wrote:
On Tue, May 22, 2018 at 10:29:29AM +0100, Tvrtko Ursulin wrote:
On 22/05/18 10:05, Peter Zijlstra wrote:
On Mon, May 21, 2018 at 10:25:49AM +0100, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
For situations where sys
On 22/05/18 10:05, Peter Zijlstra wrote:
On Mon, May 21, 2018 at 10:25:49AM +0100, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating p
From: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
For situations where sysadmins might want to allow different level of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing perf_event_paranoid
On 13/03/2018 20:10, Arnd Bergmann wrote:
On Tue, Mar 13, 2018 at 6:46 PM, Tvrtko Ursulin
<tvrtko.ursu...@linux.intel.com> wrote:
On 13/03/2018 16:19, Arnd Bergmann wrote:
The conditional spinlock confuses gcc into thinking the 'flags' value
might contain uninitialized data:
drive
On 13/03/2018 16:19, Arnd Bergmann wrote:
The conditional spinlock confuses gcc into thinking the 'flags' value
might contain uninitialized data:
drivers/gpu/drm/i915/i915_pmu.c: In function '__i915_pmu_event_read':
arch/x86/include/asm/paravirt_types.h:573:3: error: 'flags' may be used
On 07/03/18 17:06, Tvrtko Ursulin wrote:
On 07/03/18 16:10, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
sgl_alloc_order explicitly takes a 64-bit length (unsigned long long)
but
then rejects it in overflow checking if greater than 4GiB allocation
On 08/03/18 15:56, Bart Van Assche wrote:
On Thu, 2018-03-08 at 07:59 +, Tvrtko Ursulin wrote:
However there is a different bug in my patch relating to the last entry
which can have shorter length from the rest. So get_order on the last
entry is incorrect - I have to store the deduced
On 07/03/18 18:38, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
I spotted a few small issues in the recent added SGL code so am sending some
patches to tidy this.
Can you send the fixes as a separate series and keep the rework / behavior
changes
for later
Hi,
On 07/03/18 18:30, James Bottomley wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <tvrtko.ursu...@intel.com>
Firstly, I don't see any justifiable benefit to churning this API, so
why bother? but secondly this:
Primarily because I wanted to
On 07/03/18 15:35, Andy Shevchenko wrote:
On Wed, Mar 7, 2018 at 2:47 PM, Tvrtko Ursulin <tursu...@ursulin.net> wrote:
+ sgl = kmalloc_array(nent, sizeof(struct scatterlist), (gfp & ~GFP_DMA));
The parens now become redundant.
True thanks! I am also not sure that re-usin
On 07/03/18 16:23, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
We can derive the order from sg->length and so do not need to pass it in
explicitly. Rename the function to sgl_free_n.
Using get_order() to compute the order looks fine to me but this pa
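Deriving the order from sg->length, as proposed above, can be sketched with a userspace stand-in for the kernel's get_order() (the function name and the 4 KiB page-size assumption are mine, for illustration):

```c
#include <stddef.h>

#define PAGE_SIZE_ANALOG 4096UL	/* assume 4 KiB pages for illustration */

/*
 * Userspace stand-in for the kernel's get_order(): the smallest order
 * such that PAGE_SIZE << order covers the given length. With this, a
 * free path like sgl_free_n would not need the caller to pass the order
 * explicitly; it could recover it from each sg->length.
 */
static unsigned int order_for_length(size_t length)
{
	unsigned int order = 0;
	size_t block = PAGE_SIZE_ANALOG;

	while (block < length) {
		block <<= 1;
		order++;
	}
	return order;
}
```

As pointed out elsewhere in the thread, a final entry shorter than the rest would make this recover a smaller order than was actually allocated, which is why the idea needed refinement.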
On 07/03/18 16:19, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
Save some kernel size by moving trivial wrappers to header as static
inline instead of exporting symbols for them.
Something that you may be unaware of is that the introduction of the sgl