either do that or
deal with wrap at PMU read time.
So thanks for dealing with it, some small comments below but overall it
is fine.
Signed-off-by: Thomas Gleixner
Cc: Tvrtko Ursulin
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: David Airlie
Cc: Daniel Vetter
On 10/12/2020 17:44, Thomas Gleixner wrote:
On Thu, Dec 10 2020 at 17:09, Tvrtko Ursulin wrote:
On 10/12/2020 16:35, Thomas Gleixner wrote:
I'll send out a series addressing irq_to_desc() (ab)use all over the
place shortly. i915 is in there...
Yep we don't need atomic, my bad. An
On 10/12/2020 16:35, Thomas Gleixner wrote:
On Thu, Dec 10 2020 at 10:45, Tvrtko Ursulin wrote:
On 10/12/2020 07:53, Joonas Lahtinen wrote:
I think later in the thread there was a suggestion to replace this with
simple counter increment in IRQ handler.
It was indeed unsafe until recent
On 10/12/2020 07:53, Joonas Lahtinen wrote:
+ Tvrtko and Chris for comments
Code seems to be added in:
commit 0cd4684d6ea9a4ffec33fc19de4dd667bb90d0a5
Author: Tvrtko Ursulin
Date: Tue Nov 21 18:18:50 2017 +
drm/i915/pmu: Add interrupt count metric
I think later in the thread
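The scheme being discussed reduces to roughly the following minimal sketch (hypothetical names, not the merged i915 patch): the interrupt handler is the sole writer of a plain counter, so no atomics are needed, and the PMU read side samples it with READ_ONCE, dealing with wrap of the narrower type when accumulating into a u64.

#include <linux/compiler.h>
#include <linux/interrupt.h>
#include <linux/types.h>

/* Hypothetical sketch: count interrupts in the driver instead of
 * summing kstat_irqs via irq_to_desc(). Single writer (the handler),
 * so a plain increment is enough; readers use READ_ONCE.
 */
struct example_pmu {
	unsigned long irq_count;	/* written by the IRQ handler only */
};

static irqreturn_t example_irq_handler(int irq, void *arg)
{
	struct example_pmu *pmu = arg;

	pmu->irq_count++;		/* plain increment, single writer */
	return IRQ_HANDLED;
}

static u64 example_pmu_read_interrupts(struct example_pmu *pmu)
{
	return READ_ONCE(pmu->irq_count);
}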
On 03/11/2020 02:53, Lu Baolu wrote:
On 11/2/20 7:52 PM, Tvrtko Ursulin wrote:
On 02/11/2020 02:00, Lu Baolu wrote:
Hi Tvrtko,
On 10/12/20 4:44 PM, Tvrtko Ursulin wrote:
On 29/09/2020 01:11, Lu Baolu wrote:
Hi Tvrtko,
On 9/28/20 5:44 PM, Tvrtko Ursulin wrote:
On 27/09/2020 07:34, Lu
On 02/11/2020 02:00, Lu Baolu wrote:
Hi Tvrtko,
On 10/12/20 4:44 PM, Tvrtko Ursulin wrote:
On 29/09/2020 01:11, Lu Baolu wrote:
Hi Tvrtko,
On 9/28/20 5:44 PM, Tvrtko Ursulin wrote:
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https
On 29/09/2020 01:11, Lu Baolu wrote:
Hi Tvrtko,
On 9/28/20 5:44 PM, Tvrtko Ursulin wrote:
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https://lore.kernel.org/linux-iommu/20200912032200.11489-1-baolu...@linux.intel.com/
This version
On 27/09/2020 07:34, Lu Baolu wrote:
Hi,
The previous post of this series could be found here.
https://lore.kernel.org/linux-iommu/20200912032200.11489-1-baolu...@linux.intel.com/
This version introduces a new patch [4/7] to fix an issue reported here.
https://lore.kernel.org/linux-iommu/51a
WC &&
+ !static_cpu_has(X86_FEATURE_PAT)))
+ ptr = NULL;
+ else if (i915_gem_object_has_struct_page(obj))
+ ptr = i915_gem_object_map_page(obj, type);
+ else
+ ptr = i915_gem_object_map_pfn(obj, type);
if (!ptr) {
err = -ENOMEM;
goto err_unpin;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
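The split under review boils down to the following hedged sketch (hypothetical helper names; the real ones are i915_gem_object_map_page/_pfn): objects backed by struct pages can be mapped with vmap(), while I/O memory without struct pages goes through vmap_pfn().

#include <linux/vmalloc.h>

/* Kernel memory with struct pages: plain vmap(). */
static void *example_map_pages(struct page **pages, unsigned int n_pages,
			       pgprot_t pgprot)
{
	return vmap(pages, n_pages, VM_MAP, pgprot);
}

/* I/O memory without struct pages: vmap_pfn(). */
static void *example_map_pfns(unsigned long *pfns, unsigned int n_pfns,
			      pgprot_t pgprot)
{
	return vmap_pfn(pfns, n_pfns, pgprot);
}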
return NULL;
}
void shmem_unpin_map(struct file *file, void *ptr)
{
mapping_clear_unevictable(file->f_mapping);
- __shmem_unpin_map(file, ptr, shmem_npte(file));
+ vfree(ptr);
}
static int __shmem_rw(struct file *file, loff_t off,
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
(!PageHighMem(page))
- return kmap(page);
+ return page_address(page);
}
mem = stack;
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
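The hunk above works because lowmem pages are permanently mapped; a minimal sketch of the resulting pattern:

#include <linux/highmem.h>

/* Only highmem pages need a transient kmap(); lowmem pages already
 * have a kernel virtual address, so page_address() suffices.
 */
static void *example_map_single_page(struct page *page)
{
	if (!PageHighMem(page))
		return page_address(page);
	return kmap(page);
}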
On 23/09/2020 15:44, Christoph Hellwig wrote:
On Wed, Sep 23, 2020 at 02:58:43PM +0100, Tvrtko Ursulin wrote:
Series did not get a CI run from our side because of a different base so I
don't know if you would like to have a run there? If so you would need to
rebase agains
On 23/09/2020 14:41, Christoph Hellwig wrote:
On Wed, Sep 23, 2020 at 10:52:33AM +0100, Tvrtko Ursulin wrote:
On 18/09/2020 17:37, Christoph Hellwig wrote:
i915_gem_object_map implements fairly low-level vmap functionality in
a driver. Split it into two helpers, one for remapping kernel
On 18/09/2020 17:37, Christoph Hellwig wrote:
i915_gem_object_map implements fairly low-level vmap functionality in
a driver. Split it into two helpers, one for remapping kernel memory
which can use vmap, and one for I/O memory that uses vmap_pfn.
The only practical difference is that alloc_v
On 22/09/2020 07:22, Christoph Hellwig wrote:
On Mon, Sep 21, 2020 at 08:11:57PM +0100, Matthew Wilcox wrote:
This is awkward. I'd like it if we had a vfree() variant which called
put_page() instead of __free_pages(). I'd like it even more if we
used release_pages() instead of our own loop t
On 08/09/2020 23:43, Tom Murphy wrote:
On Tue, 8 Sep 2020 at 16:56, Tvrtko Ursulin
wrote:
On 08/09/2020 16:44, Logan Gunthorpe wrote:
On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
b/drivers/gpu/drm/i915/i915
index b7b59328cb76
Hi,
On 27/08/2020 22:36, Logan Gunthorpe wrote:
On 2020-08-23 6:04 p.m., Tom Murphy wrote:
I have added a check for the sg_dma_len == 0 :
"""
} __sgt_iter(struct scatterlist *sgl, bool dma) {
struct sgt_iter s = { .sgp = sgl };
+ if (sgl && sg_dma_len(sgl) == 0)
+
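A sketch of the guard being added (shape assumed from the truncated hunk above): dma_map_sg() may coalesce entries so that fewer than nents segments are mapped, and a segment with sg_dma_len() == 0 marks the end of the mapped region.

#include <linux/scatterlist.h>

static struct scatterlist *example_next_dma_segment(struct scatterlist *sg)
{
	sg = sg_next(sg);
	if (sg && !sg_dma_len(sg))
		return NULL;	/* past the last mapped segment */
	return sg;
}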
On 11/06/2020 12:29, Daniel Vetter wrote:
> On Thu, Jun 11, 2020 at 12:36 PM Tvrtko Ursulin
> wrote:
>> On 10/06/2020 16:17, Daniel Vetter wrote:
>>> On Wed, Jun 10, 2020 at 4:22 PM Tvrtko Ursulin
>>> wrote:
>>>>
>>>>
>>>> On
On 10/06/2020 16:17, Daniel Vetter wrote:
> On Wed, Jun 10, 2020 at 4:22 PM Tvrtko Ursulin
> wrote:
>>
>>
>> On 04/06/2020 09:12, Daniel Vetter wrote:
>>> Design is similar to the lockdep annotations for workers, but with
>>> some twists:
>>>
On 04/06/2020 09:12, Daniel Vetter wrote:
Design is similar to the lockdep annotations for workers, but with
some twists:
- We use a read-lock for the execution/worker/completion side, so that
this explicit annotation can be more liberally sprinkled around.
With read locks lockdep isn't
On 28/04/2020 14:19, Marek Szyprowski wrote:
The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
number of the created entries in the DMA address space. However the
subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be
called with the original number of ent
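A minimal usage sketch of the rule quoted above (hypothetical driver code): the return value of dma_map_sg() is only for walking the mapped segments; unmap and sync still take the original nents.

#include <linux/dma-mapping.h>

static int example_dma_roundtrip(struct device *dev,
				 struct scatterlist *sgl, int nents)
{
	/* Number of coalesced DMA segments, possibly fewer than nents. */
	int dma_nents = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);

	if (!dma_nents)
		return -EIO;

	/* ... program the hardware using dma_nents segments ... */

	/* Unmap with the ORIGINAL entry count, not the returned one. */
	dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
	return 0;
}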
On 21/12/2018 13:26, Vincent Guittot wrote:
On Fri, 21 Dec 2018 at 12:33, Tvrtko Ursulin
wrote:
Hi,
On 21/12/2018 10:33, Vincent Guittot wrote:
Use the new pm runtime interface to get the accounted suspended time:
pm_runtime_suspended_time().
This new interface helps to simplify and
Hi,
On 21/12/2018 10:33, Vincent Guittot wrote:
Use the new pm runtime interface to get the accounted suspended time:
pm_runtime_suspended_time().
This new interface helps to simplify and clean up the code that computes
__I915_SAMPLE_RC6_ESTIMATED and to remove direct access to internals of
PM
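A one-line usage sketch (hypothetical caller): pm_runtime_suspended_time() returns the accumulated suspended time accounted by the PM core, so the driver no longer reaches into dev->power internals.

#include <linux/pm_runtime.h>

static u64 example_suspended_time_ns(struct device *kdev)
{
	/* Accumulated runtime-suspended time, as accounted by PM core. */
	return pm_runtime_suspended_time(kdev);
}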
On 28/09/2018 15:02, Thomas Gleixner wrote:
Tvrtko,
On Fri, 28 Sep 2018, Tvrtko Ursulin wrote:
On 28/09/2018 11:26, Thomas Gleixner wrote:
On Wed, 19 Sep 2018, Tvrtko Ursulin wrote:
It would be very helpful if you cc all involved people on the cover letter
instead of just cc'ing you
Hi,
On 28/09/2018 11:26, Thomas Gleixner wrote:
Tvrtko,
On Wed, 19 Sep 2018, Tvrtko Ursulin wrote:
It would be very helpful if you cc all involved people on the cover letter
instead of just cc'ing your own pile of email addresses. CC'ed now.
I accept it was my bad to miss addi
On 27/09/2018 21:15, Andi Kleen wrote:
+ mutex_lock(&pmus_lock);
+ list_for_each_entry(pmu, &pmus, entry)
+ pmu->perf_event_paranoid = sysctl_perf_event_paranoid;
+ mutex_unlock(&pmus_lock);
What happens to pmus that got added later?
There is a hunk a bit low
From: Tvrtko Ursulin
There is an inconsistency between data types used for order and number of
sg elements in the API.
Fix it so both are always unsigned int which, in the case of number of
elements, matches the underlying struct scatterlist.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
From: Tvrtko Ursulin
It is necessary to ensure types on both sides of the comparison are of the
same width. Otherwise the check overflows sooner than expected due to the left
hand side being an unsigned long length, and the right hand side an unsigned int
number of elements multiplied by element size
From: Tvrtko Ursulin
If a higher-order allocation fails, the existing abort and cleanup path
would consider all segments allocated so far as 0-order page allocations
and would therefore leak memory.
Fix this by cleaning up using sgl_free_n_order which allows the correct
page order to be passed
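A hedged sketch of the failure path being fixed (simplified, hypothetical names): if the i-th higher-order allocation fails, the segments filled so far must be freed at their true order; treating them as order-0 pages leaks the tail of each block.

#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static struct scatterlist *example_alloc_order(unsigned int nent,
					       unsigned int order, gfp_t gfp)
{
	struct scatterlist *sgl, *sg;
	unsigned int i;

	sgl = kmalloc_array(nent, sizeof(*sgl), gfp);
	if (!sgl)
		return NULL;
	sg_init_table(sgl, nent);

	for_each_sg(sgl, sg, nent, i) {
		struct page *page = alloc_pages(gfp, order);

		if (!page) {
			/* Free the i segments filled so far at the
			 * order they were really allocated with.
			 */
			sgl_free_n_order(sgl, i, order);
			return NULL;
		}
		sg_set_page(sg, page, PAGE_SIZE << order, 0);
	}
	return sgl;
}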
From: Tvrtko Ursulin
None of the callers need unsigned long long (they either pass in an int,
u32, or size_t) so it is not required to burden the 32-bit builds with an
overspecified length parameter.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
From: Tvrtko Ursulin
sg_init_table will clear the allocated block so requesting zeroed memory
from the allocator is redundant.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
---
lib/scatterlist.c | 3 +--
1 file changed, 1
From: Tvrtko Ursulin
Mostly the same fixes and cleanups I've sent earlier in the year, but with some
patches dropped and some split into smaller ones as per request.
Tvrtko Ursulin (6):
lib/scatterlist: Use natural long for sgl_alloc(_order) length
parameter
lib/scatterlist: Use consi
From: Tvrtko Ursulin
We should not use an explicit width u32 for elem_len but unsigned int to
match the underlying type in struct scatterlist.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
---
lib/scatterlist.c | 5 ++---
1
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch we need to start
passing in the PMU object pointer to perf_paranoid_* helpers.
This patch only changes the API across the code base without changing the
behaviour.
v2:
* Correct errors in core-book3s.c as reported by
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing
sysctl, which now becomes the
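A hypothetical sketch of how such a per-PMU knob can be consulted (names assumed; the hunk quoted earlier in the thread shows the sysctl write propagating a pmu->perf_event_paranoid field through the pmus list, a field which exists only in the proposed patch):

#include <linux/perf_event.h>

/* Hypothetical: fall back to the global sysctl when the PMU carries
 * no per-PMU paranoid value of its own.
 */
static inline int example_perf_paranoid_level(const struct pmu *pmu)
{
	return pmu ? pmu->perf_event_paranoid : sysctl_perf_event_paranoid;
}

static inline bool example_perf_paranoid_cpu(const struct pmu *pmu)
{
	return example_perf_paranoid_level(pmu) > 0;
}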
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch, first move all call
sites of perf_paranoid_kernel() to after the event has been created.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
C
From: Tvrtko Ursulin
Explain behaviour of the new control knob alongside the existing perf
event documentation.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Arnaldo Carvalho de Melo
Cc: Alexander Shishkin
Cc: Jir
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing
sysctl, which now becomes the
From: Tvrtko Ursulin
Now that the perf core supports per-PMU paranoid settings we need to
extend the tool to support that.
We handle the per-PMU setting in the platform support code where
applicable and also notify the user of the new facility on failures to
open the event.
Thanks to Jiri Olsa
aspects I would
progress it, but otherwise it felt like it's not going anywhere.
Regards,
Tvrtko
Thanks,
Alexey
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels of
access control for different PMUs, we sta
Hi Ravi,
On 03/07/18 11:24, Ravi Bangoria wrote:
Hi Tvrtko,
@@ -199,7 +199,7 @@ static inline void perf_get_data_addr(struct pt_regs *regs,
u64 *addrp)
if (!(mmcra & MMCRA_SAMPLE_ENABLE) || sdar_valid)
*addrp = mfspr(SPRN_SDAR);
- if (perf_paranoid_kernel() && !ca
On 27/06/18 10:47, Alexey Budankov wrote:
On 27.06.2018 12:15, Tvrtko Ursulin wrote:
On 26/06/18 18:25, Alexey Budankov wrote:
Hi,
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels of
access control for
On 26/06/18 18:25, Alexey Budankov wrote:
Hi,
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These
On 26/06/18 18:24, Alexey Budankov wrote:
Hi,
On 26.06.2018 18:36, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch, first move all call
sites of perf_paranoid_kernel() to after the event has been created.
Signed-off-by: Tvrtko Ursulin
Cc
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch we need to start
passing in the PMU object pointer to perf_paranoid_* helpers.
This patch only changes the API across the code base without changing the
behaviour.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing
sysctl, which now becomes the
From: Tvrtko Ursulin
Explain behaviour of the new control knob alongside the existing perf
event documentation.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Arnaldo Carvalho de Melo
Cc: Alexander Shishkin
Cc: Jir
From: Tvrtko Ursulin
To enable per-PMU access controls in a following patch, first move all call
sites of perf_paranoid_kernel() to after the event has been created.
Signed-off-by: Tvrtko Ursulin
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
C
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing
sysctl, which now becomes the
Hi,
On 22/05/2018 18:19, Andi Kleen wrote:
IMHO, it is unsafe for CBOX pmu but could IMC, UPI pmus be an exception here?
Because currently perf stat -I from IMC, UPI counters is only allowed when
system wide monitoring is permitted and this prevents joint perf record and
perf stat -I in cluste
On 22/05/2018 13:32, Peter Zijlstra wrote:
On Tue, May 22, 2018 at 10:29:29AM +0100, Tvrtko Ursulin wrote:
On 22/05/18 10:05, Peter Zijlstra wrote:
On Mon, May 21, 2018 at 10:25:49AM +0100, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow
On 22/05/18 10:05, Peter Zijlstra wrote:
On Mon, May 21, 2018 at 10:25:49AM +0100, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels
of access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels
of access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing
sysctl, which now becomes the
On 13/03/2018 20:10, Arnd Bergmann wrote:
On Tue, Mar 13, 2018 at 6:46 PM, Tvrtko Ursulin
wrote:
On 13/03/2018 16:19, Arnd Bergmann wrote:
The conditional spinlock confuses gcc into thinking the 'flags' value
might contain uninitialized data:
drivers/gpu/drm/i915/i915_pmu.c: I
On 13/03/2018 16:19, Arnd Bergmann wrote:
The conditional spinlock confuses gcc into thinking the 'flags' value
might contain uninitialized data:
drivers/gpu/drm/i915/i915_pmu.c: In function '__i915_pmu_event_read':
arch/x86/include/asm/paravirt_types.h:573:3: error: 'flags' may be used
uninit
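For reference, a sketch of a shape that avoids the warning (hypothetical code, not necessarily Arnd's fix): when the lock is taken conditionally, 'flags' flows across branches and gcc cannot prove it is initialised on the unlock path; keeping every use of 'flags' inside one branch sidesteps that.

#include <linux/spinlock.h>

static u64 example_event_read(spinlock_t *lock, const u64 *sample,
			      bool locked)
{
	u64 val;

	if (locked) {
		val = *sample;		/* caller already holds the lock */
	} else {
		unsigned long flags;	/* scoped to this branch only */

		spin_lock_irqsave(lock, flags);
		val = *sample;
		spin_unlock_irqrestore(lock, flags);
	}
	return val;
}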
On 07/03/18 17:06, Tvrtko Ursulin wrote:
On 07/03/18 16:10, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
sgl_alloc_order explicitly takes a 64-bit length (unsigned long long)
but
then rejects it in overflow checking if an allocation greater than 4GiB was
On 08/03/18 15:56, Bart Van Assche wrote:
On Thu, 2018-03-08 at 07:59 +, Tvrtko Ursulin wrote:
However there is a different bug in my patch relating to the last entry
which can have a shorter length than the rest. So get_order on the last
entry is incorrect - I have to store the deduced
On 07/03/18 18:38, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
I spotted a few small issues in the recently added SGL code so am sending some
patches to tidy this.
Can you send the fixes as a separate series and keep the rework / behavior
changes
for later
Hi,
On 07/03/18 18:30, James Bottomley wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Firstly, I don't see any justifiable benefit to churning this API, so
why bother? But secondly this:
Primarily because I wanted to extend sgl_alloc_order slight
On 07/03/18 15:35, Andy Shevchenko wrote:
On Wed, Mar 7, 2018 at 2:47 PM, Tvrtko Ursulin wrote:
+ sgl = kmalloc_array(nent, sizeof(struct scatterlist), (gfp & ~GFP_DMA));
The parens now become redundant.
True, thanks! I am also not sure that re-using the same gfp_t for
metadat
On 07/03/18 16:23, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
We can derive the order from sg->length and so do not need to pass it in
explicitly. Rename the function to sgl_free_n.
Using get_order() to compute the order looks fine to me but this pa
On 07/03/18 16:19, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
Save some kernel size by moving trivial wrappers to the header as static
inline instead of exporting symbols for them.
Something that you may be unaware of is that the introduction of the sgl
On 07/03/18 16:16, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 9884be50a2c0..e13a759c5c49 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -493,7 +493,7 @@ struct scatterlist *sgl_alloc_order
On 07/03/18 16:10, Bart Van Assche wrote:
On Wed, 2018-03-07 at 12:47 +, Tvrtko Ursulin wrote:
sgl_alloc_order explicitly takes a 64-bit length (unsigned long long) but
then rejects it in overflow checking if an allocation greater than 4GiB was
requested. This is a consequence of using
From: Tvrtko Ursulin
No code calls it so remove it.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
---
include/linux/scatterlist.h | 12 +---
1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/include/linux
From: Tvrtko Ursulin
We can derive the order from sg->length and so do not need to pass it in
explicitly. Rename the function to sgl_free_n.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
Cc: "Nicholas A. Bellinger"
From: Tvrtko Ursulin
I spotted a few small issues in the recently added SGL code so am sending some
patches to tidy this.
My motivation was looking at sgl_alloc_order for potential use from the i915
driver, with a small addition to support fall-back to smaller order allocation
if so was
From: Tvrtko Ursulin
Save some kernel size by moving trivial wrappers to the header as static
inline instead of exporting symbols for them.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
---
include/linux/scatterlist.h | 38
From: Tvrtko Ursulin
There are several issues in sgl_alloc_order and accompanying APIs:
1.
sgl_alloc_order explicitly takes a 64-bit length (unsigned long long) but
then rejects it in overflow checking if an allocation greater than 4GiB was
requested. This is a consequence of using unsigned int
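A sketch of a width-consistent version of that check (hypothetical helper): do the arithmetic in u64 first, then test whether the element count fits the unsigned int used by struct scatterlist.

#include <linux/limits.h>
#include <linux/mm.h>

static bool example_length_fits(u64 length, unsigned int order)
{
	u64 nent = length >> (PAGE_SHIFT + order);

	if (length & ((1ULL << (PAGE_SHIFT + order)) - 1))
		nent++;		/* partial trailing element */

	/* struct scatterlist entry counts are unsigned int. */
	return nent && nent <= UINT_MAX;
}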
From: Tvrtko Ursulin
If a higher-order allocation fails, the existing abort and cleanup path
would consider all segments allocated so far as 0-order page allocations
and would therefore leak memory.
Fix this by cleaning up using sgl_free_n_order which allows the correct
page order to be passed
From: Tvrtko Ursulin
sg_init_table will clear the allocated block so requesting zeroed memory
from the allocator is redundant.
Signed-off-by: Tvrtko Ursulin
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Cc: Jens Axboe
---
lib/scatterlist.c | 3 +--
1 file changed, 1
Hi,
On 19/01/2018 17:10, Tvrtko Ursulin wrote:
Hi,
On 19/01/2018 16:45, Peter Zijlstra wrote:
On Thu, Jan 18, 2018 at 06:40:07PM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels
of access control for different PMUs
On 19/02/18 04:12, Anshuman Khandual wrote:
On 02/18/2018 12:58 AM, kbuild test robot wrote:
Hi Anshuman,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on linus/master]
[also build test WARNING on v4.16-rc1 next-20180216]
[if your patch is applied to the wron
On 14/02/18 08:32, Johannes Thumshirn wrote:
On Wed, 2018-02-14 at 10:28 +0530, Anshuman Khandual wrote:
This replaces scatterlist->page_link LSB encodings with SG_CHAIN and
SG_EMARK definitions without any functional change.
Signed-off-by: Anshuman Khandual
---
include/linux/scatterlist.h
Hi,
On 13/02/18 14:39, Thomas Gleixner wrote:
On Tue, 13 Feb 2018, Tvrtko Ursulin wrote:
On 07/02/18 12:48, Tvrtko Ursulin wrote:
We are seeing failures to online the CPU0 on Apollo Lake in the form of:
<6>[ 126.508783] smpboot: CPU 0 is now offline
<6>[ 127.520746] smpb
Hi all,
On 07/02/18 12:48, Tvrtko Ursulin wrote:
Hi,
We are seeing failures to online the CPU0 on Apollo Lake in the form of:
<6>[ 126.508783] smpboot: CPU 0 is now offline
<6>[ 127.520746] smpboot: Booting Node 0 Processor 0 APIC 0x0
<3>[ 137.521036] smpboot: do_
Hi,
We are seeing failures to online the CPU0 on Apollo Lake in the form of:
<6>[ 126.508783] smpboot: CPU 0 is now offline
<6>[ 127.520746] smpboot: Booting Node 0 Processor 0 APIC 0x0
<3>[ 137.521036] smpboot: do_boot_cpu failed(-1) to wakeup CPU#0
I unfortunately cannot say with which
Hi,
On 19/01/2018 16:45, Peter Zijlstra wrote:
On Thu, Jan 18, 2018 at 06:40:07PM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels
of access control for different PMUs, we start creating per-PMU
perf_event_paranoid
From: Tvrtko Ursulin
For situations where sysadmins might want to allow different levels
of access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
These work in an equivalent fashion to the existing
sysctl, which now becomes the
Hi,
On 12/01/2018 17:36, Colin King wrote:
From: Colin Ian King
I believe the sizeof(attr) should in fact be sizeof(*attr); fortunately
the current code works because sizeof(struct attribute **) is the same
as sizeof(struct attribute *) for x86.
Thanks, kbuild also reported it and I just pu
if (!attr)
goto err_alloc;
Luckily it is the same size so there is no actual bug, but the fix is still valid.
Will merge it once it passes CI, thanks!
Reviewed-by: Tvrtko Ursulin
Regards,
Tvrtko
From: Tvrtko Ursulin
We implement the new pmu->is_privileged callback and add our own sysctl
as /proc/sys/dev/i915/pmu_stream_paranoid (defaulting to true), which
enables system administrators to override the global
/proc/sys/kernel/perf_event_paranoid setting for the i915 PMU only.
Signed-off
From: Tvrtko Ursulin
To allow system administrators finer-grained control over security
settings, we add an optional pmu->is_privileged(pmu, event) callback
which is consulted when unprivileged system-wide uncore event collection
is disabled.
Signed-off-by: Tvrtko Ursulin
Cc: Peter Zijls
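A hypothetical sketch of how the core might consult such a callback (shape assumed from the description above; pmu->is_privileged is not a mainline field):

#include <linux/perf_event.h>

static bool example_event_needs_privilege(struct perf_event *event)
{
	struct pmu *pmu = event->pmu;

	/* Hypothetical callback proposed above. */
	if (pmu->is_privileged)
		return pmu->is_privileged(pmu, event);

	/* Otherwise fall back to the global paranoid policy. */
	return sysctl_perf_event_paranoid > 0;
}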
the
operation which set them has completed.
Fixes: 96abb968549c ("smp/hotplug: Allow external multi-instance rollback")
Reported-by: Tvrtko Ursulin
Signed-off-by: Thomas Gleixner
---
kernel/cpu.c |5 +
1 file changed, 5 insertions(+)
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -
Hi all,
Detected by ubsan:
UBSAN: Undefined behaviour in drivers/iommu/dmar.c:1345:3
shift exponent 64 is too large for 32-bit type 'int'
CPU: 2 PID: 1167 Comm: perf_pmu Not tainted 4.14.0-rc5+ #532
Hardware name: LENOVO 80MX/Lenovo E31-80, BIOS DCCN34WW(V2.03) 12/01/2015
Call Trace:
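The generic shape of the fix for this class of report (a sketch, not the actual dmar.c change): shifting by a count equal to or exceeding the width of the type is undefined, so widen the operand and guard the exponent.

#include <linux/types.h>

static u64 example_bits_to_mask(unsigned int bits)
{
	if (bits >= 64)
		return ~0ULL;		/* avoid the undefined shift */
	return (1ULL << bits) - 1;	/* 64-bit operand, not int */
}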
Hi all,
Triggered with ubsan:
UBSAN: Undefined behaviour in drivers/acpi/sysfs.c:845:33
shift exponent 64 is too large for 64-bit type 'long long unsigned int'
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc5+ #532
Hardware name: LENOVO 80MX/Lenovo E31-80, BIOS DCCN34WW(V2.03) 12/01/201
Hi all,
I think I've found a bug in the CPU hotplug handling when multi-instance
states are used. That is in the 4.14.0-rc5 kernel.
I have not attempted to get to the bottom of the issue to propose an actual
fix, since the logic there looks somewhat complex, but thought to first
seek opinion of
From: Tvrtko Ursulin
It looks like all completions created by flush_workqueue map
into a single lock class, which creates lockdep false positives.
Example of a false positive:
[ 20.805315] ==
[ 20.805316] WARNING: possible circular
On 06/10/2017 15:23, Daniel Vetter wrote:
On Fri, Oct 06, 2017 at 12:34:02PM +0100, Tvrtko Ursulin wrote:
On 06/10/2017 10:06, Daniel Vetter wrote:
4.14-rc1 gained the fancy new cross-release support in lockdep, which
seems to have uncovered a few more rules about what is allowed and
isn
003 R14: c0186473 R15: 7fff188dc2ac
? __this_cpu_preempt_check+0x13/0x20
v2: Set ret correctly when we raced with another thread.
v3: Use Chris' diff. Attach the right lockdep splat.
Cc: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Joonas Lahtinen
Cc: Peter Zijlstra
Cc: Thom
From: Tvrtko Ursulin
Exercise the new __sg_alloc_table_from_pages API (and through
it also the old sg_alloc_table_from_pages), checking that the
created table has the expected number of segments depending on
the sequence of input pages and other conditions.
v2: Move to data driven for
On 06/09/2017 11:48, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2017-09-05 11:24:03)
From: Tvrtko Ursulin
Exercise the new __sg_alloc_table_from_pages API (and through
it also the old sg_alloc_table_from_pages), checking that the
created table has the expected number of segments depending
From: Tvrtko Ursulin
Exercise the new __sg_alloc_table_from_pages API (and through
it also the old sg_alloc_table_from_pages), checking that the
created table has the expected number of segments depending on
the sequence of input pages and other conditions.
v2: Move to data driven for
On 03/08/2017 00:01, Andrew Morton wrote:
On Wed, 2 Aug 2017 14:06:39 +0100 Tvrtko Ursulin
wrote:
Hi Andrew,
We have a couple of small lib/scatterlist.c tidies here, plus exporting
the new API which allows drivers to control the maximum coalesced entry
as created by
From: Tvrtko Ursulin
With the addition of __sg_alloc_table_from_pages we can control
the maximum coalescing size and eliminate a separate path for
allocating backing store here.
Similar to 871dfbd67d4e ("drm/i915: Allow compaction upto
SWIOTLB max segment size") this enables more
From: Tvrtko Ursulin
Drivers like i915 benefit from being able to control the maximum
size of the coalesced sg segment while building the scatter-gather
list.
Introduce and export the __sg_alloc_table_from_pages function
which will allow it that control.
v2: Reorder parameters. (Chris Wilson
From: Tvrtko Ursulin
Since the scatterlist length field is an unsigned int, make
sure that sg_alloc_table_from_pages does not overflow it while
coalescing pages to a single entry.
v2: Drop reference to future use. Use UINT_MAX.
v3: max_segment must be page aligned.
v4: Do not rely on compiler
,
Tvrtko
On 27/07/2017 10:05, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Drivers like i915 benefit from being able to control the maximum
size of the coalesced sg segment while building the scatter-gather
list.
Introduce and export the __sg_alloc_table_from_pages function
which will allow it that
From: Tvrtko Ursulin
Scatterlist entries have an unsigned int for the offset so
correct the sg_alloc_table_from_pages function accordingly.
Since these are offsets within a page, unsigned int is
wide enough.
Also converts callers which were using unsigned long locally
with the lower_32_bits
From: Tvrtko Ursulin
Drivers like i915 benefit from being able to control the maximum
size of the coalesced sg segment while building the scatter-gather
list.
Introduce and export the __sg_alloc_table_from_pages function
which will allow it that control.
v2: Reorder parameters. (Chris Wilson
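A hedged usage sketch of the API as posted in this series (the 2017-era signature is assumed; later kernels changed it): build an sg table from a page array while capping each coalesced segment at max_segment bytes, which must be page aligned per the v3 note above.

#include <linux/gfp.h>
#include <linux/scatterlist.h>

static int example_build_sgt(struct sg_table *st, struct page **pages,
			     unsigned int n_pages, unsigned int max_segment)
{
	/* max_segment caps how far adjacent pages may be coalesced
	 * into a single segment; it must be page aligned.
	 */
	return __sg_alloc_table_from_pages(st, pages, n_pages, 0,
					   (unsigned long)n_pages << PAGE_SHIFT,
					   max_segment, GFP_KERNEL);
}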