From: Tomoki Sekiyama <tomoki.sekiy...@gmail.com>
On some architectures such as arm64, siano chip based TV-tuner
USB devices are not recognized correctly due to coherent memory
allocation failure with the following error:
[ 663.556135] usbcore: deregistering interface driver smsusb
[ 683.624809] smsusb:smsusb_probe
for DMA
memory allocation for USB devices in such architectures.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiy...@gmail.com>
---
 drivers/media/common/siano/smscoreapi.c | 34 +++--
 drivers/media/common/siano/smscoreapi.h |  2 ++
 drivers/media/usb/siano/smsusb.c        |  1 +
 3 files changed, 27
Currently sched_out_state() converts the prev_state u64 bitmask to a char
using the bitmask as an index, which may cause invalid memory access.
This fixes the issue by using the __ffs() returned value as an index.
Fixes: cdce9d738b91e ("perf sched: Add sched latency profiling")
- commit ... ("... Introduce TASK_NOLOAD and TASK_IDLE"):
  Introduces new state 'N'
- commit 7dc603c9028e ("sched/fair: Fix PELT integrity for new tasks"):
  Introduces new state 'n'
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama...@hitachi.com>
Cc: Jiri Olsa <jo...@kernel.org>
Cc: David Ahern
Cc: Namhyung Kim
Cc: Peter Zijlstra
Cc: Masami Hiramatsu
Currently sched_out_state() converts the prev_state u64 bitmask to a char
using the bitmask as an index, which may cause invalid memory access.
This fixes the issue by using the __ffs() returned value as an index.
Update the TASK_STATE_TO_CHAR_STR macro to the one from sched.h in the latest
kernel, where 'N' and 'n' are introduced and 'X' and 'Z' are swapped.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama...@hitachi.com>
Fixes: cdce9d738b91e ("perf sched: Add sched latency profiling")
Cc: Jiri Olsa <jo...@kernel.org>
Cc: David Ahern
Cc: Namhyung Kim
+return bit < sizeof(str) - 1 ? str[bit] : '?';
>
> You'd better use ARRAY_SIZE(str) instead of sizeof() for array here.
OK, will change this to use ARRAY_SIZE on the next update.
Thanks,
Tomoki Sekiyama
sched_out_state() converts the prev_state u64 bitmask to a char in
a wrong way, which may cause invalid memory access.
TASK_STATE_TO_CHAR_STR should also be fixed to match the current
kernel's sched.h.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama...@hitachi.com>
Cc: Jiri Olsa <jo...@kernel.org>
Cc: David Ahern
Cc: Namhyung Kim
Cc: Peter Zijlstra
14187                 | 499.705 ms | 39 | avg: 12.838 ms |
yes:14188             | 500.350 ms | 40 | avg: 12.506 ms |
gnome-terminal-:12722 |   0.285 ms |  3 | avg:  0.025 ms |
...
Thanks,
Tomoki Sekiyama
sched_out_state() converts the prev_state u64 bitmask to a char in
a wrong way, which may cause wrong results of 'perf sched latency'.
This patch fixes the conversion.
Also, preempted tasks must be considered to be in the
THREAD_WAIT_CPU state.
Signed-off-by: Tomoki Sekiyama
sched_out_state() converts the prev_state u64 bitmask to a char in
a wrong way, which may cause wrong results of 'perf sched latency'.
This patch fixes the conversion.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama...@hitachi.com>
Cc: Jiri Olsa <jo...@kernel.org>
Cc: David Ahern
Cc: Namhyung Kim
Cc: Peter Zijlstra
Cc: Masami Hiramatsu
'__init' to boot the guest successfully with 'console=hvc0'.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiy...@hds.com>
---
 drivers/tty/hvc/hvc_console.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
index 94f9e3a..0ff7fda 100644
Hi all,
Is this patchset going to be merged into 3.12?
Thanks,
--
Tomoki
On 9/23/13 16:14, "Tejun Heo" <t...@kernel.org> wrote:
>Hello,
>
>On Mon, Sep 23, 2013 at 08:11:55PM +0000, Tomoki Sekiyama wrote:
>> >Hmm... why aren't we just changing elevator_init() to grab sysfs_lock
>> >where necessary
Hi Tejun,
Thank you for the review.
On 9/22/13 13:04, "Tejun Heo" <t...@kernel.org> wrote:
>On Fri, Aug 30, 2013 at 06:47:07PM -0400, Tomoki Sekiyama wrote:
>> @@ -739,9 +739,17 @@ blk_init_allocated_queue(struct request_queue *q,
>>   request_fn_proc *rfn,
>> q->sg_reserved_size = INT_MAX
Ping: any comments for this series?
On 8/30/13 18:47, "Tomoki Sekiyama" <tomoki.sekiy...@hds.com> wrote:
>The soft lockup below happens at the boot time of the system using dm
>multipath and the udev rules to switch scheduler.
>
>[ 356.127001] BUG: soft lockup - CPU#3 stuck for 22s! [sh:483]
>
e lock is already taken by elv_iosched_store().
Signed-off-by: Tomoki Sekiyama <tomoki.sekiy...@hds.com>
---
 block/elevator.c | 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/block/elevator.c b/block/elevator.c
index 02d4390..6d765f7 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@
tion of q->sysfs_lock around elevator_init()
into blk_init_allocated_queue(), to provide mutual exclusion between
initialization of the q->scheduler and switching of the scheduler.
This should fix this bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=902012
Signed-off-by: Tomoki Sekiyama <tomoki.sekiy...@hds.com>
---
 block/blk-core.c | 10 +-
 block/elevator.c |  6 ++
 2 files changed, 15 insertions(+), 1 deletion(-)
On 8/29/13 16:29, "Vivek Goyal" <vgo...@redhat.com> wrote:
>On Mon, Aug 26, 2013 at 09:45:15AM -0400, Tomoki Sekiyama wrote:
>> The soft lockup below happens at the boot time of the system using dm
>> multipath and automated elevator switching udev rules.
>>
>> [ 356.127001] B
On 8/29/13 14:43, "Vivek Goyal" <vgo...@redhat.com> wrote:
>On Thu, Aug 29, 2013 at 02:33:10PM -0400, Vivek Goyal wrote:
>> On Mon, Aug 26, 2013 at 09:45:15AM -0400, Tomoki Sekiyama wrote:
>> > The soft lockup below happens at the boot time of the system using dm
>> > multipath
Hi Vivek,
Thanks for your comments.
On 8/29/13 14:33, "Vivek Goyal" <vgo...@redhat.com> wrote:
>On Mon, Aug 26, 2013 at 09:45:15AM -0400, Tomoki Sekiyama wrote:
>> The soft lockup below happens at the boot time of the system using dm
>> multipath and automated elevator switching udev rules
This patch adds acquisition of q->sysfs_lock in blk_init_allocated_queue().
This also adds the lock into elevator_change() to ensure locking from the
other path, as it is an exposed function (and queue_attr_store will use
__elevator_change() now, the non-locking version of elevator_change()).
Signed-off-by: Tomoki Sekiyama <tomoki.sekiy...@hds.com>
---
 block/blk-core.c |  6 +-
 block/elevator.c | 16 ++--
 2 files changed
On 8/1/13 17:04, "Jens Axboe" <ax...@kernel.dk> wrote:
>On 08/01/2013 02:28 PM, Tomoki Sekiyama wrote:
>> On 7/30/13 10:09 PM, Shaohua Li wrote:
>>> On Tue, Jul 30, 2013 at 03:30:33PM -0400, Tomoki Sekiyama wrote:
>>>> Hi,
>>>>
>>>> When some application launches several hundreds of processes that
>>>> issue only a few
On 7/30/13 10:09 PM, Shaohua Li wrote:
> On Tue, Jul 30, 2013 at 03:30:33PM -0400, Tomoki Sekiyama wrote:
>> Hi,
>>
>> When some application launches several hundreds of processes that issue
>> only a few small sync I/O requests, CFQ may cause heavy latencies
>> (10+ seconds at the worst case), although
, avg=110236.79, stdev=303351.72
Average latency is reduced by 80%, and max is also reduced by 56%.
Any comments are appreciated.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiy...@hds.com>
---
 block/cfq-iosched.c | 36 +++-
 1 file changed, 31 insertions(+), 5 deletions(-)
diff --git a/block
Hi Paul,
Thank you for your comments, and sorry for my late reply.
On 2012/09/21 2:34, Paul E. McKenney wrote:
> On Thu, Sep 06, 2012 at 08:27:40PM +0900, Tomoki Sekiyama wrote:
>> Initialize rcu related variables to avoid warnings about RCU usage while
>> slave CPUs are running specified functions
Commit-ID: fd0f5869724ff6195c6e7f12f8287c66a132e0ba
Gitweb: http://git.kernel.org/tip/fd0f5869724ff6195c6e7f12f8287c66a132e0ba
Author: Tomoki Sekiyama <tomoki.sekiyama...@hitachi.com>
AuthorDate: Wed, 26 Sep 2012 11:11:28 +0900
Committer: H. Peter Anvin <h...@linux.intel.com>
CommitDate: Thu, 27 Sep 2012 22:52:34 -0700
x86: Distinguish TLB
Hi Alex,
On 2012/09/25 11:57, Alex Shi wrote:
> On 09/24/2012 09:37 AM, Alex Shi wrote:
>
>> On 09/20/2012 04:50 PM, Tomoki Sekiyama wrote:
>>
>>> unsigned int irq_resched_count;
>>> unsigned int irq_call_count;
>>> + /* irq_tlb_count is double-counted in irq_call_count, so it must
ION_VECTOR").
This patch reverts TLB shootdowns entry in /proc/interrupts to count TLB
shootdowns separately from the other function call interrupts.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama...@hitachi.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: Alex Shi <alex@intel.com>
---
 arch/x86/include/asm/hardirq.h | 2 +-
Hi Jan,
On 2012/09/07 17:26, Jan Kiszka wrote:
> On 2012-09-06 13:27, Tomoki Sekiyama wrote:
>> This RFC patch series provides facility to dedicate CPUs to KVM guests
>> and enable the guests to handle interrupts from passed-through PCI devices
>> directly (without VM exit and relay by the host
Enable virtualization when slave CPUs are activated, and disable when
the CPUs are dying using slave CPU notifier call chain.
In x86, TSC kHz must also be initialized by tsc_khz_changed when the
new slave CPUs are activated.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc
for the guest is resumed on an online CPU.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/include/asm/kvm_host.h | 15 +++
arch/x86/kvm/mmu.c | 13 +
arch/x
ion, kvm_arch_vcpu_put_migrate is used to avoid using IPI to
clear loaded vmcs from the old CPU. Instead, this immediately clears
vmcs.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/include/asm/kvm_
are called with CPU_SLAVE_UP when a slave CPU
becomes active. When the slave CPU is stopped, callbacks are called with
CPU_SLAVE_DYING on slave CPUs, and with CPU_SLAVE_DEAD on online CPUs.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc
to manage whether CPU is slave.
In addition, `cpu_online_or_slave_mask' is also provided for convenience of
APIC handling, etc.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/Kconfig
Split the memory hotplug function from cpu_up() as cpu_memory_up(), which will
be used for assigning memory areas to off-lined cpus in a following patch
in this series.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter
-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/include/asm/kvm_host.h |5
arch/x86/kvm/mmu.c | 52 ---
arch/x86/kvm/mmu.h |4
this, if the guest issues EOI when there are no
in-service interrupts in the virtual APIC, physical EOI is issued.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/include/asm/kvm_host.h | 19 +
ar
. Then, the NMI handler will check the
requests and handle them.
This implementation has a scalability issue, and is just a PoC.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/i
after every virtual IRQ is handled.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/kvm/vmx.c | 69 ++--
1 files changed, 67 insertions(+), 2
for EXIT_REASON_PREEMPTION_TIMER,
which just goes back to VM execution soon.
These are currently intended only to be used to avoid entering the
guest on a slave CPU when vmx_prevent_run(vcpu, 1) is called.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H.
Initialize rcu related variables to avoid warnings about RCU usage while
slave CPUs are running specified functions. Also notify the RCU subsystem
before the slave CPU enters the idle state.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Avoid exiting from a guest on slave CPU even if HLT instruction is
executed. Since the slave CPU is dedicated to a vCPU, exit on HLT is
not required, and avoiding VM exit will improve the guest's performance.
This is a partial revert of
10166744b80a ("KVM: VMX: remove yield_on_hlt")
Cc:
to be routed to either online CPUs or slave CPUs.
In this patch, if online CPUs are contained in the specified affinity settings,
the affinity settings will be applied only to online CPUs. If every
specified CPU is slave, the IRQ will be routed to slave CPUs.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc
on slave CPUs.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/lapic.c|5 +
arch/x86/kvm/vmx.c | 19 ++
, and the
guest must use the same vector as the host.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
arch/x86/include/asm/apic.h |4 +++
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kernel/apic/apic.c
.
This patch adds kvm_arch_vcpu_prevent_run(), which causes VM exit right
after VM enter. The NMI handler uses this to ensure the execution of the
guest is cancelled after NMI.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter
CPU with the IRQ remapper of the IOMMU.
This is intended to be used to route interrupts directly to a KVM guest
running on slave CPUs, which do not cause VM EXIT on external
interrupts.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Add trace event "kvm_set_direct_interrupt" to trace enabling/disabling
direct interrupt delivery on slave CPUs. At the event, the guest rip and
whether the feature is enabled or not are logged.
Signed-off-by: Tomoki Sekiyama
Cc: Avi Kivity
Cc: Marcelo Tosatti
Cc: Thomas Gleixner
If the slave CPU receives an interrupt while running a guest, the current
implementation must once go back to online CPUs to handle the interrupt.
This behavior will be replaced by a later patch, which introduces a direct
interrupt handling mechanism in the guest.
Signed-off-by: Tomoki Sekiyama
Cc: Avi
-by: Tomoki Sekiyama <tomoki.sekiyama...@hitachi.com>
Cc: Avi Kivity <a...@redhat.com>
Cc: Marcelo Tosatti <mtosa...@redhat.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: "H. Peter Anvin" <h...@zytor.com>
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/vmx.c |7 +
arch/x86/kvm/x86.c | 58 ++
cpu/cpu3/online
- Launch qemu-kvm with the -no-kvm-pit option.
The offlined CPU is booted as a slave CPU and the guest runs on that CPU.
* To-do
- Enable slave CPUs to handle access fault
- Support AMD SVM
- Support non-Linux guests
---
Tomoki Sekiyama (21):
x86: request TLB flush to sl