On Wed, Dec 14, 2016 at 10:51 PM, Paolo Bonzini wrote:
>
>> I am looking at the possibility to add a new QEMU configuration option
>> to make TCG optional (in qemu-system-*). What I am exploring is a way
>> to exclude any of the TCG code not needed by KVM from the QEMU binary.
>> There has been a
Hi all,
I am looking at the possibility of adding a new QEMU configuration option
to make TCG optional (in qemu-system-*). What I am exploring is a way
to exclude from the QEMU binary any TCG code that is not needed by KVM.
There has been a previous attempt at this by Paolo Bonzini,
namely https
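For illustration only, here is a minimal sketch of the kind of compile-time guard such an option could introduce; the CONFIG_TCG symbol and the stub below are assumptions made for the example, not the actual patch:

/* Hypothetical sketch: a --disable-tcg build could stub out the TCG entry
 * points so qemu-system-* links without the translator. All names here are
 * illustrative, not taken from a real patch. */
#include <stdio.h>
#include <stdlib.h>

#ifdef CONFIG_TCG
static void accel_init(void)
{
    printf("TCG built in: initializing the translator\n");
    /* a real build would call the TCG init code here */
}
#else
static void accel_init(void)
{
    fprintf(stderr, "this binary was built without TCG; use -accel kvm\n");
    exit(1);
}
#endif

int main(void)
{
    accel_init();
    return 0;
}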
Hi Sergey,
On Mon, Jun 20, 2016 at 12:28 AM, Sergey Fedorov
wrote:
>
> From: Sergey Fedorov
>
> This patch is based on the ideas found in work of KONRAD Frederic [1],
> Alex Bennée [2], and Alvise Rigo [3].
>
> This mechanism allows to perform an operation safely i
On Mon, Jun 20, 2016 at 4:12 PM, Alex Bennée wrote:
>
> alvise rigo writes:
>
> > Hi Alex,
> >
> > I'm looking into the worries that Sergey issued in his review of the
> > last LL/SC series. The target is to reduce the TLB flushes by using an
> >
Hi Alex,
I'm looking into the concerns that Sergey raised in his review of the
last LL/SC series. The goal is to reduce the number of TLB flushes by
using an exclusive history of dynamic length. I don't have anything
ready yet though.
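To make the idea concrete, here is a standalone sketch of such an exclusive history as a circular buffer; the length, the names and the eviction policy are placeholders for illustration, not the final design:

/* Illustrative exclusive-access history: a small circular buffer of recently
 * used exclusive addresses. Only an address evicted from the history would
 * need its page "un-protected" (TLB flushed), which bounds the flushes. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define EXCL_HISTORY_LEN 8   /* placeholder; "dynamic length" in the real idea */
#define EXCL_ADDR_NONE   ((uint64_t)-1)

typedef struct {
    uint64_t entries[EXCL_HISTORY_LEN];
    unsigned head;           /* slot that will be overwritten next */
} ExclHistory;

static void excl_history_init(ExclHistory *h)
{
    h->head = 0;
    for (int i = 0; i < EXCL_HISTORY_LEN; i++) {
        h->entries[i] = EXCL_ADDR_NONE;
    }
}

/* Record a new exclusive address; return the evicted one (if any), i.e. the
 * address whose page protection could now be dropped. */
static uint64_t excl_history_put(ExclHistory *h, uint64_t addr)
{
    uint64_t evicted = h->entries[h->head];
    h->entries[h->head] = addr;
    h->head = (h->head + 1) % EXCL_HISTORY_LEN;
    return evicted;
}

int main(void)
{
    ExclHistory hist;
    excl_history_init(&hist);
    for (uint64_t a = 0x1000; a <= 0xa000; a += 0x1000) {
        uint64_t evicted = excl_history_put(&hist, a);
        if (evicted != EXCL_ADDR_NONE) {
            printf("0x%" PRIx64 " evicted -> flush its page\n", evicted);
        }
    }
    return 0;
}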
Best regards,
alvise
On Mon, Jun 20, 2016 at 1:57 PM, Alex Bennée wro
On Wed, Jun 15, 2016 at 4:51 PM, Alex Bennée wrote:
>
> alvise rigo writes:
>
>> Hi Sergey,
>>
>> Nice review of the implementations we have so far.
>> Just few comments below.
>>
>> On Wed, Jun 15, 2016 at 2:59 PM, Sergey Fedorov wrote:
>>
Hi Sergey,
Nice review of the implementations we have so far.
Just a few comments below.
On Wed, Jun 15, 2016 at 2:59 PM, Sergey Fedorov wrote:
> On 10/06/16 00:51, Sergey Fedorov wrote:
>> For certain kinds of tasks we might need a quiescent state to perform an
>> operation safely. Quiescent stat
, 2016 at 2:00 PM, Alex Bennée wrote:
>
> alvise rigo writes:
>
>> On Fri, Jun 10, 2016 at 5:21 PM, Sergey Fedorov wrote:
>>> On 26/05/16 19:35, Alvise Rigo wrote:
>>>> Using tcg_exclusive_{lock,unlock}(), make the emulation of
>>>> LoadLink/StoreCondit
This would require filling the whole history again, which I find very
unlikely. In any case, this has to be documented.
Thank you,
alvise
On Fri, Jun 10, 2016 at 6:00 PM, Sergey Fedorov wrote:
> On 10/06/16 18:53, alvise rigo wrote:
>> On Fri, Jun 10, 2016 at 5:21 PM, Sergey Fedor
On Fri, Jun 10, 2016 at 5:21 PM, Sergey Fedorov wrote:
> On 26/05/16 19:35, Alvise Rigo wrote:
>> Using tcg_exclusive_{lock,unlock}(), make the emulation of
>> LoadLink/StoreConditional thread safe.
>>
>> During an LL access, this lock protects the load access i
I might have broken something while rebasing on top of
enable-mttcg-for-armv7-v1.
I will sort this problem out.
Thank you,
alvise
On Fri, Jun 10, 2016 at 5:21 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Hi,
>>
>> This series ports the latest iteration of t
Hi Sergey,
Thank you for this precise summary.
On Thu, Jun 9, 2016 at 1:42 PM, Sergey Fedorov wrote:
> Hi,
>
> On 19/04/16 16:39, Alvise Rigo wrote:
>> This patch series provides an infrastructure for atomic instruction
>> implementation in QEMU, thus offering a
alvise
On Wed, Jun 8, 2016 at 5:20 PM, Alex Bennée wrote:
>
> Sergey Fedorov writes:
>
>> On 08/06/16 17:10, alvise rigo wrote:
>>> Using run_on_cpu() we might deadlock QEMU if other vCPUs are waiting
>>> for the current vCPU. We need to exit from the vCPU loop
Using run_on_cpu() we might deadlock QEMU if other vCPUs are waiting
for the current vCPU. We need to exit from the vCPU loop in order to
avoid this.
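As a standalone illustration (plain pthreads, not QEMU code) of the queue-and-kick pattern: the requester only queues the work item and asks the target vCPU to leave its execution loop, and never blocks waiting for it, which is what avoids the deadlock:

/* Standalone sketch of queuing work for a vCPU thread asynchronously: the
 * caller never blocks waiting for the target, which is what can deadlock if
 * the target is itself waiting on the caller. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool pending_work;
static bool exit_request;       /* stands in for cpu_exit() kicking the vCPU */

static void queue_work_async(void)
{
    pthread_mutex_lock(&lock);
    pending_work = true;
    exit_request = true;        /* force the vCPU out of its execution loop */
    pthread_mutex_unlock(&lock);
    /* crucially: no waiting here */
}

static void *vcpu_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < 50; i++) {
        usleep(1000);           /* "execute guest code" until kicked */
        pthread_mutex_lock(&lock);
        if (exit_request) {
            exit_request = false;
            if (pending_work) {
                pending_work = false;
                printf("vCPU: processed queued work\n");
            }
        }
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, vcpu_thread, NULL);
    queue_work_async();
    pthread_join(tid, NULL);
    return 0;
}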
Regards,
alvise
On Wed, Jun 8, 2016 at 3:54 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Introduce a new function that all
ut are pretty much confined to the
locking/unlocking of a spinlock/mutex.
This made me wonder: how can linux-user work properly with
upstream TCG, for instance, in an absurd configuration like target-arm
on an ARM host?
alvise
On Wed, Jun 8, 2016 at 11:21 AM, Alex Bennée wrote:
>
&
Hi Pranith,
Thank you for the hint; I will keep it in mind for the next version.
Regards,
alvise
On Tue, May 31, 2016 at 5:03 PM, Pranith Kumar wrote:
> Hi Alvise,
>
> On Thu, May 26, 2016 at 12:35 PM, Alvise Rigo
> wrote:
>> Add tcg_exclusive_{lock,unlock}() functions tha
Add a simple helper function to flush the TLB at the indexes specified
by a bitmap. The function will be more useful in the following patches,
when it will be possible to query tlb_flush_by_mmuidx() flushes to other VCPUs.
Signed-off-by: Alvise Rigo
---
cputlb.c | 30 +++---
1 file
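A minimal standalone sketch of the flush-by-bitmap helper described above, using mocked TLB tables; the constants and names are illustrative, not the actual cputlb.c code:

/* Illustrative flush-by-bitmap helper: walk the set bits of a bitmap and
 * clear the corresponding per-mmu-index TLB table. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NB_MMU_MODES 7
#define TLB_SIZE     256

typedef struct { uint64_t addr_write; } MockTLBEntry;
static MockTLBEntry tlb_table[NB_MMU_MODES][TLB_SIZE];

static void tlb_tables_flush_bitmap(unsigned long bitmap)
{
    for (int mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
        if (bitmap & (1UL << mmu_idx)) {
            /* invalidate every entry of this mmu index */
            memset(tlb_table[mmu_idx], 0xff, sizeof(tlb_table[mmu_idx]));
            printf("flushed TLB for mmu_idx %d\n", mmu_idx);
        }
    }
}

int main(void)
{
    /* flush only indexes 0 and 2 */
    tlb_tables_flush_bitmap((1UL << 0) | (1UL << 2));
    return 0;
}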
If a VCPU returns EXCP_HALTED from guest code execution and in the
meantime receives a work item, it will go to sleep without processing
the job.
Before sleeping, check whether any work has been added.
Signed-off-by: Alvise Rigo
---
cpus.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion
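A tiny sketch of the intended ordering only (not the actual one-line cpus.c change): drain queued work before the halted vCPU goes to sleep.

/* Sketch of the ordering: check the work queue *before* sleeping, otherwise
 * a job queued in the meantime is not processed until the next wakeup. */
#include <stdbool.h>
#include <stdio.h>

static bool queued_work = true;   /* pretend a job arrived just before halting */

static void process_queued_work(void)
{
    queued_work = false;
    printf("vCPU: processed queued work item\n");
}

static void vcpu_halt(void)
{
    if (queued_work) {            /* the added check */
        process_queued_work();
    }
    printf("vCPU: going to sleep (EXCP_HALTED)\n");
}

int main(void)
{
    vcpu_halt();
    return 0;
}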
Make tlb_flush_page_all() safe by waiting for the queried flushes to be
actually completed, using async_wait_run_on_cpu().
Signed-off-by: Alvise Rigo
---
cputlb.c| 15 ++-
include/exec/exec-all.h | 4 ++--
target-arm/helper.c | 4 ++--
3 files changed, 14 insertions
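For illustration, a standalone sketch of the query-and-wait idea behind async_wait_run_on_cpu(): the requester queues flush work on the other vCPUs and then waits until every queued item has been acknowledged. The counters and names are made up for the example:

/* Standalone sketch (plain pthreads + C11 atomics, not QEMU code). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define N_VCPUS 4

static atomic_int pending_flushes;

static void *vcpu_thread(void *arg)
{
    int id = (int)(long)arg;
    usleep(1000 * (id + 1));                 /* pretend to reach a safe point */
    printf("vCPU %d: TLB page flush done\n", id);
    atomic_fetch_sub(&pending_flushes, 1);   /* acknowledge completion */
    return NULL;
}

int main(void)
{
    pthread_t tids[N_VCPUS];

    atomic_store(&pending_flushes, N_VCPUS);
    for (long i = 0; i < N_VCPUS; i++) {     /* "query" the flush to each vCPU */
        pthread_create(&tids[i], NULL, vcpu_thread, (void *)i);
    }
    while (atomic_load(&pending_flushes) > 0) {
        usleep(100);                         /* requester waits for completion */
    }
    printf("all queried flushes completed\n");
    for (int i = 0; i < N_VCPUS; i++) {
        pthread_join(tids[i], NULL);
    }
    return 0;
}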
, we can always safely get the CPUState of the
current VCPU without relying on current_cpu. This, however, slightly
complicates the function prototype by adding an argument pointing to the
current VCPU's CPUState.
Signed-off-by: Alvise Rigo
---
cputlb.c
also cope with the new multi-threaded
execution.
Signed-off-by: Alvise Rigo
---
softmmu_llsc_template.h | 11 +--
softmmu_template.h | 6 ++
target-arm/op_helper.c | 6 ++
3 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/softmmu_llsc_template.h b/softmmu_
case process pending work items.
Signed-off-by: Alvise Rigo
---
cpus.c| 44 ++--
include/qom/cpu.h | 31 +++
2 files changed, 73 insertions(+), 2 deletions(-)
diff --git a/cpus.c b/cpus.c
index b9ec903..7bc96e2
Similarly to the previous commit, make tlb_flush_page_by_mmuidx query the
flushes when targeting different VCPUs.
Signed-off-by: Alvise Rigo
---
cputlb.c| 90 ++---
include/exec/exec-all.h | 5 +--
target-arm/helper.c | 35
d-off-by: Alvise Rigo
---
cputlb.c| 28 +++-
softmmu_llsc_template.h | 2 +-
2 files changed, 24 insertions(+), 6 deletions(-)
diff --git a/cputlb.c b/cputlb.c
index 1586b64..55f7447 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -81,12 +81,24 @@ static
.
Signed-off-by: Alvise Rigo
---
target-arm/translate-a64.c | 2 ++
target-arm/translate.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
index 376cb1c..2a14c14 100644
--- a/target-arm/translate-a64.c
+++ b/target-arm/translate-a64.c
Add tcg_exclusive_{lock,unlock}() functions that will be used for making
the emulation of LL and SC instructions thread safe.
Signed-off-by: Alvise Rigo
---
cpus.c| 2 ++
exec.c| 18 ++
include/qom/cpu.h | 5 +
3 files changed, 25 insertions
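A standalone sketch of what the lock is meant to protect: one global lock serializing the LL/SC emulation paths across vCPU threads. QEMU would use its own primitives (qemu_mutex/qemu_spin); the helpers and the single protected address below are simplifications for illustration:

/* Illustrative only: not the actual exec.c implementation. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t tcg_excl_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t excl_protected_addr = UINT64_MAX;   /* no active LoadLink */

static void tcg_exclusive_lock(void)   { pthread_mutex_lock(&tcg_excl_lock); }
static void tcg_exclusive_unlock(void) { pthread_mutex_unlock(&tcg_excl_lock); }

/* LoadLink: record the protected address under the lock. */
static uint64_t emulated_ldrex(const uint64_t *mem, uint64_t addr)
{
    tcg_exclusive_lock();
    excl_protected_addr = addr;
    uint64_t val = *mem;
    tcg_exclusive_unlock();
    return val;
}

/* StoreConditional: succeed only if the protection is still in place. */
static int emulated_strex(uint64_t *mem, uint64_t addr, uint64_t val)
{
    int fail = 1;
    tcg_exclusive_lock();
    if (excl_protected_addr == addr) {
        *mem = val;
        excl_protected_addr = UINT64_MAX;
        fail = 0;
    }
    tcg_exclusive_unlock();
    return fail;
}

int main(void)
{
    uint64_t cell = 1;
    uint64_t v = emulated_ldrex(&cell, 0x1000);
    printf("strex %s\n", emulated_strex(&cell, 0x1000, v + 1) ? "failed" : "succeeded");
    printf("cell = %llu\n", (unsigned long long)cell);
    return 0;
}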
h-for-atomic-v8-mttcg".
Alvise Rigo (10):
exec: Introduce tcg_exclusive_{lock,unlock}()
softmmu_llsc_template.h: Move to multi-threading
cpus: Introduce async_wait_run_on_cpu()
cputlb: Introduce tlb_flush_other()
target-arm: End TB after ldrex instruction
cputlb: Add tlb_tables_fl
Hi Alex,
I finally solved the issue I had; the branch is working well as far as I
can tell.
The work I will share, in addition to making the LL/SC work mttcg-aware,
extends the various TLB flush calls with the query-based mechanism: the
requesting CPU queries the flushes to the target CPUs and wa
Not from my side.
Hope to have some news by the end of the week.
Regards,
alvise
On Mon, May 9, 2016 at 1:56 PM, Alex Bennée wrote:
>
> Hi,
>
> Do we have anything we want to discuss today?
>
> --
> Alex Bennée
>
Hi Alex,
On Mon, Apr 25, 2016 at 11:53 AM, Alex Bennée
wrote:
> Hi,
>
> We are due to have a sync-up call today but I don't think I'll be able
> to make it thanks to a very rough voice courtesy of my
> petri-dishes/children. However since the last call:
>
> * Posted final parts of the MTTCG pat
Use the new LL/SC runtime helpers to handle the aarch64 atomic instructions
in softmmu_llsc_template.h.
The STXP emulation required a dedicated helper to handle the paired
doubleword case.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
target-arm
check.
In addition, add a simple helper function to emulate the CLREX instruction.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
target-arm/cpu.h | 3 +
target-arm/helper.h| 2 +
target-arm/machine.c | 7 ++
target-arm/op_helper.c
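A minimal illustration of what the CLREX helper amounts to, with a mocked CPU state: the exclusive reservation recorded at LoadLink time is simply dropped. The field names are made up for the example, not the series' actual structures:

/* Illustrative CLREX helper: drop any exclusive reservation held by the vCPU. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t excl_begin;
    uint64_t excl_end;
} MockCPUState;

#define EXCLUSIVE_RESET_ADDR UINT64_MAX

static void helper_clrex(MockCPUState *cpu)
{
    cpu->excl_begin = EXCLUSIVE_RESET_ADDR;
    cpu->excl_end = EXCLUSIVE_RESET_ADDR;
}

int main(void)
{
    MockCPUState cpu = { .excl_begin = 0x1000, .excl_end = 0x1008 };
    helper_clrex(&cpu);
    printf("reservation cleared: %d\n", cpu.excl_begin == EXCLUSIVE_RESET_ADDR);
    return 0;
}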
for more
details).
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
target-arm/cpu64.c | 8
1 file changed, 8 insertions(+)
diff --git a/target-arm/cpu64.c b/target-arm/cpu64.c
index cc177bb..1d45e66 100644
--- a/target-arm/cpu64.c
+++ b/target
y: Claudio Fontana
Signed-off-by: Alvise Rigo
---
Makefile.target | 2 +-
include/exec/helper-gen.h | 3 ++
include/exec/helper-proto.h | 1 +
include/exec/helper-tcg.h | 3 ++
tcg-llsc-helper.c | 104
tcg-lls
the other CPUs to invalidate the
exclusive range in case of collision: basically, it serves the same
purpose as TLB_EXCL for the TLBEntries referring to exclusive memory.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c| 7 +-
Fontana
Signed-off-by: Alvise Rigo
---
include/qom/cpu.h | 20
qom/cpu.c | 27 +++
2 files changed, 47 insertions(+)
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 2e5229d..21f10eb 100644
--- a/include/qom/cpu.h
+++ b/include/qom
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c | 36 ++
softmmu_template.h | 65 +++---
2 files changed, 89 insertions(+), 12 deletions(-)
diff --git a/cputlb.c b/cputlb.c
index 02b0d14
' can be a store made by *any* vCPU
(although, some implementations allow stores made by the CPU that issued
the LoadLink).
For the time being we do not support exclusive accesses to MMIO memory.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c| 21 +
exec.c | 19 +++
include/qom/cpu.h | 8
softmmu_llsc_template.h | 1 +
vl.c| 3 +++
5 files change
anges in probe_write and
everything else is identical.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
CC: Alvise Rigo
Signed-off-by: Alex Bennée
[Alex Bennée: define smmu_helper and unified logic between be/le]
Signed-off-by: Alvise Rigo
---
softmmu_templat
starts, the whole memory is set to dirty.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
exec.c | 2 +-
include/exec/memory.h | 3 ++-
include/exec/ram_addr.h | 31 +++
3 files changed, 34 insertions
: Alvise Rigo
---
softmmu_template.h | 49 +++--
1 file changed, 27 insertions(+), 22 deletions(-)
diff --git a/softmmu_template.h b/softmmu_template.h
index 3eb54f8..9185486 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -410,6 +410,29
Bennée
Signed-off-by: Alvise Rigo
---
softmmu_template.h | 80 +++---
1 file changed, 40 insertions(+), 40 deletions(-)
diff --git a/softmmu_template.h b/softmmu_template.h
index 9185486..ea6a0fb 100644
--- a/softmmu_template.h
+++ b
Add a new TLB flag to force all the accesses made to a page to follow
the slow-path.
The TLB entries referring to guest pages with the DIRTY_MEMORY_EXCLUSIVE
bit clean will have this flag set.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
include/exec
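A standalone sketch of why a flag bit in the TLB entry forces the slow path: the fast path compares the page-aligned access address with the stored entry, so a set flag bit makes the comparison fail and the access falls back to the helper. The values and names below are illustrative, not the actual softmmu code:

/* Illustrative only: mock TLB entry and fast-path check. */
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_MASK  (~0xfffULL)
#define TLB_EXCL          (1ULL << 6)       /* illustrative flag bit */

typedef struct { uint64_t addr_write; } MockTLBEntry;

static int fast_path_hit(const MockTLBEntry *e, uint64_t addr)
{
    /* same page and no flag bits set -> fast path */
    return (addr & TARGET_PAGE_MASK) == e->addr_write;
}

int main(void)
{
    MockTLBEntry entry = { .addr_write = (0x4000ULL & TARGET_PAGE_MASK) | TLB_EXCL };
    uint64_t access = 0x4010;

    if (fast_path_hit(&entry, access)) {
        printf("fast path store\n");
    } else {
        printf("TLB_EXCL set -> slow path helper handles the store\n");
    }
    return 0;
}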
om Richard Henderson to improve the logic in
softmmu_template.h and to simplify the methods generation through
softmmu_llsc_template.h
- Added initial implementation of qemu_{ldlink,stcond}_i32 for tcg/i386
This work has been sponsored by Huawei Technologies Duesseldorf GmbH.
Alvise Rigo (14):
Hi Alex,
On Mon, Apr 11, 2016 at 1:21 PM, Alex Bennée wrote:
>
> Hi,
>
> It's been awhile since we synced-up with quite weeks and Easter out of
> the way are we good for a call today?
Indeed, it has been a while.
>
>
> Some items I can think would be worth covering:
>
> - State of MTTCG enab
Hi Paolo,
On Mon, Mar 7, 2016 at 10:18 PM, Paolo Bonzini wrote:
>
>
> On 04/03/2016 15:28, alvise rigo wrote:
>> A small update on this. I have a working implementation of the "halted
>> state" mechanism for waiting all the pending flushes to be completed.
>&g
On Thu, Feb 18, 2016 at 6:02 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Use the new LL/SC runtime helpers to handle the ARM atomic instructions
>> in softmmu_llsc_template.h.
>>
>> In general, the helper generator
>> gen_{ldrex,strex}_{8,16a,32a,
On Thu, Feb 18, 2016 at 5:25 PM, Alex Bennée wrote:
>
> alvise rigo writes:
>
>> On Wed, Feb 17, 2016 at 7:55 PM, Alex Bennée wrote:
>>>
>>> Alvise Rigo writes:
>>>
>>>> As for the RAM case, also the MMIO exclusive ranges have to be prote
On Thu, Feb 18, 2016 at 5:40 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Use the new slow path for atomic instruction translation when the
>> softmmu is enabled.
>>
>> At the moment only arm and aarch64 use the new LL/SC backend. It is
>> possibl
to use
cpu_exit(). Is there a better solution?
Thank you,
alvise
On Mon, Feb 29, 2016 at 3:18 PM, alvise rigo
wrote:
> I see the risk. I will come back with something and let you know.
>
> Thank you,
> alvise
>
> On Mon, Feb 29, 2016 at 3:06 PM, Paolo Bonzini
> w
I see the risk. I will come back with something and let you know.
Thank you,
alvise
On Mon, Feb 29, 2016 at 3:06 PM, Paolo Bonzini wrote:
>
>
> On 29/02/2016 15:02, alvise rigo wrote:
>> > Yeah, that's the other approach -- really split the things that can
>> >
On Mon, Feb 29, 2016 at 2:55 PM, Peter Maydell wrote:
> On 29 February 2016 at 13:50, Paolo Bonzini wrote:
>>
>>
>> On 29/02/2016 14:21, Peter Maydell wrote:
>>> On 29 February 2016 at 13:16, Alvise Rigo
>>> wrote:
>>>> > As in the case of
multi_tcg_v8 branch.
Signed-off-by: Alvise Rigo
---
cputlb.c | 65
1 file changed, 53 insertions(+), 12 deletions(-)
diff --git a/cputlb.c b/cputlb.c
index 29252d1..1eeeccb 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -103,9 +103,11 @@ void
On Fri, Feb 19, 2016 at 12:44 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> This is the seventh iteration of the patch series which applies to the
>> upstream branch of QEMU (v2.5.0-rc4).
>>
>> Changes versus previous versions are at the bottom of this cov
On Tue, Feb 16, 2016 at 6:39 PM, Alex Bennée wrote:
>
>
> Alvise Rigo writes:
>
> > The pages set as exclusive (clean) in the DIRTY_MEMORY_EXCLUSIVE bitmap
> > have to have their TLB entries flagged with TLB_EXCL. The accesses to
> > pages with TLB_EXCL flag set
On Tue, Feb 16, 2016 at 6:49 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Enable exclusive accesses when the MMIO/invalid flag is set in the TLB
>> entry.
>>
>> In case a LL access is done to MMIO memory, we treat it differently from
>> a RAM ac
On Tue, Feb 16, 2016 at 6:07 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Add a circular buffer to store the hw addresses used in the last
>> EXCLUSIVE_HISTORY_LEN exclusive accesses.
>>
>> When an address is pop'ed from the buffer, its page will b
On Wed, Feb 17, 2016 at 7:55 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> As for the RAM case, also the MMIO exclusive ranges have to be protected
>> by other CPU's accesses. In order to do that, we flag the accessed
>> MemoryRegion to mark that an exclusi
On Thu, Feb 11, 2016 at 5:33 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> The new helpers rely on the legacy ones to perform the actual read/write.
>>
>> The LoadLink helper (helper_ldlink_name) prepares the way for the
>> following StoreCond operation. I
On Thu, Feb 11, 2016 at 2:22 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> The excl_protected_range is a hwaddr range set by the VCPU at the
>> execution of a LoadLink instruction. If a normal access writes to this
>> range, the corresponding StoreCond will fa
You are right, the for loop with i < DIRTY_MEMORY_NUM works just fine.
Thank you,
alvise
On Thu, Feb 11, 2016 at 2:00 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> The purpose of this new bitmap is to flag the memory pages that are in
>> the middle of LL/SC operat
ility to forget the EXCL bit set.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c| 29 +++--
exec.c | 19 +++
include/qom/cpu.h | 8
softmmu_llsc_template.h
Use the new LL/SC runtime helpers to handle the aarch64 atomic instructions
in softmmu_llsc_template.h.
The STXP emulation required a dedicated helper to handle the paired
doubleword case.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
configure
for more
details).
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
target-arm/cpu64.c | 8
1 file changed, 8 insertions(+)
diff --git a/target-arm/cpu64.c b/target-arm/cpu64.c
index cc177bb..1d45e66 100644
--- a/target-arm/cpu64.c
+++ b/target
-off-by: Alvise Rigo
---
cputlb.c | 44 --
softmmu_template.h | 80 --
2 files changed, 113 insertions(+), 11 deletions(-)
diff --git a/cputlb.c b/cputlb.c
index ce6d720..aa9cc17 100644
--- a/cputlb.c
+++ b
y: Claudio Fontana
Signed-off-by: Alvise Rigo
---
Makefile.target | 2 +-
include/exec/helper-gen.h | 3 ++
include/exec/helper-proto.h | 1 +
include/exec/helper-tcg.h | 3 ++
tcg-llsc-helper.c | 104
tcg-lls
86
This work has been sponsored by Huawei Technologies Duesseldorf GmbH.
Alvise Rigo (16):
exec.c: Add new exclusive bitmap to ram_list
softmmu: Simplify helper_*_st_name, wrap unaligned code
softmmu: Simplify helper_*_st_name, wrap MMIO code
softmmu: Simplify helper_*_st_name, wrap RAM co
: Alvise Rigo
---
configure | 14 ++
1 file changed, 14 insertions(+)
diff --git a/configure b/configure
index 44ac9ab..915efcc 100755
--- a/configure
+++ b/configure
@@ -294,6 +294,7 @@ solaris="no"
profiler="no"
cocoa="no"
softmmu="yes"
usive range in
case of collision.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c| 20 +---
include/exec/memory.h | 1 +
softmmu_llsc_template.h | 11 +++
softmmu_template.h | 22
.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
softmmu_template.h | 110 +
1 file changed, 68 insertions(+), 42 deletions(-)
diff --git a/softmmu_template.h b/softmmu_template.h
index 3d388ec..6279437
' can be a store made by *any* vCPU
(although, some implementations allow stores made by the CPU that issued
the LoadLink).
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c| 3 ++
include/qom/cpu.h
Fontana
Signed-off-by: Alvise Rigo
---
include/qom/cpu.h | 15 +++
qom/cpu.c | 20
2 files changed, 35 insertions(+)
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 2e5229d..682c81d 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -29,6
the softmmu_helpers.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
softmmu_template.h | 96 ++
1 file changed, 60 insertions(+), 36 deletions(-)
diff --git a/softmmu_template.h b/softmmu_template.h
ed-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c | 7 +++
softmmu_template.h | 26 --
2 files changed, 23 insertions(+), 10 deletions(-)
diff --git a/cputlb.c b/cputlb.c
index aa9cc17..87d09c8 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -424,7 +
check.
In addition, add a simple helper function to emulate the CLREX instruction.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
target-arm/cpu.h | 2 +
target-arm/helper.h| 4 ++
target-arm/machine.c | 2 +
target-arm/op_helper.c
Add a new TLB flag to force all the accesses made to a page to follow
the slow-path.
The TLB entries referring to guest pages with the DIRTY_MEMORY_EXCLUSIVE
bit clean will have this flag set.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
include/exec
Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
softmmu_template.h | 66 --
1 file changed, 44 insertions(+), 22 deletions(-)
diff --git a/softmmu_template.h b/softmmu_template.h
index 7029a03..3d388ec 100644
--- a
starts, the whole memory is set to dirty.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
exec.c | 7 +--
include/exec/memory.h | 3 ++-
include/exec/ram_addr.h | 31 +++
3 files changed, 38
On Fri, Jan 8, 2016 at 4:53 PM, Alex Bennée wrote:
> From: Alvise Rigo
>
> Attempting to simplify the helper_*_st_name, wrap the
> do_unaligned_access code into an shared inline function. As this also
> removes the goto statement the inline code is expanded twice in each
> hel
On Mon, Jan 18, 2016 at 8:09 PM, Alex Bennée wrote:
>
>
> Alex Bennée writes:
>
> > alvise rigo writes:
> >
> >> On Fri, Jan 15, 2016 at 4:25 PM, Alex Bennée
> >> wrote:
> >>>
> >>> alvise rigo writes:
> &
On Fri, Jan 15, 2016 at 4:25 PM, Alex Bennée wrote:
>
> alvise rigo writes:
>
>> On Fri, Jan 15, 2016 at 3:51 PM, Alex Bennée wrote:
>>>
>>> alvise rigo writes:
>>>
>>>> This problem could be related to a missing multi-threaded aware
On Fri, Jan 15, 2016 at 3:51 PM, Alex Bennée wrote:
>
> alvise rigo writes:
>
>> This problem could be related to a missing multi-threaded aware
>> translation of the atomic instructions.
>> I'm working on this missing piece, probably the next week I will
>
This problem could be related to a missing multi-threading-aware
translation of the atomic instructions.
I'm working on this missing piece; I will probably publish something
next week.
Regards,
alvise
On Fri, Jan 15, 2016 at 3:24 PM, Pranith Kumar wrote:
> Hi Alex,
>
> On Fri, Jan 15, 2016 at
index indicating a stage 1
translation regime.
Also rename the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
Signed-off-by: Alvise Rigo
---
target-arm/helper.c| 12
target-arm/internals.h | 5 +++--
target-arm/op_helper.c | 2 +-
3
On Fri, Jan 15, 2016 at 11:04 AM, Peter Maydell
wrote:
> On 15 January 2016 at 09:59, Alvise Rigo
> wrote:
>> arm_regime_using_lpae_format checks whether the LPAE extension is used
>> for stage 1 translation regimes. MMU indexes not exclusively of a stage 1
>> regime won
index indicating a stage 1
translation regime.
Also rename the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
Signed-off-by: Alvise Rigo
---
target-arm/helper.c| 8
target-arm/internals.h | 5 +++--
target-arm/op_helper.c | 8 ++--
3
Forcing an unaligned LDREX access in aarch32 makes QEMU fail the following assertion:
target-arm/helper.c:5921:regime_el: code should not be reached
Running this snippet, either bare-metal or on top of Linux, will trigger
the problem:
static inline int cmpxchg(volatile void *ptr, unsigned int old,
On Mon, Jan 11, 2016 at 10:54 AM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Attempting to simplify the helper_*_st_name, wrap the MMIO code into an
>> inline function.
>>
>> Suggested-by: Jani Kokkonen
>> Suggested-by: Claudio
On Thu, Jan 7, 2016 at 3:46 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Attempting to simplify the helper_*_st_name, wrap the
>> do_unaligned_access code into an inline function.
>> Remove also the goto statement.
>
> As I said in the other thread I think
On Thu, Jan 7, 2016 at 11:22 AM, Peter Maydell wrote:
> On 7 January 2016 at 10:21, alvise rigo wrote:
>> Hi,
>>
>> On Wed, Jan 6, 2016 at 7:00 PM, Andrew Baumann
>> wrote:
>>> As a heads up, we just added support for alignment checks in LDREX:
>
Hi,
On Wed, Jan 6, 2016 at 7:00 PM, Andrew Baumann
wrote:
>
> Hi,
>
> > From: qemu-devel-bounces+andrew.baumann=microsoft@nongnu.org
> > [mailto:qemu-devel-
> > bounces+andrew.baumann=microsoft@nongnu.org] On Behalf Of
> > Alvise Rigo
> > Sent: Mond
On Wed, Jan 6, 2016 at 6:13 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Add a simple helper function to emulate the CLREX instruction.
>
> And now I see ;-)
>
> I suspect this should be merged with the other helpers as a generic helper.
Agreed.
>
>
On Tue, Jan 5, 2016 at 5:10 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Add a new TLB flag to force all the accesses made to a page to follow
>> the slow-path.
>>
>> In the case we remove a TLB entry marked as EXCL, we unset the
>> co
On Fri, Dec 18, 2015 at 2:18 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> The purpose of this new bitmap is to flag the memory pages that are in
>> the middle of LL/SC operations (after a LL, before a SC) on a per-vCPU
>> basis.
>> For all these pages, t
On Thu, Dec 17, 2015 at 5:52 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
>> Attempting to simplify the helper_*_st_name, wrap the code relative to a
>> RAM access into an inline function.
>
> This commit breaks a default x86_64-softmmu build:
I see. Would th
Hi Alex,
On Thu, Dec 17, 2015 at 5:06 PM, Alex Bennée wrote:
>
> Alvise Rigo writes:
>
> > This is the sixth iteration of the patch series which applies to the
> > upstream branch of QEMU (v2.5.0-rc3).
> >
> > Changes versus previous versions are at the bottom
On Mon, Dec 14, 2015 at 10:35 AM, Paolo Bonzini wrote:
>
>
> On 14/12/2015 09:41, Alvise Rigo wrote:
>> +static inline void excl_history_put_addr(CPUState *cpu, hwaddr addr)
>> +{
>> +/* Avoid some overhead if the address we are about to put is equal to
>> +
Hi,
On Mon, Dec 14, 2015 at 11:14 AM, Laurent Vivier wrote:
>
>
> On 14/12/2015 09:41, Alvise Rigo wrote:
>> Use the new slow path for atomic instruction translation when the
>> softmmu is enabled.
>>
>> Suggested-by: Jani Kokkonen
>> Suggested-by: Claudio
On Tue, Dec 15, 2015 at 3:18 PM, Paolo Bonzini wrote:
>
>
> On 15/12/2015 14:59, alvise rigo wrote:
>>> > If we have two CPUs, with CPU 0 executing LL and the CPU 1 executing a
>>> > store, you can model this as a consensus problem. For example, CPU 0
>>&