needlessly.
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Takuya Yoshikawa takuya.yoshik...@gmail.com
Signed-off-by: Takuya Yoshikawa yoshikawa_takuya...@lab.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c |2 +-
arch/powerpc/kvm/book3s_hv.c |2 +-
arch/x86/kvm/x86.c |2 +-
include/linux/kvm_host.h |1 -
virt/kvm/kvm_main.c |8
5 files changed, 3 insertions(+), 12 deletions(-)
On Tue, 7 Aug 2012 12:57:13 +0200
Alexander Graf ag...@suse.de wrote:
+struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
+{
+ struct kvm_memslots *slots = kvm_memslots(kvm);
+ struct kvm_memory_slot *memslot;
+
+ kvm_for_each_memslot(memslot, slots)
+
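The hunk is truncated here. As a rough userspace model of the lookup it introduces (names and sizes are illustrative, not the kernel implementation), the scan over the slots could look like this:

```c
#include <stddef.h>

#define PAGE_SHIFT 12

typedef unsigned long hva_t;

/* Illustrative stand-in for struct kvm_memory_slot. */
struct memslot {
    hva_t userspace_addr;   /* start of the slot's hva range */
    unsigned long npages;   /* slot size in base pages */
};

/* Return the slot whose hva range contains hva, or NULL. */
static struct memslot *hva_to_memslot(struct memslot *slots, int nslots,
                                      hva_t hva)
{
    for (int i = 0; i < nslots; i++) {
        struct memslot *m = &slots[i];

        if (hva >= m->userspace_addr &&
            hva <  m->userspace_addr + (m->npages << PAGE_SHIFT))
            return m;
    }
    return NULL;
}
```

Since slots are few and the larger ones sort to the front in KVM, a linear scan like this is cheap in practice.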
On Thu, 9 Aug 2012 22:25:32 -0300
Marcelo Tosatti mtosa...@redhat.com wrote:
I'll send a patch to flush per memslot in the next days, you can work
out the PPC details in the meantime.
Are you going to implement that using slot_bitmap?
Since I'm now converting
?
Takuya Yoshikawa (3):
KVM: Stop checking rmap to see if slot is being created
KVM: MMU: Use gfn_to_rmap() instead of directly reading rmap array
KVM: Push rmap into kvm_arch_memory_slot
arch/powerpc/include/asm/kvm_host.h |1 +
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 ++--
arch/powerpc
Instead, check npages consistently. This helps to make rmap
architecture specific in a later patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm
This helps to make rmap architecture specific in a later patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c |3 ++-
arch/x86/kvm/mmu_audit.c |4 +---
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
v3-v4: Resolved trace_kvm_age_page() issue -- patch 6,7
v2-v3: Fixed intersection calculations. -- patch 3, 8
Takuya
Takuya Yoshikawa (8):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn_memslot() for kvm_handle_hva()
KVM: MMU: Make
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e..d3e7e6a 100644
--- a/arch/x86/kvm
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 36 ++--
arch/x86/kvm
this by using kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/include/asm/kvm_host.h |2 ++
arch/powerpc/kvm/book3s_64_mmu_hv.c |7 +++
arch/x86/include/asm/kvm_host.h
This makes it possible to loop over rmap_pde arrays in the same way as
we do over rmap so that we can optimize kvm_handle_hva_range() easily in
the following patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c
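The uniform per-level indexing this enables can be sketched in userspace C. All names here are illustrative, and the 9-bits-per-level shift is an assumption based on x86's 512-entry page tables:

```c
#define PT_PAGE_TABLE_LEVEL 1
#define KVM_NR_PAGE_SIZES   3   /* 4K, 2M, 1G on x86 */

/* Illustrative per-slot rmap layout: one array per page-size level. */
struct slot_rmap {
    unsigned long base_gfn;
    unsigned long *rmap[KVM_NR_PAGE_SIZES];
};

/* gfns covered per mapping grow by a factor of 512 each level. */
static unsigned long level_shift(int level)
{
    return (unsigned long)(level - PT_PAGE_TABLE_LEVEL) * 9;
}

/* Every level is indexed the same way, so handlers can loop over
 * levels uniformly instead of special-casing rmap_pde. */
static unsigned long *gfn_to_rmap(struct slot_rmap *s,
                                  unsigned long gfn, int level)
{
    unsigned long idx = (gfn >> level_shift(level)) -
                        (s->base_gfn >> level_shift(level));
    return &s->rmap[level - PT_PAGE_TABLE_LEVEL][idx];
}
```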
This is needed to push trace_kvm_age_page() into kvm_age_rmapp() in the
following patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 16 +---
1 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
On Sun, 01 Jul 2012 10:41:05 +0300
Avi Kivity a...@redhat.com wrote:
Note: in the new code we could not use trace_kvm_age_page(), so we just
dropped the tracepoint from kvm_handle_hva_range().
Can't it be pushed to handler()?
Yes, but it will be changed to print rmap, not hva and
On Thu, 28 Jun 2012 20:39:55 +0300
Avi Kivity a...@redhat.com wrote:
Note: write_count: 4 bytes, rmap_pde: 8 bytes. So we are wasting
extra padding by packing them into lpage_info.
The wastage is quite low since it's just 4 bytes per 2MB.
Yes.
Why not just introduce a function to get
On Thu, 28 Jun 2012 20:53:47 +0300
Avi Kivity a...@redhat.com wrote:
Note: in the new code we could not use trace_kvm_age_page(), so we just
dropped the tracepoint from kvm_handle_hva_range().
Can't it be pushed to handler()?
Yes, but it will be changed to print rmap, not hva and gfn.
I
Updated patches 3 and 6 so that the unmap handler is called with exactly
the same rmap arguments as before, even if kvm_handle_hva_range() is
called with an unaligned [start, end).
Please see the comments I added there.
Takuya
Takuya Yoshikawa (6):
KVM: MMU: Use __gfn_to_rmap() to clean up
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e..d3e7e6a 100644
--- a/arch/x86/kvm
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 36 ++--
arch/x86/kvm
This makes it possible to loop over rmap_pde arrays in the same way as
we do over rmap so that we can optimize kvm_handle_hva_range() easily in
the following patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c
trace_kvm_age_page(), so we just
dropped the tracepoint from kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 37 +++--
1 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
On Thu, 28 Jun 2012 11:12:51 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
struct kvm_arch_memory_slot {
+ unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
};
It looks a little more complex than before - need
Takuya Yoshikawa (6):
KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva()
KVM: Introduce hva_to_gfn_memslot() for kvm_handle_hva()
KVM: MMU: Make kvm_handle_hva() handle range of addresses
KVM: Introduce kvm_unmap_hva_range() for
kvm_mmu_notifier_invalidate_range_start()
KVM
We can treat every level uniformly.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e..d3e7e6a 100644
--- a/arch/x86/kvm
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
is converted to a loop over rmap
which is much more cache friendly than before.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 31 +---
arch/x86/kvm
this by using kvm_handle_hva_range().
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
---
arch/powerpc/include/asm/kvm_host.h |2 ++
arch/powerpc/kvm/book3s_64_mmu_hv.c |7 +++
arch/x86/include/asm/kvm_host.h
This makes it possible to loop over rmap_pde arrays in the same way as
we do over rmap so that we can optimize kvm_handle_hva_range() easily in
the following patch.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c
I should have read this before sending v2...
On Thu, 21 Jun 2012 11:24:59 +0300
Avi Kivity a...@redhat.com wrote:
1. Separate rmap_pde from lpage_info->write_count and
make this a simple array. (I once tried this.)
This has the potential to increase cache misses, but I don't think
On Mon, 18 Jun 2012 15:11:42 +0300
Avi Kivity a...@redhat.com wrote:
Potential for improvement: don't do 512 iterations on same large page.
Something like
if ((gfn ^ prev_gfn) & mask(level))
ret |= handler(...)
with clever selection of the first prev_gfn so it always matches
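The suggestion above can be sketched as follows; mask(), the handler stand-in, and the prev_gfn seeding are all illustrative. Seeding prev_gfn one large page below the start guarantees the first iteration always calls the handler:

```c
/* One call per large page instead of 512: skip gfns that fall in the
 * same large page as the previous handler call. */
static unsigned long mask(int level)
{
    /* level 2 = 2MB pages = 512 base pages on x86 (assumption) */
    return ~((1UL << ((level - 1) * 9)) - 1);
}

static int count_handler_calls(unsigned long start_gfn,
                               unsigned long nr_gfns, int level)
{
    int calls = 0;
    /* seed so the first gfn never matches prev_gfn's large page */
    unsigned long prev_gfn = start_gfn - (1UL << ((level - 1) * 9));

    for (unsigned long gfn = start_gfn; gfn < start_gfn + nr_gfns; gfn++) {
        if ((gfn ^ prev_gfn) & mask(level)) {
            calls++;            /* stand-in for ret |= handler(...) */
            prev_gfn = gfn;
        }
    }
    return calls;
}
```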
On Mon, 18 Jun 2012 15:11:42 +0300
Avi Kivity a...@redhat.com wrote:
kvm_for_each_memslot(memslot, slots) {
- gfn_t gfn = hva_to_gfn(hva, memslot);
+ gfn_t gfn = hva_to_gfn(start_hva, memslot);
+ gfn_t end_gfn = hva_to_gfn(end_hva, memslot);
These
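A plausible userspace model of the hva_to_gfn() translation used in the hunk above; the field names mirror struct kvm_memory_slot, but this is a sketch, not the kernel code:

```c
#define PAGE_SHIFT 12

/* Illustrative slot layout. */
struct slot {
    unsigned long base_gfn;        /* first gfn of the slot */
    unsigned long userspace_addr;  /* first hva of the slot */
    unsigned long npages;
};

/* Translate a host virtual address inside the slot into the guest
 * frame number it backs. */
static unsigned long hva_to_gfn(unsigned long hva, const struct slot *s)
{
    return s->base_gfn + ((hva - s->userspace_addr) >> PAGE_SHIFT);
}
```

For a range handler, start_hva and end_hva would first be clamped to the slot's own hva range so every resulting gfn belongs to that slot.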
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul Mackerras pau...@samba.org
this by using kvm_handle_hva_range().
On our x86 host, with a minimum configuration for the guest, the
invalidation became 40% faster on average and the worst case was also
improved to the same degree.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Cc: Alexander Graf ag...@suse.de
Cc: Paul
On Fri, 15 Jun 2012 20:31:44 +0900
Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp wrote:
...
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c
b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index d03eb6f..53716dd 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm
On Tue, 17 Apr 2012 17:56:24 +0300
Avi Kivity a...@redhat.com wrote:
For live migration, range-based control may be enough due to the locality
of WWS.
What's WWS?
IIRC it was mentioned in a usenix paper: Writable Working Set.
May not be a commonly known concept.
Kind of working set, but
On Tue, 17 Apr 2012 10:51:40 +0300
Avi Kivity a...@redhat.com wrote:
That's true with the write protect everything approach we use now. But
it's not true with range-based write protection, where you issue
GET_DIRTY_LOG on a range of pages and only need to re-write-protect them.
(the
On Tue, 17 Apr 2012 15:41:39 +0300
Avi Kivity a...@redhat.com wrote:
Since there are many known algorithms to predict hot memory pages,
the userspace will be able to tune the frequency of GET_DIRTY_LOG for such
parts not to get too many faults repeatedly, if we can restrict the range
of
On Sun, 15 Apr 2012 12:32:59 +0300
Avi Kivity a...@redhat.com wrote:
Just to throw another idea into the mix - we can have write-protect-less
dirty logging, too. Instead of write protection, drop the dirty bit,
and check it again when reading the dirty log. It might look like we're
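The idea quoted above, clearing the dirty bit instead of write-protecting and then harvesting on read, can be modeled with a userspace toy. All names here are illustrative:

```c
#define NPAGES 8

/* Toy MMU: one flag per page standing in for the spte dirty bit. */
struct toy_mmu {
    unsigned char dirty[NPAGES];
};

/* A guest write just sets the dirty bit; no write fault is taken,
 * unlike write-protection-based logging. */
static void guest_write(struct toy_mmu *m, int page)
{
    m->dirty[page] = 1;
}

/* Reading the log harvests the bits and "drops" them for next round. */
static unsigned long get_dirty_log(struct toy_mmu *m)
{
    unsigned long bitmap = 0;
    for (int i = 0; i < NPAGES; i++) {
        if (m->dirty[i]) {
            bitmap |= 1UL << i;
            m->dirty[i] = 0;
        }
    }
    return bitmap;
}
```

The trade-off is a full scan of the sptes on every GET_DIRTY_LOG instead of faults on every first write.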
On Thu, 05 Apr 2012 20:02:44 +0300
Avi Kivity a...@redhat.com wrote:
In a recent conversation, Linus persuaded me that it's time for change
in our git workflow; the following will bring it in line with the
current practices of most trees.
The current 'master' branch will be abandoned (still
On Thu, 29 Mar 2012 17:26:59 +0200
Avi Kivity a...@redhat.com wrote:
Hm, the patch uses ->slot_bitmap which we might want to kill if we
increase the number of slots dramatically, as some people want to do.
btw, what happened to that patch, did it just get ignored on the list?
I
On Wed, 28 Mar 2012 11:37:38 +0200
Avi Kivity a...@redhat.com wrote:
Now I see that x86 just seems to flush everything, which is quite heavy
handed considering how often cirrus does it, but maybe it doesn't have a
choice (lack of reverse mapping from GPA ?).
We do have a reverse mapping,
Avi Kivity a...@redhat.com wrote:
Slot searching is quite fast since there's a small number of slots, and
we sort the larger ones to be in the front, so positive lookups are fast.
We cache negative lookups in the shadow page tables (an spte can be
either not mapped, mapped to
On Tue, 24 Jan 2012 13:24:56 +0200
Avi Kivity a...@redhat.com wrote:
On 01/23/2012 12:42 PM, Takuya Yoshikawa wrote:
The last one is an RFC patch:
I think it is better to refactor the rmap things, if needed, before
architectures other than x86 start large page support
The last one is an RFC patch:
I think it is better to refactor the rmap things, if needed, before
architectures other than x86 start large page support.
Takuya
arch/ia64/kvm/kvm-ia64.c |8
arch/powerpc/kvm/book3s_64_mmu_hv.c |6 +++---
We want to eliminate direct access to the rmap array.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu_audit.c |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index 6eabae3..e62fa4f
We can hide the implementation details and treat every level uniformly.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 844fcce
this problem by decoupling rmap_pde from lpage_info->write_count and
making the rmap array two dimensional, holding the old rmap_pde
elements in it.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c |8
arch/powerpc/kvm
(2011/12/26 8:35), Paul Mackerras wrote:
On Fri, Dec 23, 2011 at 02:23:30PM +0100, Alexander Graf wrote:
So if I read things correctly, this is the only case you're setting
pages as dirty. What if you have the following:
guest adds HTAB entry x
guest writes to page mapped by x
guest
(2010/11/19 15:01), Yang Rui Rui wrote:
Hi,
I searched the archive and found some discussions about this; is it not fixed yet?
Could someone tell me, is G4 KVM available now?
Hi, (added kvm-ppc to Cc)
I'm using a G4 (Mac mini box) to run KVM
- though I have not tried 2.6.37-rc2 yet.
Aren't you using upstream
(2010/09/04 18:24), Alexander Graf wrote:
On 03.09.2010, at 10:34, Takuya Yoshikawa wrote:
This is the 2nd version of get_dirty_log cleanup.
Changelog:
In version 1, I changed the timing of copy_to_user() in the
powerpc's get_dirty_log by mistake. This time, I've kept the
timing
We move the sanity check and lock related parts to the arch independent code.
This will help future cleanups.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c | 14 ++
arch/powerpc/kvm/book3s.c | 14 ++
arch/powerpc/kvm/booke.c
Are there any restrictions in KVM on ps3 linux?
Not really. The biggest thing to keep in mind is that ram is really limited. So
make sure you have a lot of swap. In fact, I used to use a PS3 for development
and testing quite a lot myself, so it definitely should work.
Thanks about the
(2010/07/12 17:00), Alexander Graf wrote:
On 12.07.2010, at 09:59, Takuya Yoshikawa wrote:
Are there any restrictions in KVM on ps3 linux?
Not really. The biggest thing to keep in mind is that ram is really limited. So
make sure you have a lot of swap. In fact, I used to use a PS3
Hi Alex,
I've been testing dirty logging on ps3 linux for a few weeks.
- I luckily got one by chance.
Although I could find the main cause of the dirty logging breakage,
I'm struggling with stabilizing KVM on ps3 linux apart from dirty logging.
Problem: In almost every execution of
(2010/06/27 16:32), Avi Kivity wrote:
On 06/25/2010 10:25 PM, Alexander Graf wrote:
On 23.06.2010, at 08:01, Takuya Yoshikawa wrote:
kvm_get_dirty_log() is a helper function for
kvm_vm_ioctl_get_dirty_log() which
is currently used by ia64 and ppc and the following is what it is doing
On Fri, 25 Jun 2010 21:25:57 +0200
Alexander Graf ag...@suse.de wrote:
This patch plus 4/4 broke dirty bitmap updating on PPC. I didn't get around
to tracking down why, but I figured you should know. Is there any way to get you
a PPC development box? A simple G4 or G5 should be 200$ on ebay by
This patch plus 4/4 broke dirty bitmap updating on PPC. I didn't get around
to tracking down why, but I figured you should know. Is there any way to get you
a PPC development box? A simple G4 or G5 should be 200$ on ebay by now :).
A simple G4 or G5, thanks for the info, I'll buy one.
I hope
and sanity checks must
be done before kvm_ia64_sync_dirty_log(), we can say that this is not working
effectively for code sharing. So we just remove this.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c | 20 ++--
arch/powerpc/kvm/book3s.c
kvm_vm_ioctl_get_dirty_log() is now implemented as an arch dependent function.
But now that we know what is actually arch dependent, we can easily split this
into an arch dependent part and an arch independent part.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64
(2010/06/23 17:48), Avi Kivity wrote:
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 801d9f3..bea6f7c 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1185,28 +1185,43 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
struct kvm_memory_slot
kvm_ia64_sync_dirty_log() is a helper function for kvm_vm_ioctl_get_dirty_log()
which copies ia64's arch specific dirty bitmap to the general one in the memslot.
So doing sanity checks in it is unnatural. We move these checks outside of
it and change the prototype appropriately.
Signed-off-by: Takuya
and sanity checks must
be done before kvm_ia64_sync_dirty_log(), we can say that this is not working
effectively for code sharing. So we just remove it.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c | 20 ++--
arch/powerpc/kvm/book3s.c
kvm_vm_ioctl_get_dirty_log() is now implemented as an arch dependent function.
But now that we know what is actually arch dependent, we can easily split this
into an arch dependent part and an arch independent part.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm
Marcelo Tosatti mtosa...@redhat.com wrote:
On Tue, Jun 22, 2010 at 06:03:58PM +0900, Takuya Yoshikawa wrote:
This patch set is for making dirty logging development, and of course
maintenance, easier. Please see individual patches for details.
Takuya
---
arch/ia64/kvm/kvm-ia64.c
This patch series is for making dirty logging development, and of course
maintenance, easier. Please see individual patches for details.
Changelog
v1 - v2:
- rebased
- booke and s390, kvm_vm_ioctl_get_dirty_log() to
kvm_arch_vm_ioctl_get_dirty_log()
Takuya
---
kvm_get_dirty_log() calls copy_to_user(). So we need to narrow the
dirty_log_lock spin_lock section so that it does not include this call.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
---
arch/ia64/kvm/kvm-ia64.c | 30 +++---
1 files changed, 11 insertions(+), 19 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index d85b5d2..5cb5865 100644
--- a/arch/ia64/kvm/kvm-ia64.c
(2010/06/01 19:55), Marcelo Tosatti wrote:
Sorry but I have to say that the mmu_lock spin_lock problem was completely
out of my mind. Although I looked through the code, it seems not easy to
move the set_bit_user to outside of the spinlock section without breaking
the semantics of its protection.
So
(2010/05/17 18:06), Takuya Yoshikawa wrote:
User allocated bitmaps have the advantage of reducing pinned memory.
However we have plenty more pinned memory allocated in memory slots, so
by itself, user allocated bitmaps don't justify this change.
Sorry for pinging several times
User allocated bitmaps have the advantage of reducing pinned memory.
However we have plenty more pinned memory allocated in memory slots, so
by itself, user allocated bitmaps don't justify this change.
In that sense, what do you think about the question I sent last week?
=== REPOST 1 ===
One alternative would be:
KVM_SWITCH_DIRTY_LOG passing the address of a bitmap. If the active
bitmap was clean, it returns 0, no switch performed. If the active
bitmap was dirty, the kernel switches to the new bitmap and returns 1.
And the responsibility of cleaning the new bitmap could also
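A toy model of the proposed switch semantics. The ioctl was only a proposal in this thread, so everything here is illustrative:

```c
#define BITMAP_WORDS 4

/* Toy KVM_SWITCH_DIRTY_LOG: if the active bitmap is clean return 0
 * and do nothing; otherwise install the caller's clean bitmap as the
 * active one, hand the dirty bitmap back, and return 1. */
struct dirty_log {
    unsigned long *active;
};

static int is_clean(const unsigned long *bm)
{
    for (int i = 0; i < BITMAP_WORDS; i++)
        if (bm[i])
            return 0;
    return 1;
}

static int switch_dirty_log(struct dirty_log *dl, unsigned long *new_bm,
                            unsigned long **old_bm)
{
    if (is_clean(dl->active))
        return 0;               /* nothing dirty, no switch performed */
    *old_bm = dl->active;       /* caller now owns the dirty bitmap */
    dl->active = new_bm;        /* new_bm is assumed clean */
    return 1;
}
```

The return value lets userspace skip processing entirely when nothing was dirtied since the last switch.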
[To ppc people]
Hi, Benjamin, Paul, Alex,
Please see patches 6,7/12. I must first say sorry that I've not tested these
yet. In that sense, they may not be of the quality needed for a precise review.
But I will be happy if you give me any comments.
Alex, could you help me? Though I have a
+static inline int set_bit_user_non_atomic(int nr, void __user *addr)
+{
+ u8 __user *p;
+ u8 val;
+
+ p = (u8 __user *)((unsigned long)addr + nr / BITS_PER_BYTE);
Does C do the + or the / first? Either way, I'd like to see brackets here :)
OK, I'll change it like that! I
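To answer the precedence question: in C, `/` binds tighter than `+`, so the expression already computes the byte offset first; the brackets only aid readability. A small self-contained check (the helper names are hypothetical, not from the patch):

```c
#define BITS_PER_BYTE 8

/* Mirrors the addressing in the hunk above: the byte containing bit nr.
 * '/' has higher precedence than '+', so this equals
 * addr + (nr / BITS_PER_BYTE) with or without parentheses. */
static unsigned char *bit_to_byte(void *addr, int nr)
{
    return (unsigned char *)((unsigned long)addr + nr / BITS_PER_BYTE);
}

/* Non-atomic set_bit modeled after set_bit_user_non_atomic. */
static void set_bit_non_atomic(int nr, void *addr)
{
    unsigned char *p = bit_to_byte(addr, nr);

    *p |= 1u << (nr % BITS_PER_BYTE);
}
```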
Yes, I'm just using in kernel space: qemu has its own endian related helpers.
So if you allow us to place this macro in asm-generic/bitops/* it will help us.
No problem at all then. Thanks for the explanation.
Acked-by: Arnd Bergmann a...@arndb.de
Thank you both. I will add your Acked-by
                     get.org   get.opt   switch.opt
slots[7].len=32768    278379     66398        64024
slots[8].len=32768    181246       270          160
slots[7].len=32768    263961     64673        64494
slots[8].len=32768    181655       265          160
slots[7].len=32768    263736     64701        64610
slots[8].len=32768    182785       267          160
slots[7].len=32768    260925     65360        65042