On Wed, 14 May 2008, Robin Holt wrote:
Are you suggesting the sending side would not need to sleep or the
receiving side?
One thing to realize is that most of the time (read: pretty much *always*)
when we have the problem of wanting to sleep inside a spinlock, the
solution is actually to
On Wed, 14 May 2008, Robin Holt wrote:
Would it be acceptable to always put a sleepable stall in even if the
code path did not require the pages be unwritable prior to continuing?
If we did that, I would be freed from having a pool of invalidate
threads ready for XPMEM to use for that
On Wed, 14 May 2008, Christoph Lameter wrote:
The problem is that the code in rmap.c try_to_unmap() and friends loops
over reverse maps after taking a spinlock. The mm_struct is only known
after the rmap has been accessed. This means *inside* the spinlock.
So you queue them. That's what
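The "queue them" idea above can be sketched in userspace (a hedged analogy, not the kernel code: the mutex stands in for the rmap spinlock, and all names are invented). While the lock is held we may not sleep, so instead of invoking a sleepable notifier on each mm found during the walk, we collect them on a local list and do the sleepable work after unlocking:

```c
#include <pthread.h>
#include <stdlib.h>

/* Stand-in for the rmap spinlock; a real spinlock may not be held
 * across anything that sleeps, hence the queue-then-drain pattern. */
static pthread_mutex_t rmap_lock = PTHREAD_MUTEX_INITIALIZER;

struct mm_ref {
    int id;                  /* stands in for a struct mm_struct */
    struct mm_ref *next;
};

/* Walk the "reverse maps" under the lock, queueing each mm found. */
static struct mm_ref *collect_mms(const int *ids, int n)
{
    struct mm_ref *head = NULL;

    pthread_mutex_lock(&rmap_lock);
    for (int i = 0; i < n; i++) {
        struct mm_ref *r = malloc(sizeof(*r));
        r->id = ids[i];
        r->next = head;
        head = r;
    }
    pthread_mutex_unlock(&rmap_lock);
    return head;             /* caller may now sleep per entry */
}

/* Drain the queue with the lock dropped; sleepable callbacks
 * (e.g. a remote invalidate) would run per entry here. */
static int drain(struct mm_ref *head)
{
    int count = 0;
    while (head) {
        struct mm_ref *next = head->next;
        free(head);
        head = next;
        count++;
    }
    return count;
}
```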
On Thu, 8 May 2008, Andrea Arcangeli wrote:
Actually I looked both at the struct and at the slab alignment just in
case it was changed recently. Now after reading your mail I also
compiled it just in case.
Put the flag after the spinlock, not after the list_head.
Also, we'd need to make
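The placement advice above ("after the spinlock, not after the list_head") is about padding: on LP64, a 4-byte lock followed by pointers leaves a 4-byte hole that a 4-byte flag can fill for free. A minimal sketch (illustrative layouts only, not the real anon_vma):

```c
#include <stddef.h>

/* Flag placed right after the 4-byte lock: it fills the alignment
 * hole before the first pointer, so the struct does not grow. */
struct after_lock {
    unsigned int lock;       /* stands in for spinlock_t */
    unsigned int flag;       /* fills the padding hole */
    void *list_next;
    void *list_prev;
};

/* Flag placed after the list pointers: on LP64 the hole after the
 * lock is wasted, and tail padding grows the struct by 8 bytes. */
struct after_list {
    unsigned int lock;       /* 4-byte hole follows here on LP64 */
    void *list_next;
    void *list_prev;
    unsigned int flag;       /* followed by tail padding */
};
```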
On Wed, 7 May 2008, Andrew Morton wrote:
The patch looks OK to me.
As far as I can tell, authorship has been destroyed by at least two of the
patches (ie Christoph seems to be the author, but Andrea seems to have
dropped that fact).
The proposal is that we sneak this into 2.6.26. Are
On Wed, 7 May 2008, Andrea Arcangeli wrote:
Convert the anon_vma spinlock to a rw semaphore. This allows concurrent
traversal of reverse maps for try_to_unmap() and page_mkclean(). It also
allows the calling of sleeping functions from reverse map traversal as
needed for the notifier
On Wed, 7 May 2008, Andrea Arcangeli wrote:
I think the spinlock-rwsem conversion is ok under config option, as
you can see I complained myself to various of those patches and I'll
take care they're in a mergeable state the moment I submit them. What
XPMEM requires are different semantics
On Wed, 7 May 2008, Andrea Arcangeli wrote:
As far as I can tell, authorship has been destroyed by at least two of the
patches (ie Christoph seems to be the author, but Andrea seems to have
dropped that fact).
I can't follow this, please be more specific.
The patches were sent to
On Thu, 8 May 2008, Andrea Arcangeli wrote:
I rechecked and I guarantee that the patches where Christoph isn't
listed are developed by myself and he didn't write a single line on
them.
How long have you been doing kernel development?
How about you read SubmittingPatches a few times before
On Thu, 8 May 2008, Andrea Arcangeli wrote:
Ok so I see the problem Linus is referring to now (I received the hint
by PM too), I thought the order of the signed-off-by was relevant, it
clearly isn't or we're wasting space ;)
The order of the signed-offs is somewhat relevant, but no,
On Thu, 8 May 2008, Andrea Arcangeli wrote:
mmu_notifier_register only runs when windows or linux or macosx
boots. Who could ever care about the msec spent in mm_lock compared
to the time it takes Linux to boot?
Andrea, you're *this* close to going to my list of people who it is not
worth
On Thu, 8 May 2008, Andrea Arcangeli wrote:
At least for mmu-notifier-core given I obviously am the original
author of that code, I hope the From: of the email was enough even if
an additional From: andrea was missing in the body.
Ok, this whole series of patches have just been such a
On Wed, 7 May 2008, Christoph Lameter wrote:
Multiple vmas may share the same mapping or refer to the same anonymous
vma. The above code will deadlock since we may take some locks multiple
times.
Ok, so that actually _is_ a problem. It would be easy enough to also add
just a flag to the
On Wed, 7 May 2008, Robin Holt wrote:
In order to invalidate the remote page table entries, we need to message
(uses XPC) to the remote side. The remote side needs to acquire the
importing process's mmap_sem and call zap_page_range(). Between the
messaging and the acquiring a sleeping
On Thu, 8 May 2008, Andrea Arcangeli wrote:
Hi Andrew,
On Wed, May 07, 2008 at 03:59:14PM -0700, Andrew Morton wrote:
    CPU0:                        CPU1:
    spin_lock(global_lock)
    spin_lock(a-lock);           spin_lock(b-lock);
    ==
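The ordering argument in the two-column example can be sketched in userspace (hedged analogy; the lock names follow the example, everything else is invented): if every path that takes more than one of the inner locks first takes global_lock, two CPUs can never hold a-lock and b-lock in opposite orders, so the AB-BA deadlock cannot form:

```c
#include <pthread.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t a_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b_lock = PTHREAD_MUTEX_INITIALIZER;
static int protected_value;

/* Any path that will hold both inner locks takes global_lock first,
 * serializing multi-lock takers and ruling out inverted ordering. */
static void take_both(void)
{
    pthread_mutex_lock(&global_lock);
    pthread_mutex_lock(&a_lock);
    pthread_mutex_lock(&b_lock);
    protected_value++;
    pthread_mutex_unlock(&b_lock);
    pthread_mutex_unlock(&a_lock);
    pthread_mutex_unlock(&global_lock);
}

/* Thread entry point so two "CPUs" can race through take_both(). */
static void *worker(void *arg)
{
    (void)arg;
    take_both();
    return NULL;
}
```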
On Wed, 7 May 2008, Christoph Lameter wrote:
Set the vma flag when we locked it and then skip when we find it locked
right? This would be in addition to the global lock?
Yes. And clear it before unlocking (and again, testing if it's already
clear - you mustn't unlock twice, so you must
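The flag trick discussed above can be reduced to a minimal sketch (all names invented; the bool stands in for a vm_flags bit): the lock pass takes each shared object only the first time it is seen and flags every vma sharing it, and the unlock pass tests the flag so nothing is unlocked twice:

```c
#include <stdbool.h>

struct shared_obj {
    int lock_count;           /* e.g. a shared anon_vma / mapping */
};

struct fake_vma {
    struct shared_obj *obj;
    bool locked;              /* stands in for a vm_flags bit */
};

/* Lock each underlying object once, skipping vmas whose object was
 * already locked via an earlier vma in the walk. */
static void lock_all(struct fake_vma *v, int n)
{
    for (int i = 0; i < n; i++) {
        if (v[i].locked)
            continue;                  /* already locked via twin vma */
        v[i].obj->lock_count++;        /* "take" the lock once */
        for (int j = i; j < n; j++)    /* flag all sharers */
            if (v[j].obj == v[i].obj)
                v[j].locked = true;
    }
}

/* Unlock pass: test the flag first, because unlocking twice is a bug. */
static void unlock_all(struct fake_vma *v, int n)
{
    for (int i = 0; i < n; i++) {
        if (!v[i].locked)
            continue;                  /* must not unlock twice */
        v[i].obj->lock_count--;
        for (int j = i; j < n; j++)    /* clear all sharers */
            if (v[j].obj == v[i].obj)
                v[j].locked = false;
    }
}
```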
On Wed, 7 May 2008, Christoph Lameter wrote:
On Wed, 7 May 2008, Linus Torvalds wrote:
and you're now done. You have your mm_lock() (which still needs to be
renamed - it should be a mmu_notifier_lock() or something like that),
but you don't need the insane sorting. At most you
On Wed, 7 May 2008, Christoph Lameter wrote:
(That said, we're not running out of vm flags yet, and if we were, we
could just add another word. We're already wasting that space right now on
64-bit by calling it unsigned long).
We sure have enough flags.
Oh, btw, I was wrong - we
On Thu, 8 May 2008, Andrea Arcangeli wrote:
So because the bitflag can't prevent taking the same lock twice on two
different vmas in the same mm, we still can't remove the sorting
Andrea.
Take five minutes. Take a deep breath. And *think* about actually reading
what I wrote.
The
Andrea, I'm not interested. I've stated my standpoint: the code being
discussed is crap. We're not doing that. Not in the core VM.
I gave solutions that I think aren't crap, but I already also stated that
I have no problems not merging it _ever_ if no solution can be found. The
whole issue
On Thu, 8 May 2008, Andrea Arcangeli wrote:
But removing sort isn't worth it if it takes away ram from the VM even
when global_mm_lock will never be called.
Andrea, you really are a piece of work. Your arguments have been bogus
crap that didn't even understand what was going on from the
On Tue, 17 Jul 2007, H. Peter Anvin wrote:
S.Çağlar Onur wrote:
If I'm not wrong, X86_CMPXCHG64 depends on CONFIG_X86_PAE, which depends on
HIGHMEM64, and again if I'm not wrong this means distributions that want to
provide KVM must enable CONFIG_X86_PAE and CONFIG_HIGHMEM64G from now
On Sat, 14 Jul 2007, Avi Kivity wrote:
Linus, please do your usual thing from the repository and branch at
It has code like
+ /* Can deadlock when called with interrupts disabled */
+ WARN_ON(irqs_disabled());
+
/* prevent preemption and
On Fri, 1 Jun 2007, Avi Kivity wrote:
Please pull from the repository and branch
No. Not after -rc1. Not for something that changes core code and isn't a
core feature, and wasn't a regression.
The core issue is that we need a notification [...]
No. The core issue here is that people need
On Thu, 19 Apr 2007, Avi Kivity wrote:
Please pull from the 'linus' branch of
git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm.git
*please* put the branch-name after the git repo, so that I can
cut-and-paste without noticing only afterwards that the diffstat doesn't
match what it
On Thu, 19 Apr 2007, Jeff Garzik wrote:
What is the easiest way to completely undo a pull, reverting the branch to the
HEAD present before the pull?
You can either do
git reset --hard ORIG_HEAD
(git will set ORIG_HEAD before things like pulls or resets, so you can
always go
On Sun, 4 Mar 2007, Avi Kivity wrote:
The changes fall into three broad categories:
- initial kvm paravirtualization support
- the first batch of the stable userspace interface changes
- fixes, fixes, fixes
This is the absolute last time I say this.
WAY too late. You'd better get this in