In certain situations, it's possible for one of the jump labels in
drivers/net/ppp_generic.c to get mangled by a macro, causing a failure
in compilation. The following patch ameliorates this somewhat:
diff -r -N linux/drivers/net/ppp_generic.c linux.wli/drivers/net/ppp_generic.c
2345c2345
On Sun, Apr 15, 2007 at 10:48:24PM +0200, Ingo Molnar wrote:
2) plugsched did not allow on the fly selection of schedulers, nor did
it allow a per CPU selection of schedulers. IO schedulers you can
change per disk, on the fly, making them much more useful in
practice. Also, IO
* William Lee Irwin III [EMAIL PROTECTED] wrote:
I've been suggesting testing CPU bandwidth allocation as influenced by
nice numbers for a while now for a reason.
On Sun, Apr 15, 2007 at 09:57:48PM +0200, Ingo Molnar wrote:
Oh I was very much testing CPU bandwidth allocation as influenced
On Sun 2007-04-15 03:21:57, William Lee Irwin III wrote:
nvcsw and nivcsw are conventional variable names for these quantities.
On Sun, Apr 15, 2007 at 08:10:24PM +, Pavel Machek wrote:
I can't decipher them and would not want users to see them in /proc.
Would nonvoluntary_ctxt_switch
William Lee Irwin III wrote:
One of the reasons I never posted my own code is that it never met its
own design goals, which absolutely included switching on the fly. I
think Peter Williams may have done something about that.
It was my hope
to be able to do insmod sched_foo.ko until it became
William Lee Irwin III wrote:
Driver models for scheduling are not so far out. AFAICS it's largely a
tug-of-war over design goals, e.g. maintaining per-cpu runqueues and
switching out intra-queue policies vs. switching out whole-system
policies, SMP handling and all. Whether this involves load
* William Lee Irwin III [EMAIL PROTECTED] wrote:
Worse comes to worse I might actually get around to doing it myself.
Any more detailed descriptions of the test for a rainy day?
On Mon, Apr 16, 2007 at 01:24:40PM +0200, Ingo Molnar wrote:
the main complication here is that the handling
On Sun, Apr 15, 2007 at 04:31:54PM -0500, Matt Mackall wrote:
That's irrelevant. Plugsched was an attempt to get alternative
schedulers exposure in mainline. I know, because I remember
encouraging Bill to pursue it. Not only did you veto plugsched (which
may have been a perfectly reasonable
William Lee Irwin III wrote:
The sorts of explicit decisions I'd like to be made for these are:
(1) In a mixture of tasks with varying nice numbers, a given nice number
corresponds to some share of CPU bandwidth. Implementations
should not have the freedom to change
On Tue, Apr 17, 2007 at 02:17:22PM +1000, Peter Williams wrote:
I myself was thinking of this as the chance for a much needed
simplification of the scheduling code and if this can be done with the
result being reasonable it then gives us the basis on which to propose
improvements based on
On Tue, Apr 17, 2007 at 04:03:41PM +1000, Peter Williams wrote:
There's a lot of ugly code in the load balancer that is only there to
overcome the side effects of SMT and dual core. A lot of it was put
there by Intel employees trying to make load balancing more friendly to
their systems.
On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
All things are not equal; they all have different properties. I like
On Tue, Apr 17, 2007 at 08:15:03AM +0200, Nick Piggin wrote:
Exactly. So we have to explore those properties and evaluate performance
(in all meanings
On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
I had a quick look at Ingo's code yesterday. Ingo is always smart to
prepare a main dish (feature) with a nice sider (code cleanup) to Linus ;)
And even this code does that pretty nicely. The deadline designs looks
good,
Ingo Molnar wrote:
this is the second release of the CFS (Completely Fair Scheduler)
patchset, against v2.6.21-rc7:
http://redhat.com/~mingo/cfs-scheduler/sched-cfs-v2.patch
i'd like to thank everyone for the tremendous amount of feedback and
testing the v1 patch got - i could hardly
On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
Any chance you'd be willing to put down a few thoughts on what sorts
of standards you'd like to set for both correctness (i.e. the bare
minimum a scheduler implementation must do to be considered valid
beyond not oopsing
On Mon, Apr 16, 2007 at 04:10:59PM -0700, Michael K. Edwards wrote:
This observation of Peter's is the best thing to come out of this
whole foofaraw. Looking at what's happening in CPU-land, I think it's
going to be necessary, within a couple of years, to replace the whole
idea of CPU
* William Lee Irwin III [EMAIL PROTECTED] wrote:
The additive nice_offset breaks nice levels. A multiplicative priority
weighting of a different, nonnegative metric of cpu utilization from
what's now used is required for nice levels to work. I've been trying
to point this out politely
* William Lee Irwin III [EMAIL PROTECTED] wrote:
Also, given the general comments it appears clear that some
statistical metric of deviation from the intended behavior furthermore
qualified by timescale is necessary, so this appears to be headed
toward a sort of performance metric
William Lee Irwin III wrote:
Comments on which directions you'd like this to go in these respects
would be appreciated, as I regard you as the current project owner.
On Tue, Apr 17, 2007 at 06:00:06PM +1000, Peter Williams wrote:
I'd do a scan through LKML from about 18 months ago looking
On Tue, Apr 17, 2007 at 09:07:49AM -0400, James Bruce wrote:
Nonlinear is a must IMO. I would suggest X = exp(ln(10)/10) ~= 1.2589
That value has the property that a nice=10 task gets 1/10th the cpu of a
nice=0 task, and a nice=20 task gets 1/100 of nice=0. I think that
would be fairly
Ingo Molnar wrote:
Anyone who thinks that there exists only two kinds of code: 100% correct
and 100% incorrect with no shades of grey in between is in reality a sort
of an extremist: whom, depending on mood and affection, we could call
either a 'coding purist' or a 'coding taliban' ;-)
On Tue,
On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
yeah. If you could come up with a sane definition that also translates
into low overhead on the algorithm side that would be great!
On Tue, Apr 17, 2007 at 05:08:09PM -0500, Matt Mackall wrote:
How's this:
If you're running two
On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
I'm already working with this as my assumed nice semantics (actually
something with a specific exponential base, suggested in other emails)
until others start saying they want something different and agree.
On Tue, Apr 17
On Tue, Apr 17, 2007 at 04:00:53PM -0700, Michael K. Edwards wrote:
Works, that is, right up until you add nonlinear interactions with CPU
speed scaling. From my perspective as an embedded platform
integrator, clock/voltage scaling is the elephant in the scheduler's
living room. Patch in DPM
Peter Williams wrote:
William Lee Irwin III wrote:
I was tempted to restart from scratch given Ingo's comments, but I
reconsidered and I'll be working with your code (and the German
students' as well). If everything has to change, so be it, but it'll
still be a derived work. It would
to explain to admins and users so that they can
know what to expect from nicing tasks.
On Tue, Apr 17, 2007 at 03:59:02PM -0700, William Lee Irwin III wrote:
I'm not likely to write the testcase until this upcoming weekend, though.
On Tue, Apr 17, 2007 at 05:57:23PM -0500, Matt Mackall wrote
On Wed, Apr 18, 2007 at 12:55:25AM -0500, Matt Mackall wrote:
Why are processes special? Should user A be able to get more CPU time
for his job than user B by splitting it into N parallel jobs? Should
we be fair per process, per user, per thread group, per session, per
controlling terminal?
On Wed, Apr 18, 2007 at 05:48:11PM +0200, Ingo Molnar wrote:
static void requeue_task_fair(struct rq *rq, struct task_struct *p)
{
	dequeue_task_fair(rq, p);
	p->on_rq = 0;
-	enqueue_task_fair(rq, p);
+	/*
+	 * Temporarily insert at the last position of the tree:
+
On Wed, Apr 18, 2007 at 10:22:59AM -0700, Linus Torvalds wrote:
So I claim that anything that cannot be fair by user ID is actually really
REALLY unfair. I think it's absolutely humongously STUPID to call
something the Completely Fair Scheduler, and then just be fair on a
thread level.
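The two-level division argued for here can be sketched with purely illustrative arithmetic (an assumption about the policy, not CFS code): CPU is first split evenly among users, then each user's slice is split among that user's runnable threads.

```c
/* Share of one thread under per-user fairness: divide by the
 * number of users first, then by that user's thread count. */
double thread_share(int nr_users, int nr_user_threads)
{
	return 1.0 / nr_users / nr_user_threads;
}
```

On Andrew's example below of 100 busy httpd processes and one gzip under two different users, gzip would get half the machine and each httpd 1/200th, instead of gzip getting 1/101st under pure per-thread fairness.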
On Wed, Apr 18, 2007 at 08:35:22PM +0100, Hugh Dickins wrote:
I only have CONFIG_NUMA=y for build testing: surprised when trying a memhog
to see lots of other processes killed with No available memory (MPOL_BIND).
memhog is killed correctly once we initialize nodemask in constrained_alloc().
On Wed, Apr 18, 2007 at 07:50:17PM +0200, Ingo Molnar wrote:
this is the third release of the CFS patchset (against v2.6.21-rc7), and
can be downloaded from:
http://redhat.com/~mingo/cfs-scheduler/
this is a pure fix reported regressions release so there's much less
churn:
5 files
* Andrew Morton [EMAIL PROTECTED] wrote:
Yes, there are potential compatibility problems. Example: a machine
with 100 busy httpd processes and suddenly a big gzip starts up from
console or cron.
[...]
On Thu, Apr 19, 2007 at 08:38:10AM +0200, Ingo Molnar wrote:
h. How about the
On Thu, Apr 19, 2007 at 09:35:04AM -0700, Christoph Lameter wrote:
This patchset modifies the core VM so that higher order page cache pages
become possible. The higher order page cache pages are compound pages
and can be handled in the same way as regular pages.
The order of the pages is
William Lee Irwin III wrote:
I'd further recommend making priority levels accessible to kernel threads
that are not otherwise accessible to processes, both above and below
user-available priority levels. Basically, if you can get SCHED_RR and
SCHED_FIFO to coexist as intimate scheduler classes
On Thu, 19 Apr 2007, William Lee Irwin III wrote:
Oh dear. Per-file pagesizes are foul. Better to fix up the pagecache's
radix tree than to restrict it like this. There are other attacks on the
multiple horizontal internal tree node allocation problem beyond
outright B+ trees that allow radix
On Fri, Apr 20, 2007 at 02:42:27PM +0100, Mel Gorman wrote:
That's fair enough for the moment but relaxing would make ramfs
potentially usable as a replacement for hugetlbfs so there would be just
one ram-based filesystem instead of two.
Careful there. mmap() needs more than this.
(1)
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
I have a suggestion I'd like to make that addresses both nice and
fairness at the same time. As I understand the basic principle behind
this scheduler is to work out a time by which a task should make it onto
the CPU and then
On Fri, 20 Apr 2007, Mel Gorman wrote:
While this looks fine, it seems that clear_huge_page() and
clear_mapping_page() could share a common helper. I also note that
clear_huge_page() calls cond_resched() and this doesn't, which may be the
type of different behavior we want to avoid.
On Fri, Apr
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
Careful there. mmap() needs more than this.
(1) mapping-order is variable within an fs, so the architectural code
would need some vague awareness of the underlying page size
being variable unless the fs restricts it properly.
On Fri
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
The core VM can do that but the hugetlb architectural code can't fall
back to smaller page sizes. It also should not be put into a situation
where it needs to do so given the semantics it must honor.
On Fri, Apr 20, 2007 at 10:15:00AM -0700
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
Probably just terminological disagreement here. I was referring to
allocating the higher-order page from the fault path here, not mapping
it or a piece of it with a user pte.
On Fri, Apr 20, 2007 at 10:57:25AM -0700, Christoph Lameter wrote
On Wed, 18 Apr 2007, William Lee Irwin III wrote:
Mark the runqueues cacheline_aligned_in_smp to avoid false sharing.
On Fri, Apr 20, 2007 at 12:24:17PM -0700, Christoph Lameter wrote:
False sharing for a per cpu data structure? Are we updating that
structure from other processors
On Fri, Apr 20, 2007 at 12:24:17PM -0700, Christoph Lameter wrote:
False sharing for a per cpu data structure? Are we updating that
structure from other processors?
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
Primarily in the load balancer, but also in wakeups.
On Fri, Apr 20, 2007 at 12
On Fri, 20 Apr 2007, William Lee Irwin III wrote:
I'm not really convinced it's all that worthwhile of an optimization,
essentially for the same reasons as you, but presumably there's a
benchmark result somewhere that says it matters. I've just not seen it.
On Fri, Apr 20, 2007 at 12:44:55PM
On Fri, Apr 20, 2007 at 11:30:04AM -0700, Tong Li wrote:
This patch extends the existing Linux scheduler with support for
proportional-share scheduling (as a new KConfig option).
http://www.cs.duke.edu/~tongli/linux/linux-2.6.19.2-trio.patch
It uses a scheduling algorithm, called Distributed
William Lee Irwin III wrote:
This essentially doesn't look correct because while you want to enforce
the CPU bandwidth allocation, this doesn't have much to do with that
apart from the CPU bandwidth appearing as a term. It's more properly
a rate of service as opposed to a time at which
On Sat, Apr 21, 2007 at 09:54:01AM +0200, Ingo Molnar wrote:
In practice they can starve a bit when one renices thousands of tasks,
so i was thinking about the following special-case: to at least make
them easily killable: if a nice 0 task sends a SIGKILL to a nice 19 task
then we could
* William Lee Irwin III [EMAIL PROTECTED] wrote:
Suppose a table of nice weights like the following is tuned via
/proc/:

	nice	weight
	 -20	21
	   0	 1
	  -1	 2
	  19	 0.0476

Essentially 1/(n+1) when n >= 0 and 1-n when n < 0.
On Sat, Apr 21, 2007 at 10:57:29AM
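The piecewise weight function behind the quoted table can be sketched as below; `wli_weight()` is an illustrative name. (Note the quoted 0.0476 entry is closer to 1/21 than to the 1/20 this formula gives for nice 19, so the exact boundary constant in the original table may differ.)

```c
/* Piecewise nice weight: 1/(n+1) for n >= 0, 1-n for n < 0,
 * per the description in the quoted message. */
double wli_weight(int nice)
{
	return nice >= 0 ? 1.0 / (nice + 1) : 1.0 - nice;
}
```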
On Sat, Apr 21, 2007 at 06:00:08PM +0200, Ingo Molnar wrote:
 arch/i386/kernel/ioport.c   |   13 ++---
 arch/x86_64/kernel/ioport.c |    8 ++--
 drivers/block/loop.c        |    5 -
 include/linux/sched.h       |    7 +++
 kernel/sched.c              |   40
On Sat, 21 Apr 2007, Willy Tarreau wrote:
If you remember, with 50/50, I noticed some difficulties to fork many
processes. I think that during a fork(), the parent has a higher probability
of forking other processes than the child. So at least, we should use
something like 67/33 or 75/25 for
On 4/21/07, Kyle Moffett [EMAIL PROTECTED] wrote:
It might be nice if it was possible to actively contribute your CPU
time to a child process. For example:
int sched_donate(pid_t pid, struct timeval *time, int percentage);
On Sat, Apr 21, 2007 at 12:49:52PM -0700, Ulrich Drepper wrote:
If
On 4/21/07, Linus Torvalds [EMAIL PROTECTED] wrote:
And how the hell do you imagine you'd even *know* what thread holds the
futex?
On Sat, Apr 21, 2007 at 06:46:58PM -0700, Ulrich Drepper wrote:
We know this in most cases. This is information recorded, for
instance, in the mutex data
On Sat, Apr 21, 2007 at 02:17:02PM -0400, Gene Heskett wrote:
CFS-v4 is quite smooth in terms of the users experience but after prolonged
observations approaching 24 hours, it appears to choke the cpu hog off a bit
even when the system has nothing else to do. My amanda runs went from 1 to
On 4/22/07, William Lee Irwin III [EMAIL PROTECTED] wrote:
I'm just looking for what people want the API to be here. With that in
hand we can just go out and do whatever needs to be done.
On Sun, Apr 22, 2007 at 12:17:31AM -0700, Ulrich Drepper wrote:
I think a sched_yield_to is one interface
On Mon, Apr 23, 2007 at 09:36:13PM -0700, David Rientjes wrote:
oom_kill_task() calls __oom_kill_task() to OOM kill a selected task.
When finding other threads that share an mm with that task, we need to
kill those individual threads and not the same one.
ISTR shooting down something of this
On Mon, Apr 23, 2007 at 05:59:06PM -0700, Li, Tong N wrote:
I don't know if we've discussed this or not. Since both CFS and SD claim
to be fair, I'd like to hear more opinions on the fairness aspect of
these designs. In areas such as OS, networking, and real-time, fairness,
and its more
On Tue, Apr 24, 2007 at 03:21:05PM -0700, [EMAIL PROTECTED] wrote:
V2-V3
- More restructuring
- It actually works!
- Add XFS support
- Fix up UP support
- Work out the direct I/O issues
- Add CONFIG_LARGE_BLOCKSIZE. Off by default which makes the inlines revert
back to constants.
On Tue, Apr 24, 2007 at 06:22:53PM -0700, Li, Tong N wrote:
The goal of a proportional-share scheduling algorithm is to minimize the
above metrics. If the lag function is bounded by a constant for any
thread in any time interval, then the algorithm is considered to be
fair. You may notice that
* Li, Tong N [EMAIL PROTECTED] wrote:
[...] A corollary of this is that if both threads i and j are
continuously runnable with fixed weights in the time interval, then
the ratio of their CPU time should be equal to the ratio of their
weights. This definition is pretty restrictive since it
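The lag metric described above can be sketched in a few lines; the notation is assumed from the description (not taken from the Trio patch): over an interval in which thread i is continuously runnable, its ideal service is w_i/W of the wall time, where W is the total runnable weight, and lag is ideal minus actual service received. Bounded lag for every thread over every interval is the fairness criterion.

```c
/* Ideal service for a thread of weight wi over an interval,
 * when the total runnable weight is wtotal. */
double ideal_service(double wi, double wtotal, double interval)
{
	return wi / wtotal * interval;
}

/* Lag: ideal service minus service actually received. */
double lag(double wi, double wtotal, double interval, double received)
{
	return ideal_service(wi, wtotal, interval) - received;
}
```

The corollary quoted above follows directly: with fixed weights and both threads continuously runnable, the ratio of ideal (and hence, under bounded lag, actual) CPU times equals the ratio of the weights.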
On Fri, Mar 09, 2007 at 12:07:06PM +0300, Serge Belyshev wrote:
If you see sched_yield() when stracing any 3d program, I suggest you
to try this bruteforce workaround, which works fine for me,
disable sched_yield():
May I suggest LD_PRELOAD of a library consisting of only a nopped
On Thu, Mar 08, 2007 at 10:31:48PM -0800, Linus Torvalds wrote:
No. Really.
I absolutely *detest* pluggable schedulers. They have a huge downside:
they allow people to think that it's ok to make special-case schedulers.
And I simply very fundamentally disagree.
If you want to play with a
William Lee Irwin III wrote:
I consider policy issues to be hopeless political quagmires and
therefore stick to mechanism. So even though I may have started the
code in question, I have little or nothing to say about that sort of
use for it.
There's my longwinded excuse for having originated
William Lee Irwin III wrote:
The short translation of my message for you is Linus, please don't
LART me too hard.
On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
Right.
Given where the code originally came from, I've got bullets to dodge.
William Lee Irwin III wrote:
This sort
On Fri, Mar 09, 2007 at 05:18:31PM -0500, Ryan Hope wrote:
from what I understood, there is a performance loss in plugsched
schedulers because they have to share code
even if pluggable schedulers is not a viable option, being able to
choose which one was built into the kernel would be easy
On Thu, 8 Mar 2007 22:12:27 -0500 Mathieu Desnoyers [EMAIL PROTECTED] wrote:
Fix sparc TIF_USEDFPU flag atomicity
Non atomic update of TIF can be very dangerous, except at thread structure
creation time. Here I standardize the TIF_USEDFPU usage of the sparc arch.
Applies on 2.6.20.
William Lee Irwin III wrote:
This sort of concern is too subjective for me to have an opinion on it.
On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
How diplomatic.
William Lee Irwin III wrote:
Impoliteness doesn't accomplish anything I want to do.
On Sat, Mar 10, 2007 at 08:34
On Sat, Mar 10, 2007 at 03:17:43AM -0500, Mathieu Desnoyers wrote:
@@ -348,7 +348,7 @@ void exit_thread(void)
 #ifndef CONFIG_SMP
 	if(last_task_used_math == current) {
 #else
-	if(current_thread_info()->flags & _TIF_USEDFPU) {
+	if(test_ti_thread_flag(current_thread_info(),
On Sat, 10 Mar 2007 00:26:46 -0800, William Lee Irwin III [EMAIL PROTECTED]
wrote:
Oh dear. Could we be a bit more idiomatic here? For instance,
something like:
On Sat, Mar 10, 2007 at 12:29:44AM -0800, David Miller wrote:
Ok I pulled the sparc32 patch back out until there is some
consensus
William Lee Irwin III wrote:
Last I checked there were limits to runtime configurability centering
around only supporting a compiled-in set of scheduling drivers, unless
Peter's taken it the rest of the way without my noticing. It's unclear
what you have in mind in terms of dynamic
On Sun, 11 Mar 2007 13:28:22 +1100 Con Kolivas [EMAIL PROTECTED] wrote:
Well... are you advocating we change sched_yield semantics to a
gentler form?
On Sat, Mar 10, 2007 at 07:16:14PM -0800, Andrew Morton wrote:
From a practical POV: our present yield() behaviour is so truly awful that
it's
On Sat, Mar 10, 2007 at 06:09:34PM -0800, Christoph Lameter wrote:
i386: Convert to quicklists
Implement the i386 management of pgd and pmds using quicklists.
I approve, though it would be nice if ptes had an interface operating
on struct page * to use.
On Sat, Mar 10, 2007 at 06:09:34PM
On Tue, Mar 13, 2007 at 04:47:56AM -0800, Andrew Morton wrote:
I'm trying to remember why we ever would have needed to zero out the
pagetable pages if we're taking down the whole mm? Maybe it's
because oh, the arch wants to put this page into a quicklist to
recycle it, which is all rather
On Mon, Mar 12, 2007 at 03:51:57PM -0700, David Miller wrote:
Someone with some extreme patience could do the sparc 32-bit port too,
in fact it's lacking the cached PGD update logic that x86 et al. have
so it would even end up being a bug fix :-) This lack is why sparc32
pre-initializes the
On Tue, Mar 13, 2007 at 11:31:53PM -0400, Gene Heskett wrote:
Now, can someone suggest a patch I can revert that might fix this? The
total number of patches between 2.6.20 and 2.6.21-rc1 will have me
building kernels to bisect this till the middle of June at this rate.
4 billion patches
at 10:07:21PM -0700, William Lee Irwin III wrote:
4 billion patches could be bisected in 34 boots. Between 2.6.20 and
2.6.21-rc1 there are only:
$ git rev-list --no-merges v2.6.20..v2.6.21-rc1 |wc -l
3118
patches, requiring 14 boots. In general ceil(log(n)/log(2))+2 boots.
Of course
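The bisection arithmetic quoted above is easy to check with integer doubling rather than floating-point log(); the +2 presumably covers verifying the good and bad endpoints, per the post:

```c
/* Boots needed to bisect n patches: ceil(log2(n)) + 2. */
int bisect_boots(unsigned long long n)
{
	int steps = 0;
	unsigned long long span = 1;

	while (span < n) {	/* counts ceil(log2(n)) doublings */
		span *= 2;
		steps++;
	}
	return steps + 2;
}
```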
On Wednesday 14 March 2007, William Lee Irwin III wrote:
On Tue, Mar 13, 2007 at 11:31:53PM -0400, Gene Heskett wrote:
Now, can someone suggest a patch I can revert that might fix this?
The total number of patches between 2.6.20 and 2.6.21-rc1 will have me
building kernels to bisect
On Wed, Mar 14, 2007 at 10:30:59AM +0300, Pavel Emelianov wrote:
I'm looking at how alloc_pid() works and can't understand
one (simple/stupid) thing.
It first kmem_cache_alloc()-s a strct pid, then calls
alloc_pidmap() and at the end it taks a global pidmap_lock()
to add new pid to hash.
The
On Wed, Mar 14, 2007 at 08:12:35AM -0600, Eric W. Biederman wrote:
If we do dig into this more we need to consider a radix_tree to hold
the pid values. That could replace both the pid map and the hash
table, gracefully handle both large and small pid counts, might
be a smidgin simpler,
On Wed, Mar 14, 2007 at 09:18:50AM +, Al Viro wrote:
Signed-off-by: Al Viro [EMAIL PROTECTED]
---
arch/sparc/mm/init.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
Dave, I trust you'll pick it up until I get a git tree going.
Acked-by: William Irwin [EMAIL PROTECTED]
--
William Lee Irwin III [EMAIL PROTECTED] writes:
Radix trees' space behavior is extremely poor in sparsely-populated
index spaces. There is no way they would save space or even come close
to the current space footprint.
On Wed, Mar 14, 2007 at 10:54:07AM -0600, Eric W. Biederman wrote
On Thu, Mar 15, 2007 at 07:19:21PM +0530, Ankita Garg wrote:
Looking at oom_kill.c, found that the intention to not kill the selected
process if any of its children/siblings has OOM_DISABLE set, is not being met.
Signed-off-by: Ankita Garg [EMAIL PROTECTED]
Index:
On Fri, Mar 16, 2007 at 07:25:53AM +1100, Nick Piggin wrote:
I would just avoid the complexity and setup/teardown costs, and just
use a vmalloc'ed global hash for NUMA.
This patch is not the way to go, but neither are vmalloc()'d global
hashtables. When you just happen to hash to the wrong
On Tue, Mar 13, 2007 at 06:12:44PM -0700, William Lee Irwin III wrote:
There are furthermore distinctions to make between fork() and execve().
fork() stomps over the entire process address space copying pagetables
en masse. After execve() a process incrementally faults in PTE's one at
a time
William Lee Irwin III [EMAIL PROTECTED] writes:
I'd not mind something better than a hashtable. The fib tree may make
more sense than anticipated. It's truly better to switch data structures
completely than fiddle with e.g. hashtable sizes. However, bear in mind
the degenerate space behavior
On Mon, Feb 19, 2007 at 10:31:34AM -0800, Adam Litke wrote:
+struct pagetable_operations_struct {
+	int (*fault)(struct mm_struct *mm,
+		     struct vm_area_struct *vma,
+		     unsigned long address, int write_access);
+	int (*copy_vma)(struct mm_struct *dst, struct
On Sat, Mar 17, 2007 at 08:11:57AM +0100, Mike Galbraith wrote:
On a side note, I wonder how long it's going to take to fix all the
X/client combinations out there.
AIUI X's clients largely access it via libraries X ships, so the X
update will sweep the vast majority of them in one shot. You'll
On Sat, Mar 17, 2007 at 10:41:01PM +0200, Avi Kivity wrote:
Well, the heuristic here is that process == job. I'm not sure heuristic
is the right name for it, but it does point out a deficiency.
A cpu-bound process with many threads will overwhelm a cpu-bound
single-threaded process.
Ingo Molnar wrote:
what do you think about the idea i suggested: to do an x32_/x64_ prefix
(or _32/_64 postfix), in a brute-force way, _right away_. I.e. do not
have any overlap of having both arch/i386/ and arch/x86_64/ and
arch/x86/ - move everything to arch/x86/ right now.
On Sun, Mar
On Mon, Mar 19, 2007 at 01:05:02PM -0700, Adam Litke wrote:
Andrew, given the favorable review of these patches the last time
around, would you consider them for the -mm tree? Does anyone else
have any objections?
We need a new round of commentary for how it should integrate with
Nick
Adam Litke wrote:
struct vm_operations_struct * vm_ops;
+const struct pagetable_operations_struct * pagetable_ops;
On Wed, Mar 21, 2007 at 03:18:30PM +1100, Nick Piggin wrote:
Can you remind me why this isn't in vm_ops?
Also, it is going to be hugepage-only, isn't it? So should the
William Lee Irwin III wrote:
ISTR potential ppc64 users coming out of the woodwork for something I
didn't recognize the name of, but I may be confusing that with your
patch. I can implement additional users (and useful ones at that)
needing this in particular if desired.
On Wed, Mar 21, 2007
William Lee Irwin III wrote:
I'm tied up elsewhere so I won't get to it in a timely fashion. Maybe
in a few weeks I can start up on the first two of the bunch.
On Wed, Mar 21, 2007 at 05:51:23PM +1100, Nick Piggin wrote:
Care to give us a hint? :)
The first is something DISM-like. I've
On Wed, 21 Mar 2007 14:43:48 CDT, Adam Litke said:
The main reason I am advocating a set of pagetable_operations is to
enable the development of a new hugetlb interface.
On Wed, Mar 21, 2007 at 03:51:31PM -0400, [EMAIL PROTECTED] wrote:
Do you have an exit strategy for the *old* interface?
On Wed, Mar 21, 2007 at 03:26:59PM -0700, William Lee Irwin III wrote:
My exit strategy was to make hugetlbfs an alias for ramfs when ramfs
acquired the necessary functionality until expand-on-mmap() was merged.
That would've allowed rm -rf fs/hugetlbfs/ outright. A compatibility
wrapper
Adam Litke wrote:
We didn't want to bloat the size of the vm_ops struct for all of its
users.
On Thu, Mar 22, 2007 at 10:02:07AM +1100, Nick Piggin wrote:
But vmas are surely far more numerous than vm_ops, aren't they?
It should be clarified that the pointer to the operations structure
in
On Thu, Mar 22 2007, Eric Dumazet wrote:
Sure, but it's the first Tomas patch :)
On Thu, Mar 22, 2007 at 02:54:57PM +0100, Jens Axboe wrote:
The more the reason to guide him in the direction of a right solution,
instead of extending the current bad one!
On Thu, Mar 22 2007, Eric Dumazet
On Thu, Mar 22, 2007 at 01:21:46PM -0800, Tim Chen wrote:
I've tried running Volanomark and found an 80% regression
with RSDL 0.31 scheduler on 2.6.21-rc4 on a 2 socket Core 2 quad cpu
system (4 cpus per socket, 8 cpus for system).
The results are sensitive to rr_interval. Using Con's patch to
On Fri, Mar 23, 2007 at 11:04:18AM +0100, Ingo Molnar wrote:
isnt this patented by MS? (which might not worry you SuSE/Novell guys,
but it might be a worry for the rest of the world ;-)
On Fri, Mar 23, 2007 at 11:32:44AM +0100, Nick Piggin wrote:
Hmm, it looks like they have implemented a
On Thu, Mar 22, 2007 at 11:48:48PM -0800, Andrew Morton wrote:
afaict that two-year-old, totally-different patch has nothing to do with my
repeatedly-asked question. It appears to be consolidating three separate
quicklist allocators into one common implementation.
In an attempt to answer my