Aunt Tillie doesn't even know what a kernel is, nor does she want
to. I think it's fair to assume that people who configure and
compile their own kernel (as opposed to using the distribution
supplied ones) know what they are doing.
I'd like to break these assumptions. Or at the very least
1. The Mac derivations were half-right. The MAC_SCC one is good but Macs
can have either of two different SCSI controllers. I fixed that with help
from Ray Knight, who maintains the 68K Mac port.
If I understand the philosophy correctly, it is still possible to specify
additional cards for
order to hold down ruleset complexity and simplify the user
experience. The cost of deciding that the answer to that question is
The user experience can be simplified by a NOVICE/EASY/SANE_DEFAULTS
option, and perhaps a HACKER option for the really strange
but _theoretically_ ok stuff.
If you run into a case where you have a config which would work, but
CML2 doesn't let you, why don't you fix the grammar instead of saying
CML2 is wrong? Let's not confuse these two issues as well.
Strongly agree. Especially since I'm pushing for an explicit recognition
of the difference
Time to hunt around for a 386 or 486 which is limited to such
a small amount of RAM ;)
I've got an old knackered 486DX/33 with 8Mb RAM (in 30-pin SIMMs, woohoo!),
a flat CMOS battery, a 2Gb Maxtor HD that needs a low-level format every
year, and no case. It isn't running anything right now...
* Live Upgrade
LOBOS will let one Linux kernel boot another, but that requires a boot
step, so it is not a live upgrade. So, no, AFAIK.
If you build nearly everything (except, obviously what you need to boot) as
modules, you can unload modules, build new versions, and reload them. So,
you
At 12:17 am +0100 3/6/2001, M.N. wrote:
Basically, that's the question. I compiled my kernel with the SCSI AIC7xxx.o
driver as a module, and then when it booted up, it panicked. I thought it was
some sort of a kernel bug, but it didn't really seem that way when I
recompiled the kernel with SCSI
Now that you provide source for r5 and dx_hack_hash, let me feed my
collections to them.
r5: catastrophic
dx_hack_hash: not bad, but the linear hash is better.
snip verbose results
So, not only does the linear hash normally provide a shorter worst-case
chain, its results are actually more
At 2:32 am + 25/2/2001, Jeremy Jackson wrote:
Jeff Garzik wrote:
(about optimizing kernel network code for busmastering NIC's)
Disclaimer: This is 2.5, repeat, 2.5 material.
Related question: are there any 100Mbit NICs with cpu's onboard?
Something mainstream/affordable?(i.e. not 1G
Would it not be useful if the isa-pnp driver would fall back
to utilizing the PnP BIOS (if possible) in order to read and
I would find this EXTREMELY useful... my Compaq laptop's
hot-dock with power eject will only work if Linux uses
PnP BIOS's insert/eject methods.
I saw some code in early
I'm seeing a lot of messages in my gateway's system log of the form:
lithium kernel: NAT: 0 dropping untracked packet c233f340 1 10.38.10.67 ->
224.0.0.2
Virtually all these packets come from machines on the student LAN on the
"outside" of the gateway. Whether or not iptables is configured to
milkplus:~# hdparm /dev/hda
/dev/hda:
 multcount    =  0 (off)
 I/O support  =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
geometry = 2584/240/63, sectors = 39070080,
Does anyone know whereabouts I could go to get an index of all
configurations options (i.e. drivers, etc.) that are available in the
latest Linux kernel? I am waiting on a kernel mode driver for my USB
digital camera, but I don't want to go ahead and download the full 24Mb
just to find out if the
1) ES1371 driver in 2.4.2 produces high-pitched buzzing instead of sound.
2) AudioPCI/97 card in friend's Duron-based machine (very similar to mine,
but different soundcard) works fine under Mandrake 7.1 stock kernel
(2.2.15-4mdk), but produces only loud, high-pitched buzzing noises when
used
I've run the test on my own system and noted something interesting about
the results:
When the write() call extended the file (rather than just overwriting a
section of a file already long enough), the performance drop was seen, and
it was slower on SCSI than IDE - this is independent of whether
I don't know if there is any way to turn off the write buffer on an IDE disk.
hdparm has an option of this nature, but it makes no difference (as I
reported). It's worth noting that even turning off UDMA to the disk on my
machine doesn't help the situation - although it does slow things down a
It's pretty clear that the IDE drive(r) is *not* waiting for the physical
write to take place before returning control to the user program, whereas
the SCSI drive(r) is. Both devices appear to be performing the write
Wrong, IDE does not unplug thus the request is almost, I hate to admit it
i assume you meant to time the xlog.c program? (or did i miss another
program on the thread?)
Yes.
i've an IBM-DJSA-210 (travelstar 10GB, 5411rpm) which appears to do
*something* with the write cache flag -- it gets 0.10s elapsed real time
in default config; and gets 2.91s if i do "hdparm
Pathological shutdown pattern: assuming scatter-gather is not allowed (for
IDE), and a 20ms full-stroke seek, write sectors at alternately opposite
ends of the disk, working inwards until the buffer is full. 512-byte
sectors, 2MB of them, is 4000 writes * 20ms = around 80 seconds (not
On Tue, 6 Mar 2001, Mike Black wrote:
Write caching is the culprit for the performance diff:
Indeed, and my during-the-boring-lecture benchmark on my 18Gb IBM
TravelStar bears this out. I was confused earlier by the fact that one of
my Seagate drives blatantly ignores the no-write-caching
Jonathan Morton ([EMAIL PROTECTED]) wrote :
The OS needs to know the physical act of writing data has finished
before
it tells the m/board to cut the power - period. Pathological data sets
included - they are the worst case which every engineer must take into
account. Out of interest, does
VP_IDE: IDE controller on PCI bus 00 dev 39
VP_IDE: chipset revision 16
VP_IDE: not 100% native mode: will probe irqs later
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.1
ide0: BM-DMA at
I am not going to bite on your flame bait, and you are free to waste your money.
I don't flamebait. I was trying to clear up some confusion...
No, SCSI does with queuing.
I am saying that the ata/ide driver rips the heart out of the
io_request_lock and holds it way too darn long. This means that upon execution
Is there something generally wrong with how Linux determines total CPU
usage (via procmeter3 and top) when dealing with applications that are
threaded? I routinely get 0% cpu usage when playing mpegs and mp3s and
some avi's even (DivX when using no software enhancement) ... Somehow I
doubt
It's pretty clear that the IDE drive(r) is *not* waiting for the physical
write to take place before returning control to the user program, whereas
the SCSI drive(r) is.
This would not be unexpected.
IDE drives generally always do write buffering. I don't even know if you
_can_ turn it off.
Indeed. The whole concept is fatally flawed; probably the biggest
challenge facing a cracker attacking this system is choosing which of the
many avenues to start with :-)
1. The drivers. I really like displaying audio and video via my hard
drive, so I use drivers which do that...
Or you could
At this point I am 100% lost. Any help would be
greatly appreciated. I am willing to do any testing
of the system that anyone may need. Currently I have
no working copy of Linux on the system. My normal
process to get running is to install Slackware, then
download 2.4.2 and the latest -ac patch.
And we have done experiments with controlling interrupts and running
the RX at "lower" priority. The idea is to take the RX interrupt and
immediately postpone the RX processing to a tasklet. The tasklet opens up
for new RX interrupts when it's done. This way dropping now occurs outside the box since and
Duh, before making such a claim you should consider the fact that
this is overclocking your PCI/AGP bus and I have yet to see any
graphic cards/IDE controllers/other devices which are rated for
37MHz PCI bus speed.
The "blue and white" PowerMac G3 and certain early PowerMac G4s used a
66MHz
- automated heavy stress testing
This would be an interesting one to me, from a benchmarking POV. I'd like
to know what my hardware can really do, for one thing - it's all very well
saying this box can do X Whetstones and has a 100Mbit NIC, but it's a much
more solid thing to be able to say "my
I've been noticing a problem associated with certain pairings of
applications on my home LAN, specifically when attempting to send large
amounts of data through some types of forwarder. I have just been able to
isolate the exact symptoms and a possible cause of the problem, which I
describe
Not entirely sure whether this is the right place to ask support questions,
but here goes...
I have set up a gateway machine running SuSE 6.4 and kernel 2.4.0-test12
for a family I am staying with in NM. The gateway is running fine on a
28.8 modem now, but the intent is to use it with the ADSL
On Sun, 31 Dec 2000, Alan Cox wrote:
How is this solved? Personally, I am behind a CIPE tunnel with an MTU of
1442 or something like that. I experienced problems to some places and
You have to get the other end to fix it.
Could it be some kind of incompatibility at the tunnel level that
Where is a patch to allow the sensible OOM I had in prior kernels?
(cause this crap is getting pitched)
I gave Alan a patch to fix the problem where the OOM activates too early
(eg. when there's still plenty of swap and buffer memory to eat). I don't
know whether this made it into the
the only general issue is that kx133 systems seem to be difficult
to configure for stability. ugly things like tweaking Vio.
there's no implication that has anything to do with Linux, though.
When I reported my problem a couple weeks back another fellow
said he and several others on the
I'm using an Abit KT7 board (KT133) and my new 1GHz T'bird (running 50-60°C
in a warm room) is giving me no trouble. This is with the board and RAM
pushed as fast as it will go without actually overclocking anything... and
yes, I do have Athlon/K7 optimisations turned on in my kernel
At 7:20 am +0100 5/5/2001, Mark Hahn wrote:
On Fri, 4 May 2001, Seth Goldberg wrote:
Hi,
Before I go any further with this investigation, I'd like to get an
idea
of how much of a performance improvement the K7 fast_page_copy will give
me.
Can someone suggest the best benchmark to test
At 3:41 pm +0100 5/5/2001, Alan Cox wrote:
My wild guess is that with the faster code, the K7 is avoiding loading
cache lines just to write them out again, and is just writing tons of data.
The PPC G4 - and perhaps even the G3 - performs a similar trick
automatically, without special
- page_count(page) == (1 + !!page->buffers));
Two inversions in a row? I'd like to see that made more explicit,
otherwise it looks like a bug to me. Of course, if it IS a bug...
--
from: Jonathan Chromatix
That said, anyone who doesn't understand the former should probably
get some more C experience before commenting on others' code...
I understood it, but it looked very much like a typo.
--
from: Jonathan Chromatix Morton
mail:
On a side question: does Linux support swap-files in addition to
swap-partitions? Even if that has a performance penalty, when the system
is swapping performance is dead anyway.
Yes. Simply use mkswap and swapon/off on a regular file instead of a
partition device. I don't notice any
It seems bizarre that a 4GB machine with a working set _far_ lower than that
should be dying from OOM and swapping itself to death, but that's life in 2.4
land.
I posted a fix for the OOM problem long ago, and it didn't get integrated
(even after I sent Alan a separated-out version from the
I am waiting patiently for the bug to be fixed. However, it is a real
embarrassment that we can't run this stable kernel in production yet
because something as fundamental as this is so badly broken.
Rest assured that a fix is in the works. I'm already seeing a big
improvement in behaviour on my
Did you try to put twice as much swap as you have RAM ? (e.g. add a
512M swapfile to your box)
This is what Linus recommended for 2.4 (swap = 2 * RAM), saying
that anything less won't do any good: 2.4 overallocates swap even
if it doesn't use it all. So in your case you just have enough
I'd be happy to write a new routine in assembly
I sincerely hope you're joking.
It's the algorithm that needs fixing, not the implementation of that
algorithm. Writing in assembler? Hope you're proficient at writing in
x86, PPC, 68k, MIPS (several varieties), ARM, SPARC, and whatever other
As suggested by Linus, I've cleaned the reapswap code to be contained
inside an inline function. (yes, the if statement is really ugly)
I can't seem to find the patch which adds this behaviour to the background
scanning. Can someone point me to it?
I've just sent Linus a patch to free swap cache pages at the time we
This is going to make all pages have age 0 on an idle system after some
time (the old code from Rik which has been replaced by this code tried to
avoid that)
There's another reason why I think the patch may be ok even without any
added logic: not only does it simplify the code and remove a
At 12:29 am +0100 8/6/2001, Shane Nay wrote:
(VM report at Marcelo Tosatti's request. He has mentioned that rather than
complaining about the VM, people should mention what their experiences were. I
have tried to do so in the way that he asked.)
By performance you mean interactivity or
[ Re-entering discussion after too long a day and a long sleep... ]
There is the problem that some people want pure interactive
performance, while others are looking for throughput over all else,
but those are both extremes of the spectrum. Though I suspect
raw throughput is the less
On the subject of Mike Galbraith's kernel compilation test, how much
physical RAM does he have for his machine, what type of CPU is it, and what
(approximate) type of device does he use for swap? I'll see if I can
partially duplicate his results at this end. So far all my tests have been
My box has
320280K
from /proc/meminfo
17140 buffer
123696 cache
32303 free
leaving unaccounted
123627K
This is your processes' memory, the inode and dentry caches, and possibly
some extra kernel memory which may be allocated after boot time. It is
*very* much accounted for.
clock drift of a few minutes per day.
That's about 0.1%. It may be relatively large compared to tolerances of
hardware clocks, but it's realistically tiny. It certainly compares
favourably with mkLinux on my PowerBook 5300, which usually drifts by
several hours per day regardless of actual
Btw: can the application somehow ask the tcp/ip stack what was
actually acked?
(ie. how many bytes were acked).
no, but it's not necessarily a useful number anyhow -- because it's
possible that the remote end ACKd bytes but the ACK never arrives. so you
can get into a situation where the
I have seen school projects with interfaces done in java (to be 'portable')
and you could go to have a coffee while a menu pulled down.
Yeah, but the slowness there comes from the phrase school project and not
the phrase done in java. I've seen menuing interfaces on a 1 mhz commodore
64
Only the truly stupid would assume accuracy from decimal places.
Well then, tell all the teachers in this world that they're stupid, and tell
everyone who learnt from them as well.
*All*?
I'm in high school (gd. 11, junior)
and my physics teacher is always screaming at us for
Now my question is how can it be
thrashing with swap explicitly turned off?
Easy. All applications are themselves swap space - the binary is
merely memory-mapped onto the executable file. When the system gets
low on memory, the only thing it can do is purge some binary pages,
and then
The conclusion of most of this discussion is in my FREENIX
paper, which can be found at http://www.surriel.com/lectures/.
Aha... that paper answers a lot of the questions I had about how
things work. I seem to remember asking some of them, too, and didn't
get an answer... :P
--
There is a simple change in strategy that will fix up the updatedb case quite
nicely, it goes something like this: a single access to a page (e.g., reading
it) isn't enough to bring it to the front of the LRU queue, but accessing it
twice or more is. This is being looked at.
Say, when a page is
I have a D-Link DFE-530TX Rev A, PCI ethernet card, but it refuses
to work.
I have looked at http://www.scyld.com/network/index.html#pci
which suggests using the via-rhine driver.
I did this and compiled it into the kernel. It detects it at boot (via-
rhine v1.08-LK1.1.6 8/9/2000 Donald Becker)
At 1:51 pm + 2/2/2001, Pavel Machek wrote:
Hi!
I am asking because I have just ordered a new drive for my Vaio (8.1 gig
in a 8.45mm drive!) and I want to install 2.4.x on it. (I like getting
8.1GB in under a centimeter? That's 8.1GB in a compactflash slot?
In general, i think the normal
The attached patch for the via-diag program plays with a few registers.
Run it as 'via-diag -aaeemm -I' then do a 'ifconfig eth0 down; ifconfig
eth0 up' and see if anything happens.
OK, after a little trouble applying the patch, here's what I found:
Starting with the card in working condition,
You can always try writing all the registers with "good" values.
No good - nothing actually changes except 16 bits at 0x6C, and that doesn't
change to anything useful.
Is there a reset 'thing' for these chips, that sets them back to
factory settings (like switching them off)?
[snip]
So. How
The units seem to vary. I suggest using fundamental SI units.
That would be meters, kilograms, seconds, and maybe a very
few others -- my memory fails me on this.
There are lots of SI units, one for each physical dimension that can be
measured. Some of the ones that might apply here are:
-
/var/log/messages on the linux-server with the d-link dfe-530 tx:
[THIS IS THE ERROR-MESSAGE!]
Feb 1 17:25:56 Nethost kernel: NETDEV WATCHDOG: eth0: transmit timed out
Feb 1 17:25:56 Nethost kernel: eth0: Transmit timed out, status ,
PHY status 782d, resetting...
after booting everything
I still get corruption with "I/O Recovery Time" enabled :-(
I don't get corruption with the BIOS "normal" settings (1004D).
I might update my BIOS to the latest BIOS in case it changes any other
settings.
I'm using an Abit KT7 m/board, which uses the same KT133 chipset that I
believe you are
... after about 10 minutes waiting, while adding to this e-mail, the box is
still hung. Hmph... *RESET*
System log shows no "DMA timeout" messages after rebooting, and no errors
from the inevitable FSCK.
--
from: Jonathan
I just installed Urban's most recent patch, and I still get much the same
problems when I reboot from Windows. The main difference appears to be
that there's a few seconds' pause during the via-rhine driver
initialisation (presumably while it tries to find PHY devices), and there
aren't quite so
I have two questions:
1) ISA-PnP detection in the kernel doesn't work properly on my Abit KT7
(the card in question is a SoundBlaster AWE64), but userspace ISA-PnP works
fine...
2) I'm trying to pass options to the SoundBlaster driver using LILO - it's
built into the kernel - but can't figure
Not the case, sorry. An IDE drive is needed. However, it still might be
worthwhile to pass the PCI speed to other drivers ...
But beware, the timing should be a per-bus value.
Indeed - remember the PowerMac G3 (blue white) and the "Yikes" G4 have a
66MHz PCI slot in place of the AGP slot used in
I've seen in recently purchased computers that the very initial
messages, like memory test, are masked by some kind of picture or logo
(example are the HP kayaks). They display a message saying that pressing
ESC or some function key displays the messages. Why not have the same
in this pretty
At 9:10 am + 14/2/2001, David Howells wrote:
How's this for a laugh:
http://www.microsoft.com/WINDOWS2000/hpc/indstand.asp
Can anybody say "Beowulf cluster"? I bet you need a W2K license for every
box you hook up, too.
--
from:
You know XOR is patented (yes, the logical bit operation XOR).
But wasn't that Xerox that had that?
US Patent #4,197,590 held by NuGraphics, Inc.
The patent was for the technique of using XOR for dragging/moving
parts of a graphics image without erasing other parts.
Henning P. Schmiedehausen writes:
But at least I would be happy if there were a printing
engine that is entirely open source, and all the printer vendors could
write a small, closed-source stub that drives their printer over
parallel port, ethernet or USB and gives us all the features, that
1) I know that some of the MAC addresses given by tcpdump are
invalid. Is this a bug? In what?
Nope. The addresses (with mostly zeroes) are like IP addresses with many
zeroes or '255' - they handle concepts like "broadcast" or "me".
Huh? It's a vanilla unicast IP datagram over
On the other hand, they make excellent mice. The mouse wheel and
the new optical mice are truly innovative and Microsoft should be
commended for them.
The wheel was a nifty idea, but I've seen workstations 15 years old with
optical mice. It wasn't MS's idea.
I think their
Perhaps rm -rf . would be faster? Let rm do glob expansion,
without the sort. Care to recreate those 65535 files and try it?
Perhaps, but I think that form is still fairly slow. It takes an
"uncomfortable" amount of time to remove a complex directory structure
using, eg. "rm -rf
At 11:00 pm + 21/2/2001, Dr. Kelsey Hudson wrote:
On Sat, 17 Feb 2001, Augustin Vidovic wrote:
1- GPL code is the opposite of crap
By saying this, you are implying that all pieces of code released under
the GPL are 'good' pieces of code. I can give you several examples of code
where this
I'm annoyed when persons post virus alerts to unrelated lists, but this
is a serious threat. If you're offended, flame away.
Since this worm exploits a BIND vulnerability, it would be better placed on
the BIND mailing list than the kernel one. If it exploited a kernel bug,
then it would be more
Rik, is there any way we could get a /proc entry for this, so that one
could do something like:
I will respond: NO, there is no way. For security reasons this is not a
good idea.
Just out of interest, what information does the OOM score expose that isn't
already available to Joe Random
It would make much sense to make the oom killer
leave not just root processes alone but processes belonging to a UID
lower
than a certain value as well (500). This would be:
1. Easily manageable by the admin. Just let oracle/www and analogous users
have a UID lower than, let's say, 500.
That
The main point is letting malloc fail when the memory cannot be
guaranteed.
If I read various things correctly, malloc() is supposed to fail as you
would expect if /proc/sys/vm/overcommit_memory is 0. This is the case on
my RH 6.2 box, dunno about yours. I can write a simple test program which
[to various people]
No, ulimit does not work. (But it helps a little.)
No, /proc/sys/vm/overcommit_memory does not work.
Entirely correct. ulimit certainly makes it much harder for a single
runaway process to take down important parts of the system - now why
doesn't $(MAJOR_DISTRO_VENDOR) set
Hmm... "if ( freemem < (size_of_mallocing_process / 20) ) fail_to_allocate;"
Seems like a reasonable soft limit - processes which have already got lots
of RAM can probably stand not to have that little bit more and can be
curbed more quickly. Processes with less probably don't deserve to die
General thread comment:
To those who are griping, and obviously rightfully so, Rik has twice
stated on this list that he could use some help with VM auto-balancing.
The responses (visible on this list at least) were rather underwhelming.
I noted no public exchange of ideas.. nada in fact.
Get off
At 6:58 am + 24/3/2001, Rik van Riel wrote:
On Sat, 24 Mar 2001, Jonathan Morton wrote:
Hmm... "if ( freemem < (size_of_mallocing_process / 20) )
fail_to_allocate;"
Seems like a reasonable soft limit - processes which have already got
lots of RAM can probably stand n
I thought of some things which could break it, which I want to try and deal
with before releasing a patch. Specifically, I want to make freepages.min
sacrosanct, so that malloc() *never* tries to use it. This should be
fairly easy to implement - simply subtract freepages.min from the freemem
While my post didn't give an exact formula, I was quite clear on the fact that
the system is allowing the caches to overrun memory and cause oom problems.
I'm more than happy to test patches, and I would even be willing to suggest
some algorithms that might help, but I don't know where to stick
free = atomic_read(&buffermem_pages);
free += atomic_read(&page_cache_size);
free += nr_free_pages();
- free += nr_swap_pages;
+ /* Since getting swap info is expensive, see if our allocation
can happen in physical RAM */
Actually, getting swap info is as
While my post didn't give an exact formula, I was quite clear on the
fact that
the system is allowing the caches to overrun memory and cause oom problems.
Yes. A testcase would be good. It's not happening to everybody nor is
it happening under all loads. (if it were, it'd be long dead)
- the AGE_FACTOR calculation will overflow after the system has
an uptime of just _3_ days
Tsk tsk tsk...
Now if you can make something which preserves the heuristics which
serve us so well on desktop boxes and add something that makes it
also work on your Oracle servers, then I'd be
start your app, wait for malloc to fail, hit enter for the other app and
watch your app be OOM-killed ;)
That would only happen if memory_overcommit was turned on, in which case my
modification would have zero effect anyway (the overcommit test happens
before my code).
Thanks for
I didn't quite understand Martin's comments about "not normalised" -
presumably this is some mathematical argument, but what does this actually
mean?
Not mathematics. It's from physics. Very trivial physics, basic school
indeed.
If you try to calculate some weighting
factors which involve
[ about non-overcommit ]
Nobody feels it's very important because nobody has implemented it.
Enterprises use other systems because they have much better resource
management than Linux -- adding non-overcommit wouldn't help them much.
Desktop users, Linux newbies don't understand
My patch already fixes OOM problems caused by overgrown caches/buffers, by
making sure OOM is not triggered until these buffers have been cannibalised
down to freepages.high. If balancing problems still exist, then they
should be retuned with my patch (or something very like it) in hand, to
The attached patch is against 2.4.1 and incorporates the following:
- More optimistic OOM checking, and slightly improved OOM-kill algorithm,
as per my previous patch.
- Accounting of reserved memory, allowing for...
- Non-overcommittal of memory if sysctl_overcommit_memory < 0, enforced
even
ACK! that last diff got linewrapped somewhere in transit. Try this one...
-
The attached patch is against 2.4.1 and incorporates the following:
- More optimistic OOM checking, and slightly improved OOM-kill algorithm,
as per my previous patch.
- Accounting of reserved memory, allowing
Ugh, something was going screwy. Trying from a different machine.
--
The attached patch is against 2.4.1 and incorporates the following:
- More optimistic OOM checking, and slightly improved OOM-kill algorithm,
as per my previous patch.
- Accounting of reserved memory, allowing for...
-