Thank you all for the information.

With respect to altering the O/S memory management - I really cannot
justify playing with an aspect of the O/S that the CPU has to work
out. At different stages in my working life I have had to learn or
understand memory management, from mainframe O/Ses such as TFP/DB/UG,
to NetWare, to a very ugly M$, and now Linux.

All your contributions have answered my need for an understanding of
the Linux O/S with respect to memory management - and I cannot forget
its origins and evolution from Xenix/Unix.

In return I offer you an article I wrote for Techrepublic.com on M$'s
poor excuse for relying (in my opinion) far too heavily on swap files,
which it handles poorly. It also touches on its horrific direct memory
addressing, but leaves out file caching. Although the intended audience
for the article was technical per se, I was asked to re-write it
several times, particularly the section on the M$ memory that is
directly accessible by the poor old O/S.

I am now so very grateful that I use Linux, for many, many reasons.

Again, thanks to all.
Scott - and good night, 05:18 GMT+10

 Quote
The Good, the Bad and the Poor Overworked I/O Subsystem

Using the hard disk to simulate RAM, a long-standing feature of
mainframe operating systems, certainly has its advantages but will
ultimately degrade performance.

Having an application use virtual RAM means there must be an increase
in disk I/O. The advantage is that the application will run and not
run out of RAM, but there is a balance to be struck.
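(For the Linux users on this list - since that is where I ended up -
here is a minimal sketch of watching that balance, assuming a standard
/proc filesystem; illustration only:

# Parse /proc/meminfo and report how much "memory" is physical RAM
# and how much is disk-backed swap.

def meminfo_kb():
    """Return /proc/meminfo fields as a dict of kilobyte values."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # values are in kB
    return info

m = meminfo_kb()
print("Physical RAM: %d MB (%d MB free)"
      % (m["MemTotal"] // 1024, m["MemFree"] // 1024))
print("File cache  : %d MB" % (m["Cached"] // 1024))
print("Swap on disk: %d MB (%d MB in use)"
      % (m["SwapTotal"] // 1024,
         (m["SwapTotal"] - m["SwapFree"]) // 1024))

If swap usage is climbing while MemFree sits near zero, every new
allocation is paying the disk I/O tax described above.)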

Firstly, an application must perform disk I/O to read its program
files in limited pieces, and the more physical RAM there is, the
greater the amount of information that can (theoretically) be read
from the disk.

Now, IF some of that RAM is actually a page file on the hard disk,
another disk I/O is required to access it.

If the disk is already busy reading application files and we now
increase its load with page-file I/O as well, there must be a point
where the disk and the O/S need to decide which is more important.

The choices are:
1. The disk I/O to read the application program files.
2. The disk I/O to write pages back, simulating the RAM needed to run
the very application just read from disk.

Effectively, as you don't get anything for nothing, the dilemma is:
"Is the disk busier reading the application program files than it is
writing pages to simulate the RAM required to run the very same
program?"
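(On Linux you can put a rough number on that dilemma. A sketch - the
device name "sda" is an assumption, substitute your own disk - that
samples /proc/diskstats and compares sectors read against sectors
written over a five-second window:

import time

def disk_sectors(device="sda"):
    """Return (sectors_read, sectors_written) for one block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                # field 5 = sectors read, field 9 = sectors written
                return int(fields[5]), int(fields[9])
    raise ValueError("device %r not found" % device)

r0, w0 = disk_sectors()
time.sleep(5)
r1, w1 = disk_sectors()
print("sectors read per sec   : %d" % ((r1 - r0) // 5))
print("sectors written per sec: %d" % ((w1 - w0) // 5)))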

The problem we always face is the amount of RAM that an application
can directly access to process instructions - this is a constraint of
the O/S.

Just because you have 4 GIG of RAM does not mean that the O/S can
directly address all 4 GIG. Quite the contrary. This is where our
memory managers come into play. Let's say the O/S can only directly
address the first 640K of RAM. What the memory manager does is load
instructions into that 640K, have them processed, and then throw the
result up into the rest of the RAM. When the result is needed again
for further processing, it is dragged out of upper RAM back down to
conventional RAM, and processing continues.

A memory manager that allows the whole amount of RAM to be used is
just like a juggler, throwing things up and fetching them back when
needed.
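(A toy sketch of that juggling act - a hypothetical model for
illustration, not how any real memory manager is implemented: a small
directly addressable window backed by a larger store, with items
thrown up and dragged back down on demand.

from collections import OrderedDict

class Juggler:
    """A tiny window of "conventional" RAM in front of "upper" RAM."""

    def __init__(self, window_slots=2):
        self.window = OrderedDict()  # the directly addressable 640K
        self.upper = {}              # the rest of the RAM
        self.slots = window_slots

    def access(self, key):
        if key in self.window:           # already down low - use it
            self.window.move_to_end(key)
            return self.window[key]
        if len(self.window) >= self.slots:
            old, val = self.window.popitem(last=False)
            self.upper[old] = val        # throw the oldest item up
            print("  threw %r up to upper RAM" % old)
        value = self.upper.pop(key, "data:" + key)
        self.window[key] = value         # drag what we need back down
        print("  dragged %r down to conventional RAM" % key)
        return value

j = Juggler()
for k in ["a", "b", "c", "a"]:
    print("access %r" % k)
    j.access(k))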

With the added facet of the page file, once physical RAM becomes full
a disk I/O is required.

At some stage the O/S needs to make a decision: is it better to devote
more disk I/O time to reading instructions, or to utilise that time
writing to the page file? This is all managed by the O/S.
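(You can actually watch the Linux kernel make this decision.
/proc/vmstat exposes counters for ordinary disk paging (pgpgin and
pgpgout) and for the swap file specifically (pswpin and pswpout); a
small sketch that samples them and prints the deltas:

import time

KEYS = ("pgpgin", "pgpgout", "pswpin", "pswpout")

def vmstat_counters():
    """Return the paging counters from /proc/vmstat."""
    stats = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in KEYS:
                stats[key] = int(value)
    return stats

before = vmstat_counters()
time.sleep(5)
after = vmstat_counters()
for key in KEYS:
    print("%-8s %8d in 5s" % (key, after[key] - before[key]))

If pswpout is climbing while pgpgin is high, the disk is being asked
to do both jobs at once - exactly the contention described above.)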

Personally, IF the PC has the maximum amount of physical RAM installed
AND still requires disk I/O to simulate more RAM, then we really need
to think about the fundamental operation of the O/S: how much RAM can
it directly address, how much RAM does it need to juggle in, and does
it require a page file on top of that? At that point it is time to
re-write the O/S - its memory management, the amount of RAM it can
directly address for processing, and the amount of physical RAM used
by the memory manager.

Page file addressing should be a last resort for the O/S. Adding to an
overworked disk I/O subsystem will ultimately slow things down;
however, the application will never fall over and you will never see
the old "out of memory" error, which is the only advantage of such an
arrangement.

If you have multiple hard disks, the best thing you can do is redirect
the temp variable (another story) and place the page file on a
different disk from the one containing the O/S. This will relieve the
overworked I/O of the single disk, but don't get excited yet - we then
run into the bottleneck that is the I/O bus speed, which has nothing
to do with the speed of the processor or the speed of the RAM. Thank
GOD for a 64-bit bus.
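(Before moving anything, check where your swap currently lives - if it
shares a spindle with the O/S, the advice above applies. A sketch
reading /proc/swaps on Linux:

with open("/proc/swaps") as f:
    print(next(f).rstrip())          # header line
    for line in f:
        name, swap_type, size_kb, used_kb, priority = line.split()
        print("%-30s %-10s %6d MB, %6d MB used, priority %s"
              % (name, swap_type,
                 int(size_kb) // 1024, int(used_kb) // 1024,
                 priority)))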

Next time you purchase a PC, compare the bus speeds, as this will
ultimately constrain the total amount of I/O, whether it comes from
the disk array, the processor or the RAM. This is of particular
importance, together with the speed of the disk I/O, when we start to
employ a page file to "speed up" the PC.

Unquote.



Randall R Schulz wrote:
> On Monday 07 May 2007 18:16, Carlos E. R. wrote:
>   
>> The Monday 2007-05-07 at 07:25 +1000, Registration Account wrote:
>>
>> ...
>>
>>     
>>> For example I have 2 GIG of RAM currently and am thinking of
>>> changing it to 4 GIG. I understand that the kernel can use more
>>> file cacheing, but that is what I do not want to know. With the
>>> superior way the Linux Kernel  manages Memory, if we remove the
>>> increased file caching ability will the Kernel  be able to utilise
>>> the extra memory  registers for processing.
>>>       
>> I think you got it wrong... if there is more memory, programs will be
>> able to use more memory, /if/ they request it. All unused memory will
>> simply end up being used as cache.
>>
>> If currently, with 2G, you see no swap used, increasing the ram will
>> not give more memory to programs.
>>     
>
> This is true, as far as it goes. However, the kernel makes good use of 
> physical memory pages not currently needed by applications. It uses 
> them to cache disk contents and reduce the amount of physical disk I/O 
> required to satisfy any given set of file system requests.
>
> So even if your application mix never needs more than, say, 1GB, having 
> 2GB or 4GB (or any larger amount, as long as your hardware is such that 
> it's actually accessible, which depends in large part on the CPU you're 
> using--modern systems can all typically access at least 2 or 3GB), 
> having more than that much physical RAM can still improve your system's 
> overall throughput.
>
>
>   
>> --
>> Cheers,
>>        Carlos E. R.
>>     
>
>
> Randall Schulz
>   
