On Sun, Jun 16, 2019 at 9:06 PM David Wright <deb...@lionunicorn.co.uk>
wrote:

> On Sun 16 Jun 2019 at 14:17:21 (-0500), Richard Owlett wrote:
>
> > > It's rather easy to work around this problem in one of two ways (at
> least):
> >
> > Ways on order of {# users}**N { N < world_population} ;/
>
> Eh?
>

He's claiming that his needs are the same as the rest of humanity's, to the
n-th power.
....

> > I suspect
> > >    you won't even need to bother, because you'll be overwriting it
> shortly.
> > >    Does  top  show much use of swap anyway?)
> >
> > Not a parameter of my experiment's protocol.
>
> I don't care. My point is that any reasonably endowed modern PC is
> unlikely to do any swapping during your "installation/result
> experiment" (whatever terminology you want to call it) as they have
> so much memory. My old 500MB desktop doesn't, nor did its 384MB
> predecessor (used from potato through squeeze).
>
> > As I do not "know" how much swap space I require, I provide swap space
> > based on conservative estimates of _typical_ requirements. That
> > logically leads to my preference for a SINGLE large swap vs multiple
> > small swap areas. *YMMV* !!!
>

I'll pass David on the left here ;-)
Knoppix proved years ago that you could run the whole damn thing out of RAM,
back when 512MB was big.
In datacenters in recent years, if a server is swapping, a problem ticket
is opened and an alarm raised. Even though the OS can handle it easily,
it's still a negative indicator.
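For what it's worth, here's one quick way to check for that kind of swap
activity on a Linux box. This is just a sketch: it reads the standard
pswpout counter (pages swapped out since boot) from /proc/vmstat and sees
whether it moved over a short interval.

```shell
#!/bin/sh
# Sketch: detect active swap-out on Linux via /proc/vmstat.
# pswpout counts pages swapped out since boot; if it grows, we're swapping.
before_out=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
sleep 2
after_out=$(awk '/^pswpout/ {print $2}' /proc/vmstat)

if [ "$after_out" -gt "$before_out" ]; then
    echo "swapping out: raise an alarm"
else
    echo "no swap-out activity in the last 2 seconds"
fi
```

A monitoring agent would sample the same counter on a schedule instead of
sleeping inline;  free -h  or  vmstat 1  give the same picture interactively.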
I just took possession of a free used Dell
PowerEdge R610 for home, retired after 5 years' hard time in chilled rooms.
It has 96GB RAM. I could run NASA out of that much RAM ;-)


>
> Cheers,
> David.
>
