Single process needing a lot of memory

2013-12-13 Thread Zé Loff
Hi all

First of all, sorry for the kind of newbie question.
I'm running some memory-heavy statistical analyses using R, which
require more memory than what's physically available. I.e. the machine
(an X201, which is running -current amd64) has 4Gb of physical mem, but R
needs at least 6Gb. If I understand correctly, this is what virtual mem
is there for, but -- and here's the newbie part -- I'm not quite sure
how to make it work.

Steps taken so far:
1. Raise datasize-cur and datasize-max to 3G.

Everything runs OK, but R complains about not being able to allocate
enough mem (expected).

2. datasize-cur=3G + datasize-max=3G + vmemoryuse=4G (swap partition has
5G).

Everything runs OK, but R complains about not being able to allocate
enough mem. systat swap shows absolutely no change.

3. vmemoryuse=3G + datasize-max=infinity
Admittedly not knowing what I was doing. Big time SNAFU.
Everything slows to a crawl when memory usage goes past the available
phys mem (about 3.6G). And by a crawl I mean unusable if using X,
requiring great patience if on virtual consoles.
top shows R using over 1000% (not a typo) CPU although the CPU summary
lines say they're all idling. "state" is "run/3", "wait" column says
"pagerse". Swap usage increases, though. R never gets back to a usable
state.
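For reference, the knobs being tuned in the steps above live in login.conf(5) on OpenBSD; a minimal sketch of a login class that raises them might look like the following (the class name and values are only illustrative, not a recommendation):

```
# /etc/login.conf -- illustrative: a "bigmem" class with room for R
bigmem:\
	:datasize-max=infinity:\
	:datasize-cur=6144M:\
	:vmemoryuse=infinity:\
	:tc=default:
```

(Assign the user to the class, e.g. with usermod -L, rebuild the login.conf database with cap_mkdb if you use one, and log in again for the limits to apply.)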

Clue bat required. Is there anything else that needs to be done to
enable R to (properly) use some of the virtual memory?

Thanks in advance
Zé

--

OpenBSD 5.4-current (GENERIC.MP) #185: Thu Dec  5 17:02:54 MST 2013
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 4062691328 (3874MB)
avail mem = 3946405888 (3763MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.6 @ 0xe0010 (78 entries)
bios0: vendor LENOVO version "6QET69WW (1.39 )" date 04/26/2012
bios0: LENOVO 3680WE9
acpi0 at bios0: rev 2
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP SSDT ECDT APIC MCFG HPET ASF! BOOT SSDT TCPA SSDT SSDT 
SSDT
acpi0: wakeup devices LID_(S3) SLPB(S3) IGBE(S4) EXP1(S4) EXP2(S4) EXP3(S4) 
EXP4(S4) EXP5(S4) EHC1(S3) EHC2(S3) HDEF(S4)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpiec0 at acpi0
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz, 2660.46 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,POPCNT,AES,NXE,LONG,LAHF,PERF,ITSC
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
cpu0: apic clock running at 133MHz
cpu0: mwait min=64, max=64, C-substates=0.2.1.1.0, IBE
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz, 2660.02 MHz
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,POPCNT,AES,NXE,LONG,LAHF,PERF,ITSC
cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 1, core 0, package 0
cpu2 at mainbus0: apid 4 (application processor)
cpu2: Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz, 2660.02 MHz
cpu2: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,POPCNT,AES,NXE,LONG,LAHF,PERF,ITSC
cpu2: 256KB 64b/line 8-way L2 cache
cpu2: smt 0, core 2, package 0
cpu3 at mainbus0: apid 5 (application processor)
cpu3: Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz, 2660.02 MHz
cpu3: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,POPCNT,AES,NXE,LONG,LAHF,PERF,ITSC
cpu3: 256KB 64b/line 8-way L2 cache
cpu3: smt 1, core 2, package 0
ioapic0 at mainbus0: apid 1 pa 0xfec00000, version 20, 24 pins
ioapic0: misconfigured as apic 2, remapped to apid 1
acpimcfg0 at acpi0 addr 0xe0000000, bus 0-255
acpihpet0 at acpi0: 14318179 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus -1 (PEG_)
acpiprt2 at acpi0: bus 13 (EXP1)
acpiprt3 at acpi0: bus -1 (EXP2)
acpiprt4 at acpi0: bus -1 (EXP3)
acpiprt5 at acpi0: bus 5 (EXP4)
acpiprt6 at acpi0: bus 2 (EXP5)
acpicpu0 at acpi0: C3, C1, PSS
acpicpu1 at acpi0: C3, C1, PSS
acpicpu2 at acpi0: C3, C1, PSS
acpicpu3 at acpi0: C3, C1, PSS
acpipwrres0 at acpi0: PUBS: resource for EHC1, EHC2
acpitz0 at acpi0: critical temperature is 100 degC
acpibtn0 at acpi0: LID_
acpibtn1 at acpi0: SLPB
acpibat0 at acpi0: BAT0 not present
acpibat1 at acpi0: BAT1 not present
acpiac0 at acpi0: AC unit online
acpithinkpad0 at acpi0
acpidock0 at acpi0: GDCK docked (15)
cpu0: Enhanced SpeedStep 2660 MHz: speeds: 2400, 2399, 2266, 2133, 1999, 1866, 
1733, 1599, 1466, 1333, 1199 MHz
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel Core Host" rev 0x0

Re: Single process needing a lot of memory

2013-12-13 Thread Shawn K. Quinn
On Fri, Dec 13, 2013, at 05:36 AM, Zé Loff wrote:
> Hi all
> 
> First of all, sorry for the kind of newbie question.
> I'm running some memory-heavy statistical analyses using R, which
> require more memory than what's physically available. I.e. the machine
> (an X201, which is running -current amd64) has 4Gb of physical mem, but R
> needs at least 6Gb. If I understand correctly, this is what virtual mem
> is there for, but -- and here's the newbie part -- I'm not quite sure
> how to make it work.
[...]
> 3. vmemoryuse=3G + datasize-max=infinity
> Admittedly not knowing what I was doing. Big time SNAFU.
> Everything slows to a crawl when memory usage goes past the available
> phys mem (about 3.6G). And by a crawl I mean unusable if using X,
> requiring great patience if on virtual consoles.
> top shows R using over 1000% (not a typo) CPU although the CPU summary
> lines say they're all idling. "state" is "run/3", "wait" column says
> "pagerse". Swap usage increases, though. R never gets back to a usable
> state.
> 
> Clue bat required. Is there anything else that needs to be done to
> enable R to (properly) use some of the virtual memory?

I think R is using virtual memory as best it can, and I seriously doubt
you will get anything resembling satisfactory performance without
upgrading the RAM (memory) to 8Gb.

Basic computing terminology: "virtual (something X)" means "(something
X) that isn't really there." "Virtual memory" isn't really RAM (memory),
it's disk space. And you're going to get the performance of disk space,
which is orders of magnitude slower than RAM.

So: 1) segment this problem such that R never needs more than about 3Gb
of RAM in one run if possible, 2) upgrade the RAM, or 3) give R a very
long time to complete the task at hand and back up your hard disk
regularly because it will get a workout.

-- 
  Shawn K. Quinn
  skqu...@rushpost.com



Re: Single process needing a lot of memory

2013-12-13 Thread Zé Loff
On Fri, Dec 13, 2013 at 07:16:06AM -0600, Shawn K. Quinn wrote:
> On Fri, Dec 13, 2013, at 05:36 AM, Zé Loff wrote:
> > Hi all
> > 
> > First of all, sorry for the kind of newbie question.
> > I'm running some memory-heavy statistical analyses using R, which
> > require more memory than what's physically available. I.e. the machine
> > (an X201, which is running -current amd64) has 4Gb of physical mem, but R
> > needs at least 6Gb. If I understand correctly, this is what virtual mem
> > is there for, but -- and here's the newbie part -- I'm not quite sure
> > how to make it work.
> [...]
> > 3. vmemoryuse=3G + datasize-max=infinity
> > Admittedly not knowing what I was doing. Big time SNAFU.
> > Everything slows to a crawl when memory usage goes past the available
> > phys mem (about 3.6G). And by a crawl I mean unusable if using X,
> > requiring great patience if on virtual consoles.
> > top shows R using over 1000% (not a typo) CPU although the CPU summary
> > lines say they're all idling. "state" is "run/3", "wait" column says
> > "pagerse". Swap usage increases, though. R never gets back to a usable
> > state.
> > 
> > Clue bat required. Is there anything else that needs to be done to
> > enable R to (properly) use some of the virtual memory?
> 
> I think R is using virtual memory as best it can, and I seriously doubt
> you will get anything resembling satisfactory performance without
> upgrading the RAM (memory) to 8Gb.
> 
> Basic computing terminology: "virtual (something X)" means "(something
> X) that isn't really there." "Virtual memory" isn't really RAM (memory),
> it's disk space. And you're going to get the performance of disk space,
> which is orders of magnitude slower than RAM.
> 
> So: 1) segment this problem such that R never needs more than about 3Gb
> of RAM in one run if possible, 2) upgrade the RAM, or 3) give R a very
> long time to complete the task at hand and back up your hard disk
> regularly because it will get a workout.

So it's normal for a system to get slowed down to the point of losing
network connections and freezing X every time a process uses swap? I
find that hard to believe...

-- 



Re: Single process needing a lot of memory

2013-12-13 Thread Marc Espie
On Fri, Dec 13, 2013 at 01:24:41PM +0000, Zé Loff wrote:
> So it's normal for a system to get slowed down to the point of losing
> network connections and freezing X every time a process uses swap? I
> find that hard to believe...


Not *every time*, but yes, that does happen.

Some network drivers are notably flaky when timeouts occur; I've managed
to lose iwn and re due to delays related to excessive swapping.



Re: Single process needing a lot of memory

2013-12-13 Thread Peter Hessler
On 2013 Dec 13 (Fri) at 13:24:41 +0000 (+0000), Zé Loff wrote:
:On Fri, Dec 13, 2013 at 07:16:06AM -0600, Shawn K. Quinn wrote:
:> On Fri, Dec 13, 2013, at 05:36 AM, Zé Loff wrote:
:> > Hi all
:> > 
:> > First of all, sorry for the kind of newbie question.
:> > I'm running some memory-heavy statistical analyses using R, which
:> > require more memory than what's physically available. I.e. the machine
:> > (an X201, which is running -current amd64) has 4Gb of physical mem, but R
:> > needs at least 6Gb. If I understand correctly, this is what virtual mem
:> > is there for, but -- and here's the newbie part -- I'm not quite sure
:> > how to make it work.
:> [...]
:> > 3. vmemoryuse=3G + datasize-max=infinity
:> > Admittedly not knowing what I was doing. Big time SNAFU.
:> > Everything slows to a crawl when memory usage goes past the available
:> > phys mem (about 3.6G). And by a crawl I mean unusable if using X,
:> > requiring great patience if on virtual consoles.
:> > top shows R using over 1000% (not a typo) CPU although the CPU summary
:> > lines say they're all idling. "state" is "run/3", "wait" column says
:> > "pagerse". Swap usage increases, though. R never gets back to a usable
:> > state.
:> > 
:> > Clue bat required. Is there anything else that needs to be done to
:> > enable R to (properly) use some of the virtual memory?
:> 
:> I think R is using virtual memory as best it can, and I seriously doubt
:> you will get anything resembling satisfactory performance without
:> upgrading the RAM (memory) to 8Gb.
:> 
:> Basic computing terminology: "virtual (something X)" means "(something
:> X) that isn't really there." "Virtual memory" isn't really RAM (memory),
:> it's disk space. And you're going to get the performance of disk space,
:> which is orders of magnitude slower than RAM.
:> 
:> So: 1) segment this problem such that R never needs more than about 3Gb
:> of RAM in one run if possible, 2) upgrade the RAM, or 3) give R a very
:> long time to complete the task at hand and back up your hard disk
:> regularly because it will get a workout.
:
:So it's normal for a system to get slowed down to the point of losing
:network connections and freezing X every time a process uses swap? I
:find that hard to believe...
:
:-- 
:

Using swap is a bug.  Buy more ram.


-- 
Just because everything is different doesn't mean anything has
changed.
-- Irene Peter



Re: Single process needing a lot of memory

2013-12-13 Thread Marc Espie
On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
> Using swap is a bug.  Buy more ram.
  ^^^

I run into bugs all the time...

Memory: Real: 2785M/3694M act/tot Free: 4217M Cache: 550M Swap: 900K/8384M



Re: Single process needing a lot of memory

2013-12-13 Thread Zé Loff
On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
> On 2013 Dec 13 (Fri) at 13:24:41 +0000 (+0000), Zé Loff wrote:
> :On Fri, Dec 13, 2013 at 07:16:06AM -0600, Shawn K. Quinn wrote:
> :>
> :> I think R is using virtual memory as best it can, and I seriously doubt
> :> you will get anything resembling satisfactory performance without
> :> upgrading the RAM (memory) to 8Gb.

[snip]

> :> So: 1) segment this problem such that R never needs more than about 3Gb
> :> of RAM in one run if possible, 2) upgrade the RAM, or 3) give R a very
> :> long time to complete the task at hand and back up your hard disk
> :> regularly because it will get a workout.

[snip]

> Using swap is a bug.  Buy more ram.

Thanks for your answers (and Marc's too, BTW). I never meant swapping to
be more than a workaround, I wasn't expecting good performance. But I
never expected it to render the machine virtually useless like it does, 
hence the first post. Off to the shop, then.

-- 



Re: Single process needing a lot of memory

2013-12-13 Thread Nick Holland

On 12/13/2013 09:10 AM, Zé Loff wrote:
>On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
>>On 2013 Dec 13 (Fri) at 13:24:41 +0000 (+0000), Zé Loff wrote:
>>:On Fri, Dec 13, 2013 at 07:16:06AM -0600, Shawn K. Quinn wrote:
>>:>
>>:> I think R is using virtual memory as best it can, and I seriously doubt
>>:> you will get anything resembling satisfactory performance without
>>:> upgrading the RAM (memory) to 8Gb.
>
>[snip]
>
>>:> So: 1) segment this problem such that R never needs more than about 3Gb
>>:> of RAM in one run if possible, 2) upgrade the RAM, or 3) give R a very
>>:> long time to complete the task at hand and back up your hard disk
>>:> regularly because it will get a workout.
>
>[snip]
>
>>Using swap is a bug.  Buy more ram.
>
>Thanks for your answers (and Marc's too, BTW). I never meant swapping to
>be more than a workaround, I wasn't expecting good performance. But I
>never expected it to render the machine virtually useless like it does,
>hence the first post. Off to the shop, then.



Swap is intended for things that are not currently being used much to be
pushed out of the way "for now" until they are needed again, presumably
much later (relatively speaking).


It works great (relatively) when you have lots of stuff loaded and
running but are using only little parts at a time: you can dump a
big chunk of unused RAM to disk, and bring in a big chunk of now-desired
data from disk into RAM.


That's not what you are doing.

You have ONE application which is using huge amounts of data that it is
thrashing all over.  Odds are, if it were able to chunk the data up so it
could work on one little part, then another little part, then another
little part, 1) it would probably work great for you, and 2) it would
probably just do this anyway, keeping most of the data on disk, rather
than sucking it all into RAM.


If your app wants one "number" off something that is swapped out, it has 
to bring in the whole swapped out page just to read or write that one value.


You are running into the fact that memory is accessed on the order of 
nanoseconds, and disk is accessed on the order of milliseconds, TIMES 
the fact that any one location in RAM can be accessed almost as quickly 
as any other location in RAM, but to get data swapped to disk requires a 
painfully slow swap process of (relatively) huge blocks of data.


You could be looking at a million-to-one performance ratio here.
Something that could run in a minute in RAM might run for years in swap
(that messes up your upgrade plans :).
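Nick's orders-of-magnitude claim can be sanity-checked with a quick sketch; the latency figures below are rough assumptions for illustration, not measurements of any particular machine:

```python
# Assumed ballpark latencies (illustrative only; real values vary by hardware).
ram_ns = 100               # ~100 ns for a RAM access
disk_ns = 10_000_000       # ~10 ms for a random disk seek, expressed in ns

# Per-access slowdown when a "memory" access actually goes to disk.
ratio = disk_ns // ram_ns
print(ratio)               # 100000 -- five orders of magnitude per access
```

And the ratio applies per access: a workload that constantly touches swapped-out pages pays it on every access, which is why "a minute in RAM" can become effectively unbounded in swap.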


Your application is a textbook example of "When swap fails".  OpenBSD
might be able to manage its swap use better, but nothing will save you
from what you are trying to do.  (Well... ok, long, long ago I've
seen some mainframes which, after you hit their physical RAM limits
(16MB, iirc), swapped to... huge (for the day) RAM disks.  But even
then, the act of swapping big pages of data out to get access to
individual values of data would be several orders of magnitude slower
than a direct RAM access.)


Nick.



Re: Single process needing a lot of memory

2013-12-13 Thread Marc Espie
Nevertheless, things ought to work slightly better.
I still consider network driver failing due to swap to be
a bug in the driver. It should lock down memory if it's
necessary. Or there is something in the bufcache swap routines
or some disk driver that locks other users for inordinately long periods, 
especially wrt interrupts.

A machine that doesn't run out of swap should work. Not be very responsive,
that's fine. Having a network driver that downright FAILS because of that is
a bug.



Re: Single process needing a lot of memory

2013-12-13 Thread Zé Loff
On Fri, Dec 13, 2013 at 04:55:11PM +0100, Marc Espie wrote:
> Nevertheless, things ought to work slightly better.
> I still consider network driver failing due to swap to be
> a bug in the driver. It should lock down memory if it's
> necessary. Or there is something in the bufcache swap routines
> or some disk driver that locks other users for inordinately long periods, 
> especially wrt interrupts.
> 
> A machine that doesn't run out of swap should work. Not be very responsive,
> that's fine. Having a network driver that downright FAILS because of that is
> a bug.

I think it's definitely the driver in this case. After posting I saw the
firmware-crapping-out messages on the console.

-- 



Re: Single process needing a lot of memory

2013-12-13 Thread Zé Loff
On Fri, Dec 13, 2013 at 10:39:35AM -0500, Nick Holland wrote:
> On 12/13/2013 09:10 AM, Zé Loff wrote:
> >On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
> >>On 2013 Dec 13 (Fri) at 13:24:41 +0000 (+0000), Zé Loff wrote:
> >>:On Fri, Dec 13, 2013 at 07:16:06AM -0600, Shawn K. Quinn wrote:
> >>:>
> >>:> I think R is using virtual memory as best it can, and I seriously doubt
> >>:> you will get anything resembling satisfactory performance without
> >>:> upgrading the RAM (memory) to 8Gb.
> >
> >[snip]
> >
> >>:> So: 1) segment this problem such that R never needs more than about 3Gb
> >>:> of RAM in one run if possible, 2) upgrade the RAM, or 3) give R a very
> >>:> long time to complete the task at hand and back up your hard disk
> >>:> regularly because it will get a workout.
> >
> >[snip]
> >
> >>Using swap is a bug.  Buy more ram.
> >
> >Thanks for your answers (and Marc's too, BTW). I never meant swapping to
> >be more than a workaround, I wasn't expecting good performance. But I
> >never expected it to render the machine virtually useless like it does,
> >hence the first post. Off to the shop, then.
> >
> 
> swap is intended for things that are not currently being used much to be
> pushed out of the way "for now" until they are needed again, presumably much
> later (relatively speaking)
> 
> It works great (relatively) when you have lots of stuff loaded and running
> but are using only little parts at a time, when you can dump a big chunk of
> unused RAM to disk, and bring in a big chunk of now desired data from disk
> into RAM.
> 
> That's not what you are doing.
> 
> You have ONE application which is using huge amounts of data, that it is
> thrashing all over.  Odds are, if it was able to chunk the data up so it
> could work on one little part, then another little part, then another little
> part, 1) it would probably work great for you. 2) it would probably just do
> this, keeping most of the data on disk, rather than sucking it all into RAM.
> 
> If your app wants one "number" off something that is swapped out, it has to
> bring in the whole swapped out page just to read or write that one value.
> 
> You are running into the fact that memory is accessed on the order of
> nanoseconds, and disk is accessed on the order of milliseconds, TIMES the
> fact that any one location in RAM can be accessed almost as quickly as any
> other location in RAM, but to get data swapped to disk requires a painfully
> slow swap process of (relatively) huge blocks of data.
> 
> you could be looking at million-to-one performance ratio here. Something
> that could run in a minute in RAM might run for years in swap (that messes
> up your upgrade plans :).
> 
> Your application is a textbook example of "When swap fails".  OpenBSD might
> be able to manage its swap use better, but nothing will save you from what
> you are trying to do.  (well... ok, long, long ago... I've seen some
> mainframes which, after you hit their physical RAM limits (16MB, iirc),
> swapped to ... huge (for the day) RAM disks.  But even then, the act of
> swapping big pages of data out to get access to individual values of data
> would be several orders of magnitude slower than a direct RAM access),
> 
> Nick.
> 

Thanks for your time and for the detailed explanation (as you always do,
which is great). I am well aware that swapping is crap (and why),
especially in this particular case. I just didn't expect it to go so
bad.

The OP was just because I wasn't sure if I was doing things right. I saw
the system practically freeze in a weird way (HD activity but frozen
display, switching virtual consoles but no typing, etc.), which was
unexpected and made me wonder what the hell was going on. I've also run
the same analysis on a Mac, also with 4Gb of RAM, running OS X (i.e.
swapping), and it worked out nicely, so I thought it could be done here
as well (and before the flames come out, *I know* that's comparing
apples and oranges (no pun intended), and that there's little more than
similar hardware in common between the two systems).

I'll just get the right tool for the job: some more RAM. It'll make the
RAM usage % indicator on my dwm desktop even more fun to look at.

Cheers
Zé

-- 



Re: Single process needing a lot of memory

2013-12-13 Thread Ted Unangst
On Fri, Dec 13, 2013 at 14:53, Marc Espie wrote:
> On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
>> Using swap is a bug.  Buy more ram.
> ^^^
> 
> I run into bugs all the time...
> 
> Memory: Real: 2785M/3694M act/tot Free: 4217M Cache: 550M Swap: 900K/8384M

900k? That's only a tiny bug...



Re: Single process needing a lot of memory

2013-12-13 Thread Marc Espie
On Fri, Dec 13, 2013 at 02:10:49PM -0500, Ted Unangst wrote:
> On Fri, Dec 13, 2013 at 14:53, Marc Espie wrote:
> > On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
> >> Using swap is a bug.  Buy more ram.
> > ^^^
> > 
> > I run into bugs all the time...
> > 
> > Memory: Real: 2785M/3694M act/tot Free: 4217M Cache: 550M Swap: 900K/8384M
> 
> 900k? That's only a tiny bug...

I have bigger ones from time to time. I come from a time when it was normal
to use swap (SunOS + Xwindows *before* shared objects...), and when it was
"just" a normal slowdown. Heck, I remember running a dual-boot OpenBSD/linux
box with 32MB of physical memory, where OpenBSD outperformed linux by
a huge margin in terms of responsiveness when it started hitting swap.

Did we get so complacent with memory that it's no longer the case?...



Re: Single process needing a lot of memory

2013-12-13 Thread Otto Moerbeek
On Fri, Dec 13, 2013 at 02:10:49PM -0500, Ted Unangst wrote:

> On Fri, Dec 13, 2013 at 14:53, Marc Espie wrote:
> > On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
> >> Using swap is a bug.  Buy more ram.
> > ^^^
> > 
> > I run into bugs all the time...
> > 
> > Memory: Real: 2785M/3694M act/tot Free: 4217M Cache: 550M Swap: 900K/8384M
> 
> 900k? That's only a tiny bug...

But why is it a bug? This machine has been swapping at some point in
time, and then the pages in swap were not accessed, so they were not
swapped in. 

-Otto



Re: Single process needing a lot of memory

2013-12-13 Thread patrick keshishian
On 12/13/13, Marc Espie  wrote:
> On Fri, Dec 13, 2013 at 02:10:49PM -0500, Ted Unangst wrote:
>> On Fri, Dec 13, 2013 at 14:53, Marc Espie wrote:
>> > On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
>> >> Using swap is a bug.  Buy more ram.
>> > ^^^
>> >
>> > I run into bugs all the time...
>> >
>> > Memory: Real: 2785M/3694M act/tot Free: 4217M Cache: 550M Swap:
>> > 900K/8384M
>>
>> 900k? That's only a tiny bug...
>
> I have bigger ones from time to time. I come from a time when it was normal
> to use swap (SunOS + Xwindows *before* shared objects...), and when it was
> "just" a normal slowdown. Heck, I remember running a dual-boot
> OpenBSD/linux
> box with 32MB of physical memory, where OpenBSD outperformed linux by
> a huge margin in terms of responsiveness when it started hitting swap.

Since we are all ruminating... I remember days I ran
OS/2 on my 486dx2 with 4MB of RAM. It ran Windows 3.x apps
(e.g., MS Word 2.0) so much smoother than Windows on the
same hardware. I should've kept that computer.

--patrick

> Did we get so complacent with memory that it's no longer the case ?...



Re: Single process needing a lot of memory

2013-12-13 Thread Jeff Simmons
On Friday, December 13, 2013 11:47:06 am you wrote:
> Since we are all ruminating... I remember days I ran
> OS/2 on my 486dx2 with 4MB of RAM. It ran Windows 3.x apps
> (e.g., MS Word 2.0) so much smoother than Windows on the
> same hardware. I should've kept that computer.

"Nobody will ever need more than 640k RAM!" -- Bill Gates, 1981

-- 
Jeff Simmons   jsimm...@goblin.punk.net
Simmons Consulting - Network Engineering, Administration, Security
"You guys, I don't hear any noise.  Are you sure you're doing it right?"
--  My Life With The Thrill Kill Kult



Re: Single process needing a lot of memory

2013-12-13 Thread Marc Espie
On Fri, Dec 13, 2013 at 08:18:55PM +0100, Otto Moerbeek wrote:
> On Fri, Dec 13, 2013 at 02:10:49PM -0500, Ted Unangst wrote:
> 
> > On Fri, Dec 13, 2013 at 14:53, Marc Espie wrote:
> > > On Fri, Dec 13, 2013 at 02:44:26PM +0100, Peter Hessler wrote:
> > >> Using swap is a bug.  Buy more ram.
> > > ^^^
> > > 
> > > I run into bugs all the time...
> > > 
> > > Memory: Real: 2785M/3694M act/tot Free: 4217M Cache: 550M Swap: 900K/8384M
> > 
> > 900k? That's only a tiny bug...
> 
> But why is it a bug? This machine has been swapping at some point in
> time, and then the pages in swap were not accessed, so they were not
> swapped in. 

Oh, it's just Peter making an ass of himself :p and saying "using swap is
a bug". Well, I'm consistently running into bugs.



Re: Single process needing a lot of memory

2013-12-13 Thread Ted Unangst
On Fri, Dec 13, 2013 at 12:33, Jeff Simmons wrote:
> "Nobody will ever need more than 640k RAM!" -- Bill Gates, 1981
> 

I realize this is often quoted in jest, but I've taken to setting the
record straight because I think the truth is more interesting than the
lie. People who don't know the real history are doomed to repeat it
without even realizing it.

The 8088 CPU in the original PC, which was designed and built by IBM
before MS was involved, had a 20-bit physical address space. That's
one megabyte. So the most RAM the PC ever could have supported was
1MB, not so very much more than 640KB. But then out of that 1MB you
have to carve out some space for things like the BIOS and the video card
(and sound card, and network, and ISA whatever). So the engineers at
IBM said that the top 384K of the address space would be wired up to
peripherals instead of RAM, leaving 640K. It's a hardware limitation,
not one of software. OpenBSD doesn't use that 384K either.
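Ted's address-space arithmetic, spelled out (nothing here beyond the numbers already in the paragraph above):

```python
# The 8088's 20-bit physical address space.
total_bytes = 1 << 20            # 2**20 = 1,048,576 bytes = 1 MB
reserved = 384 * 1024            # top 384 KB wired to BIOS, video, peripherals
usable = total_bytes - reserved  # what's left addressable as RAM

print(total_bytes)               # 1048576
print(usable)                    # 655360, i.e. the famous 640 KB
```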

And it's not a limitation that only happened once. If you stick 4GB of
RAM into your PC and boot OpenBSD i386, you'll see that you only get
about 3GB. Basically the same thing. Space has been reserved for
peripherals, so you don't get to use the RAM in that space. If you
boot amd64, you'll get to use it because the memory is remapped higher
up, above 4GB. (And if you bought an 80386 and booted 32-bit Windows,
you got to use the memory above 640K too).

Nobody ever proclaims "3GB of RAM will be enough for everybody!"
-- random dude at Intel, but that's exactly what happened. The same
"mistake" was repeated. And then came the various workarounds like
PAE, just like there were workarounds like expanded memory in the DOS
days. For that matter, nobody ever says "80 bytes of memory will be
enough for everybody!" -- John Mauchly (ENIAC)

There's a lesson in there about foreseeing future requirements, but
there's also a lesson that should be learned about real world products
being subject to real world constraints. You go to market with the CPU
architecture you have, not the CPU architecture you want. I'm reminded
of Bjarne Stroustrup's comment about there being languages people like
and languages people use.

Sorry to spoil the fun.



Re: Single process needing a lot of memory

2013-12-13 Thread Jeff Simmons
On Friday, December 13, 2013 01:23:15 pm Ted Unangst wrote:
> On Fri, Dec 13, 2013 at 12:33, Jeff Simmons wrote:
> > "Nobody will ever need more than 640k RAM!" -- Bill Gates, 1981
> 
> I realize this is often quoted in jest, but I've taken to setting the
> record straight because I think the truth is more interesting than the
> lie. People who don't know the real history are doomed to repeat it
> without even realizing it.
> 
> The 8088 CPU in the original PC, which was designed and built by IBM
> before MS was involved, had a 20 bit physical address space. That's
> one megabyte. So the most RAM the PC ever could have supported was
> 1MB, not so very much more than 640KB. But then out of that 1MB you
> have to carve out some space for things like the BIOS and the video card
> (and sound card, and network, and ISA whatever). So the engineers at
> IBM said that the top 384K of the address space would be wired up to
> peripherals instead of RAM, leaving 640K. It's a hardware limitation,
> not one of software. OpenBSD doesn't use that 384K either.
> 
> And it's not a limitation that only happened once. If you stick 4GB of
> RAM into your PC and boot OpenBSD i386, you'll see that you only get
> about 3GB. Basically the same thing. Space has been reserved for
> peripherals, so you don't get to use the RAM in that space. If you
> boot amd64, you'll get to use it because the memory is remapped higher
> up, above 4GB. (And if you bought an 80386 and booted 32-bit Windows,
> you got to use the memory above 640K too).
> 
> Nobody ever proclaims "3GB of RAM will be enough for everybody!"
> -- random dude at Intel, but that's exactly what happened. The same
> "mistake" was repeated. And then came the various workarounds like
> PAE, just like there were workarounds like expanded memory in the DOS
> days. For that matter, nobody ever says "80 bytes of memory will be
> enough for everybody!" -- John Mauchly (ENIAC)
> 
> There's a lesson in there about foreseeing future requirements, but
> there's also a lesson that should be learned about real world products
> being subject to real world constraints. You go to market with the CPU
> architecture you have, not the CPU architecture you want. I'm reminded
> of Bjarne Stroustrup's comment about there being languages people like
> and languages people use.
> 
> Sorry to spoil the fun.

Not at all. Once upon a time, I made a lot of money using memory managers to 
cram stuff into that 384k, especially Novell Netware drivers. And I cut my 
teeth hacking PDPs in the early 1970s, so I'm fairly familiar with memory 
limits in early machines.

And I still (especially given the context in which Mr. Gates said it) think 
it's funny.

-- 
Jeff Simmons   jsimm...@goblin.punk.net
Simmons Consulting - Network Engineering, Administration, Security
"You guys, I don't hear any noise.  Are you sure you're doing it right?"
--  My Life With The Thrill Kill Kult



Re: Single process needing a lot of memory

2013-12-13 Thread Donald Allen
On Fri, Dec 13, 2013 at 4:46 PM, Jeff Simmons  wrote:
> On Friday, December 13, 2013 01:23:15 pm Ted Unangst wrote:
>> On Fri, Dec 13, 2013 at 12:33, Jeff Simmons wrote:
>> > "Nobody will ever need more than 640k RAM!" -- Bill Gates, 1981
>> [snip]
>
> Not at all. Once upon a time, I made a lot of money using memory managers to
> cram stuff into that 384k, especially Novell Netware drivers. And I cut my
> teeth hacking PDPs in the early 1970s, so I'm fairly familiar with memory
> limits in early machines.
>
> And I still (especially given the context in which Mr. Gates said it) think
> it's funny.

More war stories:

- At Interactive Data Corp., with the technical people coming largely
from the MIT Lincoln Lab and IBM Cambridge, we ran a time-shared
stock-screening service on IBM 360/67s running CP/CMS. CP created
virtual 360s, accurate enough to run OS/360 (I'm not positive of this,
but CP may have been the earliest system to create virtual machines on
which you could run a full-blown OS, à la VMware or QEMU). We ran this
service initially on a 360/67 with 256KB of memory (the memory unit
was the size of at least two refrigerators).

- I don't recall if anyone famous said it, but we certainly thought
that the 18 bits of address space (on a 36-bit-word machine) on the
PDP-10 would be enough for a very long time, maybe forever.

- There's an entertaining talk by Bob Kahn (I'll have to do a little
digging to find it), one of the key visionaries behind the Internet
(TCP/IP was originally called the Kahn-Cerf Protocol) about how they
thought 32-bit IP addresses would be fine for a very long time. Six
months later, Bob Metcalfe's little invention, Ethernet, became
public, and Kahn realized that their thinking about address space,
based on the handful of networks that existed when IP was designed,
was wrong. They bought time by defining network classes that carved up
the 32 bits in different ways, but they knew then, post-Ethernet, that
32 bits wasn't going to suffice.




Re: Single process needing a lot of memory

2013-12-14 Thread Brad Davis
Seymour Cray thought that virtual memory in general was a bug. Anything 
that got in the way of raw performance (including hardware address 
translation and memory protection) was not allowed in his machines.


FYI: The standard amd64 kernel sets MAXDSIZ to 8 gig.  If you need more,
you need to add

option  "MAXDSIZ='((paddr_t)16*1024*1024*1024)'"  # 16 gig process space

to your kernel config file and rebuild/reboot.

Brad Davis
(still wondering how I went from 56k on LSI-11s and 4meg on 686s to 
32gig on FX-8350s...)